
SQL Server 2014 Transactional Replication Partition Switching

I am working with SQL Server 2014 Transactional Replication with tables that have partitions. The source tables are partition switched when they are loaded and so far I have been able to replicate this successfully.

The replication source I am working with performs some dynamic partition management by creating partitions on the partition function/partition scheme during the load into the table. Per the Microsoft documentation, this is not natively supported by transactional replication:

After the Subscriber is initialized, data changes are propagated to the Subscriber and applied to the appropriate partitions. However, changes to the partition scheme are not supported. Transactional and merge replication do not support replicating the following commands: ALTER PARTITION FUNCTION, ALTER PARTITION SCHEME, or the REBUILD WITH PARTITION statement of ALTER INDEX. The changes associated with them will not be automatically replicated to the Subscriber. It is the responsibility of the user to make similar changes manually at the Subscriber.

This is where I am getting hung up. We've worked through most of the problems with loading and switching partitions, and we are now dynamically creating the new partitions, as they come in, on both the replication source (publisher) and replication target (subscriber) through a stored procedure that runs as part of the load to the publisher. The stored procedure exists on both the publisher and subscriber (placed there manually), as sketched below.
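For context, here is a minimal sketch of the kind of partition-management logic the load runs on both publisher and subscriber. All object names are placeholders, not our actual schema:

-- Sketch: extend the partition function/scheme for a new boundary
-- before switching in newly loaded data (names are hypothetical).
CREATE PROCEDURE dbo.EnsurePartition @NewBoundary date
AS
BEGIN
    IF NOT EXISTS (SELECT 1
                   FROM sys.partition_range_values AS prv
                   JOIN sys.partition_functions AS pf
                     ON pf.function_id = prv.function_id
                   WHERE pf.name = N'pfLoadDate'
                     AND prv.value = CAST(@NewBoundary AS sql_variant))
    BEGIN
        ALTER PARTITION SCHEME psLoadDate NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION pfLoadDate() SPLIT RANGE (@NewBoundary);
    END;
END;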

When running a test last night, we saw that partitions were dynamically created on the publisher and subscriber, but the Log Reader Agent raised a new error. At this point, I don't know where to begin tracking this down.

Error messages: The process could not execute 'sp_replcmds' on 'RNCAZFAST2'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: http://help/MSSQL_REPL20011

No catalog entry found for partition ID 72057598976393216 in database 27. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption. (Source: MSSQLServer, Error number: 608)

I started looking up the error, and it appeared to be a bug back in SQL Server 2005, but we are on SQL Server 2014 Enterprise on an Azure VM. Any help is greatly appreciated!


Download the replication snapshot file using FTPS

I have two databases for two companies. Company A's database contains the domain data; the other company pulls the data using snapshot replication. We have used FTP for the communication:

  1. Created an FTP server on IIS in Windows Server 2014
  2. Added the certificate to the server
  3. Created the replication publisher and provided the FTP account information
  4. Replication was working perfectly before SSL (plain FTP)
  5. After setting the certificate in IIS and requiring SSL connections, it stopped working
  6. Because the data belongs to two companies, we want the communication done over FTPS

It is not working, and we don't want to use a VPN. We found a link on MSDN that says:

If you use SSL to secure the connections between computers in a replication topology, specify a value of 1 or 2 for the -EncryptionLevel parameter of each replication agent (a value of 2 is recommended). A value of 1 specifies that encryption is used, but the agent does not verify that the SSL server certificate is signed by a trusted issuer; a value of 2 specifies that the certificate is verified. Agent parameters can be specified in agent profiles and on the command line.

So where can I set this EncryptionLevel=2?
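From what I understand, the -EncryptionLevel parameter goes on the replication agent's command line (each agent runs as a SQL Server Agent job, so you can edit the job step, or set it in an agent profile in Replication Monitor). A hypothetical Distribution Agent job step might end up looking like this (server and database names are placeholders):

-Publisher [PUBSRV] -PublisherDB [CompanyA_DB] -Distributor [PUBSRV]
-Subscriber [SUBSRV] -SubscriberDB [CompanyB_DB]
-SubscriptionType 1 -EncryptionLevel 2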

These are the test cases we tried when connecting to the server:

  1. We changed the server name during login to ftps://Domain.com
  2. We changed the port to 990 and opened it; it still did not work

In short, I want to use FTPS for the communication; plain FTP already works. I am working on SQL Server 2014.

SQL Server Transactional replication - The process could not bulk copy into

So I have set up transactional replication with a publisher (SQL Server 2014), distributor (SQL Server 2014), and subscriber (SQL Server 2008 R2), and initialized it using a snapshot.

Checking in Replication Monitor, I find that the Snapshot Agent has completed successfully and the Log Reader Agent is running.

Now, in the 'Distributor to Subscriber History' tab (just beside the 'Undistributed Commands' tab), I get the following error:

The process could not bulk copy into table '"dbo"."BEAMDATA"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037
End of file reached, terminator missing or field data incomplete
To obtain an error file with details on the errors encountered when initializing the subscribing table, execute the bcp command that appears below. Consult the BOL for more information on the bcp utility and its supported options. (Source: MSSQLServer, Error number: 20253)
Get help: http://help/20253
bcp "LOWIS_BUCT"."dbo"."BEAMDATA" in "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\ReplData\unc\LOWISBUCT_CSSQLDB_BUCT_CSSQLDB_BUCT_ALL_TABLES\20160826064516\BEAMDATA_34#1.bcp" -e "errorfile" -t"\n\n" -r"\n<,@g>\n" -m10000 -SLOWISTSTSQL -T -w (Source: MSSQLServer, Error number: 20253)
Get help: http://help/20253

I thought this could be some kind of data overflow and hence checked the schema of the table at both the publisher and the distributor; they match exactly.

I cleaned up the whole replication setup completely and redid it, but I am still stuck at the very same place for the same table.

Has anyone encountered this before? Ask me if you need more information from my end which I can furnish.

Unexpected 100 GB transaction log growth on a database that is part of transactional replication

I configured transactional replication on SQL Server 2014 / Windows Server 2012. The data file is only 30 MB, but the transaction log has grown abnormally to roughly 95-100 GB, and it grows another 5-7 GB every day.

The database is still in testing mode.

The subscriber of the replication is also the primary replica of an Always On Availability Group.

How can I reduce the size of the transaction log file?

I have taken log backups, and the database is in the full recovery model.

SELECT name, log_reuse_wait_desc FROM sys.databases;

...returns LOG_BACKUP.

Replication is running successfully. Subscriber is receiving changes from publisher.
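For reference, this is the diagnostic I am running (the database name is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDb';       -- what is preventing log truncation?

DBCC OPENTRAN (N'MyDb');    -- oldest active and oldest replicated transaction
DBCC SQLPERF (LOGSPACE);    -- log size and percent used per database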

Re-initializing a transactional replication which was initially synced from backup

I have a transactional replication that was initially synced from a backup. Now I need to add a new table which is really big, so we have decided to back up and restore a fresh copy of the database to the subscriber to re-initialize it.

My question is: in this scenario, should I drop the subscription, do the backup/restore, and then re-add the subscription? Is that the correct way, or is there another way of going about it?
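For reference, my understanding of the 'initialize with backup' path when re-adding the subscription is roughly the following (all names and paths are placeholders):

EXEC sp_addsubscription
    @publication      = N'MyPublication',
    @subscriber       = N'SubscriberServer',
    @destination_db   = N'SubscriberDb',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\backupshare\MyDb.bak';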

Thanks

Transactional replication with updates on the subscriber?

Summary

We need a "two-way" (bi-directional) replication topology that is easy to set up and administer, with minimal downtime when changes are required.

Current solution

We have two SQL Server 2008 R2 Standard instances, on servers SQL1 and SQL2. Server SQL1 has a database db1 with tables a, b, c, d, e, f, g, h, i, j, k. Server SQL2 has a database db2 with tables a, b, c, d, e, f, g, h, i, plus x, y, z.

db1 serves as the primary database of the application app1, where all transactions are executed (mainly an OLTP workload). db2 serves as the replica database of db1, where all the reports and DW tasks are executed (mainly a DW workload). db2 has almost all the data of db1 (tables a-i) plus some extra tables (x, y, z).

We have set up a transactional replication (db1_to_db2) with the following (important) options for the publication:

  - @repl_freq = N'continuous'
  - @immediate_sync = N'true'
  - @replicate_ddl = 1

and for the published articles:

  - @schema_option = 0x000001410DC37EDF

As there is little maintenance window available for both databases, with the described setup we can:

  1. replicate schema changes to the subscriber
  2. add new tables to the publication and use a partial snapshot rather than having to reinitialize the entire subscription

This is working like a charm. But now, a new requirement has come up.

The Problem

Some changes on specific replicated tables, made directly on db2, should be transferred back to db1. Let's say table b on db2 (which references table a and is referenced by table c through FKs) accepts updates on its data that should travel to table b on db1 without getting replicated back to db2. Only UPDATEs are permitted on table b on db2; no INSERTs or DELETEs.

Thoughts

We have tried every possible (simple and easily adopted) solution we could think of:

  1. Setup merge replication instead of transactional.

    • It does not accept partial snapshots for newly added articles.
    • It adds an extra column which means major changes to the application app1.

    FAIL

  2. Transactional replication with updatable subscriber

    • It does not accept partial snapshots for newly added articles.
    • It adds an extra column which means major changes to the application app1.

    FAIL

  3. Update db1.b through an (ON UPDATE) trigger on db2.b:

    • This is a two-phase-commit transaction that could fail due to network connectivity issues (which otherwise do not bother us)
    • There is no easy way to exclude these updates from being replicated back to db2.b through the replication.

    FAIL

  4. Setup a transactional replication from db2 to db1 with table b as the only article.

    • Again there is no way to exclude these transactions from being replicated back to db2. It would be nice to have something like a 'NOT FOR REPLICATION' option for these transactions...

    FAIL

This is as far as we have gone in search of a solution.
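One thing we have not tried yet: sp_addsubscription has a @loopback_detection parameter, which is documented to stop the Distribution Agent from sending transactions back to the subscriber where they originated in bidirectional transactional topologies. A hypothetical sketch for option 4 (names are placeholders):

EXEC sp_addsubscription
    @publication        = N'db2_to_db1',
    @subscriber         = N'SQL1',
    @destination_db     = N'db1',
    @subscription_type  = N'push',
    @loopback_detection = N'true';

If anyone has used this in practice, feedback on its limitations would be welcome.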

Please help

Please state any idea you might have, taking into account our specific needs. Forgive me for being too analytic, but I wanted to give you every piece of information you might need.

SQL Server Replication - Only a week's worth of data

I have a need for a test server that hosts a small subset of data from our production systems. Is it possible to set up a SQL Server replication job that keeps only a week's worth of data, so developers can develop reports?

The goal is to keep a rolling 7 days of data so that the storage requirement stays small.

How to publish a user-defined table type as an article in transactional replication

I am new to replication and am trying to use transactional replication, publishing all data and schema. My stored procedure takes a user-defined table type as input.

CREATE TYPE dbo.TableBParam AS TABLE
(
    Id bigint,
    TableAId bigint NOT NULL,
    FieldB1 nvarchar(50)
);
GO

-- Deadlock was observed on the save query
CREATE PROC dbo.SaveTableB
(
    @val dbo.TableBParam READONLY
)
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.TableB AS T
    USING (SELECT * FROM @val) AS S
        ON (T.Id = S.Id)
    WHEN MATCHED THEN
        UPDATE SET FieldB1 = S.FieldB1
    WHEN NOT MATCHED THEN
        INSERT (TableAId, FieldB1) VALUES (S.TableAId, S.FieldB1);
END
GO

When the Snapshot Agent runs, it gives me the error "Script failed for user defined table type TableBParam".

I couldn't find an option to specify user-defined table types in the article dialog when setting up the local publication. I have also explored the article properties, which didn't help me.
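One workaround I am considering, since table types apparently cannot be selected as articles: create the type manually at the subscriber before the snapshot is applied, for example by posting a script to all subscribers (the publication name and path are placeholders):

-- Run the CREATE TYPE script at each subscriber, or post it via:
EXEC sp_addscriptexec
    @publication = N'MyPublication',
    @scriptfile  = N'\\share\create_TableBParam.sql';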

Appreciate your suggestions.


SQL Replication for Schema changes

I want to configure snapshot replication for all the articles. The problem I am facing: the first time, all the articles got synced to the subscriber, but later, if there is a schema change at the publisher (e.g., a newly added table), that newly created table is not replicated to the subscriber.

So every time there is a schema change at the publisher, I have to manually update the selected articles on the publication to get it replicated to the subscriber.

Is there a way to automate this step, so that whenever there is a schema change at the publisher (new articles added), it is automatically replicated to the subscriber? Thanks
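For reference, the manual step I run today looks roughly like this (publication and table names are placeholders); this is what I would like to automate:

EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'NewTable',
    @source_object = N'NewTable';

-- Then regenerate the snapshot so the subscriber receives the new article:
EXEC sp_startpublication_snapshot @publication = N'MyPublication';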

Postgresql Relation ID

I'm trying to utilize the Postgres 10 logical replication mechanism by reading replication messages in Go code. Most of the logical replication messages refer to something called a "relation ID".

My question is: how do I get the relation IDs for all of the existing tables? I am aware of the "Relation" message type, but I don't know how to trigger those messages.
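As far as I can tell, the relation ID in the pgoutput protocol is simply the table's OID in pg_class, so a query like this should list them (a sketch, assuming ordinary tables):

-- Relation IDs in logical replication messages correspond to pg_class OIDs.
SELECT c.oid AS relation_id,
       n.nspname AS schema_name,
       c.relname AS table_name
FROM pg_class AS c
JOIN pg_namespace AS n ON n.oid = c.relnamespace
WHERE c.relkind = 'r';

My understanding is that Relation messages themselves are only emitted on the stream before the first row change that touches a given table, which would explain why they cannot be requested directly.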

Transactional Replication hangs when huge transactions flow

We are running push transactional replication in our production environment (SQL Server 2014 (x64), default configuration). It hangs when thousands of live transactions flow in, and this continues for hours and eventually days; to recover, we generally stop replication. I am not sure what causes it and don't know how to troubleshoot the issue. Since I am new to replication, can anybody help me with the steps to troubleshoot it?
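For anyone willing to help, these are the diagnostics I believe I can run (a sketch; publication and server names are placeholders):

EXEC sp_replcounters;  -- on the publisher: Log Reader latency and throughput

-- On the distributor: commands still queued for a subscription
EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'PubServer',
    @publisher_db      = N'PubDb',
    @publication       = N'MyPublication',
    @subscriber        = N'SubServer',
    @subscriber_db     = N'SubDb',
    @subscription_type = 0;  -- 0 = push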

How do I replicate a temporal table

I have a temporal table, and I want to replicate it using transactional replication. The history table cannot have the primary key required for transactional replication, and when I try replicating only the current table, replication fails because it cannot insert into the GENERATED ALWAYS AS ROW START or GENERATED ALWAYS AS ROW END columns.

Efficiently bulk upsert unrelated rows

As part of a manual replication process between databases with different but related schemas, for each table, we identify new and modified records in the source database and feed them via a table valued parameter to a stored procedure in the SQL Server destination database. The source database is not SQL Server and is in a different data center.

For each row to be upserted, the stored procedure queries for the primary keys of the related rows already in the destination database.

Currently, we do a single SELECT to get the primary keys, followed by a single MERGE to perform the upsert. There are two aspects of this approach that I believe may not be as efficient as possible.

  1. An implicit transaction unnecessarily wraps the MERGE. The database would remain consistent even with each row being upserted one at a time. If a row's upsert fails, we want the remaining rows to proceed.

  2. MERGE interleaves inserts and updates as it goes through the rows, which is fine, but we don't need this. It would be acceptable to update all the modified rows and subsequently insert all the new rows.

Based on the flexibility we have, the MERGE performance tip to use UPDATE and INSERT seems to apply to our case:

When simply updating one table based on the rows of another table, improved performance and scalability can be achieved with basic INSERT, UPDATE, and DELETE statements.

Do I understand that right? Am I better off with a separate UPDATE and INSERT? And what about the implicit transaction? Is performing a single SELECT, UPDATE, and INSERT over a large batch of rows the most efficient approach, or is it better to take advantage of the ability to process one row at a time by using a loop? Or something else?

In general, what is the most efficient way to upsert a large batch of rows to a SQL Server table when the rows are not transactionally related and updates and inserts need not be interleaved?
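To make the question concrete, here is the shape of the alternative I am considering (a sketch; dbo.Target and the TVP columns are hypothetical stand-ins for our real schema):

-- Separate UPDATE then INSERT instead of a single MERGE.
UPDATE t
SET    t.Payload = s.Payload
FROM   dbo.Target AS t
JOIN   @rows      AS s ON s.Id = t.Id;

INSERT INTO dbo.Target (Id, Payload)
SELECT s.Id, s.Payload
FROM   @rows AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);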

Set up new publication on SQL Server 2014 database which is part of an availability group

I would like to set up a publication (transactional replication) on a SQL Server 2014 database which is part of an availability group. We have a primary and one secondary (non-readable) replica.

On the instance I already have another running publication. The distributor runs outside of the availability group, and even in a different domain.

Problem

Initially, the distributor setup was done on the other replica (which is currently the secondary). The other existing publication was also initially created on that host. This is the first publication I am creating after failing the availability group over to the secondary.

Error

Running the following command on the database to be published:

EXEC sp_replicationdboption @dbname = 'mydatabase',
                            @optname = N'publish',
                            @value = N'true';

I received the following error:

Msg 20028, Level 16, State 1, Procedure sp_MSpublishdb, Line 56 [Batch Start Line 0]
The Distributor has not been installed correctly. Could not enable database for publishing.

I wonder why this occurs, as I already have one publication up and running on another database without any error, and it works properly.

Ideas

Checking the management logs didn't help, as nothing was logged. After cross-checking the distributor, I also saw that the publisher is registered properly.

Any ideas what the issue could be here?
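In case it is relevant, these are the distributor-side checks I know of for publishers in an availability group (a sketch; server names are placeholders):

-- On the distributor:
EXEC sp_helpdistpublisher;  -- is this replica registered as a publisher?

-- Redirect the original publisher to the AG listener:
EXEC distribution.dbo.sp_redirect_publisher
    @original_publisher   = N'OriginalReplica',
    @publisher_db         = N'mydatabase',
    @redirected_publisher = N'AGListener';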

AWS DMS (Database Migration Service) SQL Server to SQL Server not replicating changes

I have two AWS SQL Servers (as RDS instances) in the same VPC; however, one is in a private subnet (the source) and one is in a public subnet (the target). I am replicating FROM SQL Server Standard Edition TO SQL Server Web Edition.

I have set up DMS (Database Migration Service) between them to do a full table load, then replicate ongoing changes. The initial load occurs without issue; however, ongoing changes are not replicated. When I check the table status, I can see that the last-updated date-time is continually updating; however, as the screenshot below shows, there are no inserts or updates being tracked. These figures remain 0.

[Screenshot: DMS table statistics; last-updated timestamps advance, but inserts and updates remain 0]

The status of the migration task is 'Load complete, replication ongoing'. The source database recovery model is FULL (it was SIMPLE, but I realised this wouldn't work, so it has been changed to FULL).

The CloudWatch log just repeats the following:

2019-03-02T23:13:22 [SOURCE_CAPTURE ]I: Throughput monitor: Last DB time scanned: 2019-03-03T10:12:37.947. Last LSN scanned: 00065a3e:00030286:0003. #scanned events: 183. (sqlserver_log_utils.c:4565)
2019-03-02T23:15:22 [SOURCE_CAPTURE ]I: Throughput monitor: Last DB time scanned: 2019-03-03T10:15:04.940. Last LSN scanned: 00065a3e:0003040e:0003. #scanned events: 413. (sqlserver_log_utils.c:4565)
2019-03-02T23:17:22 [SOURCE_CAPTURE ]I: Throughput monitor: Last DB time scanned: 2019-03-03T10:16:54.523. Last LSN scanned: 00065a3e:00030463:0003. #scanned events: 188. (sqlserver_log_utils.c:4565)
2019-03-02T23:19:22 [SOURCE_CAPTURE ]I: Throughput monitor: Last DB time scanned: 2019-03-03T10:19:12.697. Last LSN scanned: 00065a3e:0003053d:0003. #scanned events: 402. (sqlserver_log_utils.c:4565)
2019-03-02T23:21:22 [SOURCE_CAPTURE ]I: Throughput monitor: Last DB time scanned: 2019-03-03T10:21:22.300. Last LSN scanned: 00065a3e:000305d3:0003. #scanned events: 225. (sqlserver_log_utils.c:4565)

This is different from the full load at task start, which logs details of the many tables being copied across. I've stopped/started the task and tried changing the behavior from truncating target tables to drop-and-recreate, but none of this has any effect. There is no 'last failure message' listed in the dashboard, nor is there any CDC start position or recovery checkpoint:

Change data capture (CDC):
  Start position: -
  Recovery checkpoint: -

The task status never seems to change from CHANGE PROCESSING:

server_name:               localhost.localdomain
task_name:                 TIXLNKU6OELULHNTU2G5IABSF4
task_status:               CHANGE PROCESSING
status_time:               2019-03-02 23:25:12
pending_changes:           0
disk_swap_size:            0
task_memory:               927
source_current_position:   00065a3e:000306a5:0003
source_current_timestamp:  2019-03-02 23:25:11
source_tail_position:      000659f3:00000540:0004
source_tail_timestamp:     2019-03-02 08:37:28
source_timestamp_applied:  1970-01-01 00:00:00

There are no errors in awsdms_apply_exceptions.

Can someone please assist as to why replication is not occurring?
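One avenue I am checking, since the source is RDS: the AWS documentation describes enabling MS-CDC on the source database and tables for ongoing replication. A sketch of the commands as I understand them (database and table names are placeholders):

-- On the RDS source database:
EXEC msdb.dbo.rds_cdc_enable_db 'SourceDb';

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;

-- AWS recommends lengthening the capture job polling interval:
EXEC sys.sp_cdc_change_job @job_type = 'capture', @pollinginterval = 86399;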


Replicate TBs of data between AWS RDS and on-premises SQL instances in < 5 mins

I have just started a project with some fairly daunting requirements. Company A uses an application that writes records to Company B. The task is to move/update/sync very large amounts of data (195 tables, 2,500 GB, millions and millions of rows) from an AWS RDS SQL Server instance (Company B), which I believe is SQL Server 2017, to an on-premises instance (Company A), which is SQL Server 2016. The acceptable latency threshold is <= 5 minutes. We only have read access to the source and cannot install anything there, so the traditional means of replication are not available.

There is one central table, which we'll call Table A, that has a primary key defined (TableAID). The rest of the tables have a foreign-key relationship to Table A, and they also have their own primary keys defined, with other relationships between them. The "gotcha" in all of this is that when there is an update to the source, all data is dropped and re-inserted, thereby creating all new primary keys, with the exception of TableAID. So TableAID is the only primary key that persists and can be counted on. The other tables still maintain their relationships, but with different primary keys after an update. This makes updating the target with deltas very difficult given all of the one-to-many relationships. In addition, Company B archives data from time to time, and we at Company A will have to sync existing data while retaining the data that was archived and is no longer part of the data stream.

We explored using SSIS for this but couldn't get anywhere close to the latency expectations. After some digging in other forum topics, I ran across a recommendation to use MS StreamInsight. I am not familiar with it, but if it will work as a means of near-real-time replication, I can get up to speed. I am not tied to any particular technology; having said that, my background is with the MS toolset. Before I invest a lot of time in StreamInsight, I would like to know whether it is a viable solution to my problem. Any other recommendations are also welcome!

Thanks!

Problem re-configuring transactional replication

I am using SQL Server 2008 R2, configured for transactional replication. For some reason, I decided to re-install the main server, so I took a backup of the database from the main server and restored it on the backup server; the backup server then became the main server while the original went away for re-installation.

When my main server was ready, I took the .mdf and .ldf files from the backup server and attached them to the main server. In this way my main server was back again, and it is working fine.

But when I try to re-configure replication, it gives the error: Invalid object name 'dbo.syspublications' (Microsoft SQL Server, Error: 208). During troubleshooting I noticed that the replication system tables are missing from the database.

Please help me fix this issue. I have all these tables in the old database's .mdf and .ldf files, but how can I get them back into the System Tables folder? Is there any other way to solve this issue?
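From what I have read, the replication system tables are normally recreated when publishing is re-enabled, so I am considering something like the following (the database name is a placeholder); corrections welcome:

-- Clear any stale replication metadata, then re-enable publishing,
-- which should recreate syspublications, sysarticles, etc.
EXEC sp_removedbreplication @dbname = N'MyDb';
EXEC sp_replicationdboption @dbname  = N'MyDb',
                            @optname = N'publish',
                            @value   = N'true';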

Replication not working even after adding the missing row at subscriber

I was getting error 20598 (the row was missing at the subscriber). I added the missing row, but replication is still not working. I reinitialized the subscription, but it is still not working. The status in Replication Monitor for this publication is "Not Running, Performance Critical". When I right-click and view details, it says there are 0 commands to apply. There is no error in Replication Monitor, but when I query msrepl_errors I can see the same error, 20598. I am going to apply the missing records again, but I would appreciate suggestions on how to investigate the problem more deeply.
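For the deeper investigation, this is the approach I know of for finding the exact command that keeps failing (a sketch; the xact_seqno values come from msrepl_errors):

-- On the distributor:
SELECT TOP (10) time, error_code, error_text, xact_seqno, command_id
FROM distribution.dbo.msrepl_errors
ORDER BY time DESC;

-- Inspect the replicated command identified above:
EXEC distribution.dbo.sp_browsereplcmds
    @xact_seqno_start = '0x...',  -- placeholder: value from msrepl_errors
    @xact_seqno_end   = '0x...';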

The replication agent has not logged a progress message in 10 minutes

I am trying to configure transactional replication. Snapshot generation took around 1 hour, and after a successful snapshot generation, the Distributor-to-Subscriber step shows the following error: "The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active." How do I resolve this issue?
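For what it's worth, the 10-minute threshold in the message is the distributor's heartbeat interval, which can be lengthened while a large snapshot is being applied (a sketch; the underlying slowness still needs investigating):

-- On the distributor: raise the heartbeat interval to 30 minutes.
EXEC sp_changedistributor_property
    @property = N'heartbeat_interval',
    @value    = 30;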

Thanks

What is the best solution for an online secondary database in SQL Server 2017, other than Always On?

I have a large database, about 600 GB, and I need a secondary online database to execute some heavy queries (SELECTs). I am engaged in an important project with a fixed deadline, so I implemented transactional replication as an emergency solution; after that, I'll work on Always On as the final solution.

I have a daily process that adds about 3 GB of data to the database, and there are many INSERT, UPDATE, and DELETE statements on the SQL Server side. Transferring the transactions takes a long time (~50 minutes).

1) Is it reasonable to use transactional replication?
2) Is there any solution to reduce this time?
3) Have I made the best decision?

If the secondary database can be brought up to date in less than 1 minute, that will be useful for us.
