
SQL Server Replication

We had an issue with transactional replication: a large update (7.5 million rows) was run against a 33 million row table. Replication converted that into 7.5 million individual UPDATE statements. When my alerting notified me that the publication had fallen behind our latency threshold, I started to investigate.
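For anyone hitting the same thing, the backlog can be measured from the distribution database; a minimal sketch, where all server, database, and publication names are hypothetical placeholders:

    -- Run in the distribution database: counts commands queued for a
    -- subscriber that the Distribution Agent has not yet delivered.
    USE distribution;
    EXEC sp_replmonitorsubscriptionpendingcmds
        @publisher         = N'PUBSERVER',
        @publisher_db      = N'PubDB',
        @publication       = N'MyPublication',
        @subscriber        = N'SUBSERVER',
        @subscriber_db     = N'SubDB',
        @subscription_type = 0;  -- 0 = push, 1 = pull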

I worked out when the update had been executed and that the Distribution Agent would have taken a few days to chew through those statements, so I decided to see if we could skip over the commands it was trying to process. Using the distribution database's system tables and stored procedures, I was able to determine the window during which the update ran. Once I found the last xact_seqno of that window, I stopped the Distribution Agent and applied the update manually to the subscriber database. I then executed sp_setsubscriptionxactseqno to skip past all of those 7.5 million commands. When I restarted the Distribution Agent, it seemed to have worked and it processed the remaining transactions.
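What I ran was along these lines (a sketch from memory; the time window, the names, and the literal sequence number are placeholders):

    -- Step 1, in the distribution database: find the transactions queued
    -- during the window when the big update ran, ordered by xact_seqno.
    SELECT t.xact_seqno,
           t.entry_time,
           COUNT(*) AS command_count
    FROM dbo.MSrepl_transactions AS t
    JOIN dbo.MSrepl_commands     AS c
      ON  c.publisher_database_id = t.publisher_database_id
      AND c.xact_seqno            = t.xact_seqno
    WHERE t.entry_time >= '20240101 02:00' AND t.entry_time < '20240101 03:00'
    GROUP BY t.xact_seqno, t.entry_time
    ORDER BY t.xact_seqno;

    -- Step 2, at the SUBSCRIBER in the subscription database, after stopping
    -- the Distribution Agent and applying the update by hand: tell the agent
    -- to resume after the last sequence number found above.
    EXEC sp_setsubscriptionxactseqno
        @publisher    = N'PUBSERVER',
        @publisher_db = N'PubDB',
        @publication  = N'MyPublication',
        @xact_seqno   = 0x0000ABCD000000F00001;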

For good measure I used Redgate's SQL Data Compare to check whether the data was out of sync, and I was missing about 24 rows (which may never have existed at the subscriber in the first place, since I didn't set the subscription up originally).
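If you don't have a compare tool handy, a rough consistency check can be done in T-SQL over a linked server (a sketch only; SUBSERVER is a hypothetical linked server, and checksums can miss some kinds of differences):

    -- Compare row count and an aggregate checksum of the published table
    -- against the subscriber's copy. Matching pairs suggest, but do not
    -- prove, that the two copies are in sync.
    SELECT COUNT(*)                         AS row_count,
           CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
    FROM PubDB.dbo.BigTable
    UNION ALL
    SELECT COUNT(*),
           CHECKSUM_AGG(BINARY_CHECKSUM(*))
    FROM SUBSERVER.SubDB.dbo.BigTable;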

My question is: was that the right way to fix it? How can you be sure you have the correct next xact_seqno? Should the transactions be ordered by xact_seqno or by entry_time?
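For context, the lookup I used to find the next sequence number looked roughly like this (a sketch; the literal seqno is a placeholder, and on a distributor serving multiple publisher databases you would presumably also filter on publisher_database_id):

    -- In the distribution database: first transaction after the skipped
    -- range. xact_seqno is a varbinary derived from the log sequence
    -- number, so ordering by it follows commit order at the publisher.
    SELECT TOP (1) xact_seqno, entry_time
    FROM dbo.MSrepl_transactions
    WHERE xact_seqno > 0x0000ABCD000000F00001
    ORDER BY xact_seqno;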

