Channel: StackExchange Replication Questions
Viewing all 17268 articles

MySQL server with several databases: one DB is a cluster replica and the others are not from the cluster


Can you suggest how to handle a situation where we have one MySQL server that should host one read replica for a cluster plus some standalone databases? We would then work with data from all the databases using stored procedures.


Fire triggers in replica nodes (cassandra)


I am using a Cassandra 4-node cluster with full replication in all nodes.

I have defined a trigger on a table. However, when I update a row in this table, the trigger fires only on the local node.

Is there any way to fire this trigger in all nodes (based on replication)?

RAISERROR when replication falls behind


I have a third-party incident management tool that creates tickets from errors in the SQL logs.

Occasionally replication stops replicating without creating errors in the SQL logs.

I want to create a job that runs every 10 minutes or so to check on replication and then uses RAISERROR to start our ticketing (and alerting) process.

There are a number of stored procedures to Programmatically Monitor Replication that will show issues, and there is also a pretty good solution in Monitoring Transactional Replication in SQL Server (by Francis Hanlon, 11 April 2013) that is a good start on what I want, but it would need to be tweaked a bit to meet my needs. Before I start reworking Francis's solution, I am wondering if there are any other solutions I might leverage.

I have searched around here and on Google, and Francis's solution is the only one I found that comes close to in-house monitoring without new third-party tools.

Are there any solutions to monitor SQL replication in real time with T-SQL?

  • SQL 2008 to SQL 2019
  • Mostly transactional replication
  • Same AND cross server replication
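The monitoring tables in the distribution database can feed exactly this kind of job. Below is a minimal sketch, assuming the distribution database is named distribution and that writing to the SQL error log (WITH LOG) is what your ticketing tool picks up; the status codes come from the replication monitor tables, but the threshold logic is a placeholder to adapt:

```sql
-- Run on the distributor from a SQL Agent job every 10 minutes.
-- MSreplication_monitordata is populated by the replication monitoring
-- refresher; status 5 = retrying, 6 = failed (assumed meanings, verify
-- against your distributor before relying on them).
USE distribution;

IF EXISTS (SELECT 1
           FROM dbo.MSreplication_monitordata
           WHERE status IN (5, 6))
BEGIN
    -- WITH LOG writes the message to the SQL Server error log,
    -- which is what the third-party ticketing tool watches.
    RAISERROR('Replication agent in retrying/failed state - check Replication Monitor',
              16, 1) WITH LOG;
END;
```

A tracer token (sp_posttracertoken plus sp_helptracertokenhistory) is another documented option if you also want latency numbers rather than just agent state.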

Transactional replication with updates on the subscriber?


Summary

We need a "two-way" (bi-directional) replication topology which should be easy to set up and administer, with minimal downtime when changes are required.

Current solution

We have two SQL Server 2008 R2 Standard instances, on servers SQL1 and SQL2. Server SQL1 has a database db1 with tables a,b,c,d,e,f,g,h,i,j,k. Server SQL2 has a database db2 with tables a,b,c,d,e,f,g,h,i, x,y,z.

db1 serves as the primary database of the application app1, where all transactions are executed (mainly OLTP workload). db2 serves as the replica database of db1 where all the reports and DW tasks are executed (mainly DW workload). db2 has almost all data of db1 (tables a-i) and some extra tables (x,y,z).

We have set up a transactional replication (db1_to_db2) with the following (important) options for the publication:

  • @repl_freq = N'continuous'
  • @immediate_sync = N'true'
  • @replicate_ddl = 1

and for the published articles:

  • @schema_option = 0x000001410DC37EDF

As there is little maintenance window available for both databases, the described setup lets us:

  1. replicate schema changes to the subscriber
  2. add new tables to the publication and use a partial snapshot rather than having to reinitialize the entire subscription

This is working like a charm. But now, a new requirement has come up.

The Problem

Some changes on specific replicated tables that are updated on db2 directly should be transferred back to db1. Let's say that table b on db2 (which references table a and is referenced by table c through FKs) accepts updates on its data that should travel to table b on db1 and not get replicated back to db2. Only updates are permitted on table b on db2, not inserts or deletes.

Thoughts

We have tried every possible (simple and easily adopted) solution we could think of:

  1. Setup merge replication instead of transactional.

    • It does not accept partial snapshots for newly added articles.
    • It adds an extra column which means major changes to the application app1.

    FAIL

  2. Transactional replication with updatable subscriber

    • It does not accept partial snapshots for newly added articles.
    • It adds an extra column which means major changes to the application app1.

    FAIL

  3. Update db1.b through a (ON INSERT) trigger on db2.b:

    • This is a two-phase-commit transaction that could fail due to network connectivity issues (which otherwise do not bother us)
    • There is no easy way to exclude these updates from being replicated back to db2.b through the replication.

    FAIL

  4. Setup a transactional replication from db2 to db1 with table b as the only article.

    • Again there is no way to exclude these transactions from being replicated back to db2. It would be nice to have something like a 'NOT FOR REPLICATION' option for these transactions...

    FAIL

This is as far as we have gone in search of a solution.

Please help

Please share any idea you might have, taking into account our specific needs. Forgive me for being so analytical, but I wanted to give every piece of information you might need.
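One avenue that may be worth testing for option 4 is the @loopback_detection parameter of sp_addsubscription, which exists precisely to stop a bidirectional transactional pair from echoing changes back. A hedged sketch with placeholder names (db2_to_db1 and the server names are assumptions), not verified against your schema options:

```sql
-- On SQL2: publish only table b back to db1, updates only.
EXEC sp_addpublication
     @publication = N'db2_to_db1',
     @repl_freq   = N'continuous',
     @status      = N'active';

EXEC sp_addarticle
     @publication   = N'db2_to_db1',
     @article       = N'b',
     @source_object = N'b',
     @ins_cmd       = N'NONE',   -- inserts are not permitted on db2.b
     @del_cmd       = N'NONE';   -- deletes are not permitted either

-- Push subscription with loopback detection, so commands that
-- originated on the other side are not sent back to it.
EXEC sp_addsubscription
     @publication        = N'db2_to_db1',
     @subscriber         = N'SQL1',
     @destination_db     = N'db1',
     @loopback_detection = N'true';
```

Loopback detection is documented for bidirectional transactional replication; whether it coexists cleanly with your partial-snapshot workflow would need a test in your environment.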

Replication recommendation


I have been asked to investigate the viability of using SQL Server replication technology for one of our upcoming projects. In this project there will basically be one central server located at corporate HQ (say, Hub) and multiple servers located at remote locations (say, Nodes). All Nodes will get data through our application, and the Hub will get data injected directly into SQL Server. I have been given 2 business requirements that need to be satisfied by this solution:

  1. Data to be synchronized between hub and nodes both ways.
  2. Data from one node NOT to appear on another node.

So based on those requirements, I chose to try merge replication. For a test, I set up the Hub and 2 Nodes. The Hub is also acting as distributor. I then created a publication with all objects of the database and registered both Nodes as subscribers. And this setup works... almost. The issue I am facing is that if I update data from Node1, it appears on the Hub as well as on Node2 (and vice versa). That doesn't satisfy our 2nd requirement.

I am familiar with replication but no master at it, and I would really appreciate it if someone could tell me whether something is wrong with my approach, or whether merge replication is simply not a good choice for my case.

I am using SQL Server 2014 Standard Edition for all 3 instances, installed on Windows Server 2012 R2.
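For requirement 2, an unfiltered merge publication ships everything to every subscriber; what is normally used instead is a parameterized row filter, so each Node's subscription only receives its own slice. A sketch under the assumption that every filtered table carries a node identifier column (NodeId and the other names here are hypothetical):

```sql
-- On the Hub: filter the article by the value each subscriber's
-- Merge Agent reports for HOST_NAME().
EXEC sp_addmergearticle
     @publication         = N'HubPub',
     @article             = N'Orders',
     @source_object       = N'Orders',
     @subset_filterclause = N'NodeId = HOST_NAME()';
```

Each Node's Merge Agent is then started with its own -Hostname value (or the HostName property of the subscription), so HOST_NAME() resolves to that node's identifier and rows belonging to other nodes are never merged down.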

Never changing the password for distributor_admin for local instance only?


Per Does distributor_admin need sysadmin?, distributor_admin needs SA, and per Can the account name of 'distributor_admin' be changed?, the name cannot be changed.

The password is changed with sp_changedistributor_password; see sp_changedistributor_password (Transact-SQL) for details.

When the distribution database is contained on a local instance, a random password is generated and configured automatically. Source

What this all means is that when you have replication on a local instance (i.e. a reporting database), no one knows what the password is. It would seem to be impossible to compromise the password. Assuming you are watching for and reporting on failed login attempts, you would know if anyone was trying to brute-force the password. This leads to an argument that there is no good reason to actually change the password on a regular basis. You might want to change it on day one, if you don't trust the random password generation, but even that could add more risk than it removes, as now the password has been seen and processed with human contact.

Is there a good reason to change the password for distributor_admin for local instance only?
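If a policy does force a rotation despite the above, the documented procedure makes it a one-liner on the distributor (the password value below is a placeholder):

```sql
-- Changes distributor_admin's password on the Distributor; with a local
-- (single-instance) distribution setup no other server needs updating.
EXEC sp_changedistributor_password @password = N'<new strong random value>';
```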

How to replicate big transactional tables with IBM InfoSphere CDC?


I have a big transactional table with 80,000,000 records and about 1,000 tps in Informix. How can I replicate it without losing data?

  • Using load/unload to skip the refresh before mirroring ends with data loss.

  • Using refresh before mirroring, the subscription stops after replicating 12,000,000 records with SQL error 242.

High availability in two node Elasticsearch cluster


We have a requirement to run a highly available Elasticsearch cluster with only two nodes. I understand that for reliable HA it is recommended to have an odd number of nodes (3+), but we cannot do that because of some particular environment restrictions.

In ES 6.x we could achieve that by setting the discovery.zen.minimum_master_nodes config option to 1; this allows the remaining node to stay up in case of a single node failure ( https://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/ ). This comes at the cost of potential data inconsistency, but that's something we know how to deal with.

Now, in ES 7.x, the discovery.zen.minimum_master_nodes option has no effect anymore. Is there any way to achieve similar behavior with ES 7.x on a two-node cluster, i.e. keep high availability at the cost of losing some data integrity when split-brain occurs?


Status of PostgreSQL replication after pause and resume


To back up the PostgreSQL database, we set up a slave server which uses streaming replication.

We found the commands pg_xlog_replay_pause() and pg_xlog_replay_resume(). So we thought we would use these commands to pause the replication (the replaying of the xlogs), start pg_dump, and then resume the replication.

How do I find out when the PostgreSQL database is again in sync after the "resume"? Usually I used this command to see the "lag":

SELECT extract(seconds from (now() - pg_last_xact_replay_timestamp())) AS time_lag;

If I use it after pg_xlog_replay_pause(), it shows only a "lag" of seconds. So it seems this is not the right query here. Is there a good solution for this?

One solution would be a cron job on the master that updates a timestamp every minute; on the slave we could then check whether the timestamp is more or less up to date. But I think there should be a better solution from PostgreSQL itself.
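A sketch of a catch-up check that avoids timestamps altogether, using the pre-10 function names that match pg_xlog_replay_pause() above: with streaming replication, the standby is back in sync once everything it has received has also been replayed.

```sql
-- On the standby, after pg_xlog_replay_resume():
-- compares the last WAL position received from the master with the
-- last position actually replayed; equal means caught up.
SELECT pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
       AS caught_up;
```

Strictly speaking this tells you the standby has replayed everything it has received, which with an active streaming connection tracks the master closely; the timestamp-based lag query remains useful as a cross-check.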

Splitting Snapshot files with MaxBCPThreads for Transactional Replication


I've just set up a publication, and I'm attempting to get the snapshot to apply faster. So far the Distribution Agent is respecting the MaxBCPThreads setting, but the Snapshot Agent is not. I'm expecting it to split the files so the threads of the Distribution Agent can go and grab the data, but it doesn't appear to do that any time I snapshot.

Some possible solutions that I've seen online were to update the agent profile (I originally just edited the agent job step with the flag, and that worked for the Distribution Agent but not for the Snapshot Agent).

I tried updating the agent profiles, and that hasn't made any difference. I also found people saying that you should have sync_method set to native, so I checked my script; I had already created the publication with native mode specified.

I'm wondering if I'm missing a specific setting that MaxBCPThreads needs in order to split the bcp output into multiple files.

I thought I had solved my own issue: it looks like you have to have a clustered index with a distinct set of ranges to get SQL Server to split the files into partitions. But right now my index seems to have 0s for all ranges.

DBCC SHOW_STATISTICS


After additional testing, I've found that this seems to work only on replicated tables. If you replicate based on (indexed) views, then it seems you only get 1 bcp file instead of the partitioned files you'd get from normal tables.

The question is: Why doesn't SQL replication partition bcp files for Indexed Views like it does with normal tables?

I'm replicating the indexed view itself without the table ("Indexed View as a Table"). The reason is that I have to join identifying information for the database for the subscriber to use for other things. The only way I've found to do it so far is to manually split my views using BETWEEN, which isn't particularly efficient. I'm hoping I can get SQL Server to do what I'd expect when replicating a normal table.

ArangoDB - Replication


I am using ArangoDB and have created a collection with a replication factor of 5 (minimum replication factor 3). I'm not sure I understand correctly how it works. What would happen if, for example, 2 servers are temporarily down and come back after some time? I believe it would still work without the read-only error being triggered. But are these 2 servers going to be synced later, once they become live again? Thank you very much in advance for any explanation you can provide.

Amazon CDC replication from MySQL Aurora to SQL Server


I need MySQL Aurora to SQL Server data replication with Amazon RDS. I am not able to find any document which helps me with this.

Steps for setting up MySQL Aurora to SQL Server replication:

Step 1: I created a MySQL RDS instance on my Amazon account.

Step 2:

I need to enable CDC (change data capture) on the MySQL server, but I am not able to find any document which helps me to enable CDC on MySQL Server.

Please help me on this,

Thanks

Aman middha

Many-to-One replication / Transactional Replication


I have several PostgreSQL DBs in different geographical locations (local sites).

  • Each local site DB has the same schema, but unique data. For example, take a table with columns Site_ID, Department_ID, Department_Name. Site_ID is unique for each site.

  • I want to collect all the data from the local site DBs into a centralised DB (PostgreSQL again) which acts as a data warehouse.

  • The corresponding example table on the centralised DB will have the same columns as above. All local site data will go into this table, each site's data designated by its Site_ID, of course.

Question: how can this be achieved with PostgreSQL replication methods (streaming / multi-master / UDR / BDR / etc.)? I see this can be done with SQL Server using transactional replication. What is the best way to achieve this functionality with PostgreSQL?

Restriction: The local sites can make only outgoing network connections (i.e. no inbound connections due to firewall restrictions)
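Assuming an upgrade to PostgreSQL 10+ is an option, built-in logical replication maps well onto this many-to-one shape: one publication per site, one subscription per site on the warehouse. The connection details and names below are placeholders. Note that the subscribing side (the central DB) initiates the connection to each site, which conflicts with the outbound-only restriction unless a tunnel or relay initiated from the site is used.

```sql
-- On each local site DB: publish the tables to be collected.
CREATE PUBLICATION site_pub FOR TABLE departments;

-- On the centralised warehouse, one subscription per site; rows land in
-- the warehouse's own departments table, distinguished by Site_ID.
CREATE SUBSCRIPTION site1_sub
    CONNECTION 'host=site1.example.com dbname=sitedb user=repl password=...'
    PUBLICATION site_pub;
```

Because every site writes disjoint Site_ID ranges, the subscriptions do not conflict on the central table; the same idea can be built on pglogical for older versions.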

Column cannot be converted from type 'int' to type 'bigint(20) unsigned'?


I have a Percona MySQL cluster, with a master and a few slaves. Each of them is running MySQL "Ver 14.14 Distrib 5.5.40-36.1". Replication is row-based.

I ran an ALTER query on one of the tables on the slave only. The plan is to run this query on all slaves and then do a master switch, as we cannot afford to lock tables on the master.

The query is:

ALTER TABLE order_item_units 
MODIFY parent_id BIGINT(20) unsigned ;

After this, when I checked the slave using SHOW SLAVE STATUS, I saw that replication was broken with the following error:

Column 3 of table 'database_name.order_item_units' cannot be converted from type 'int' to type 'bigint(20) unsigned'

And when I checked the column, it had indeed been converted to BIGINT.

And now I am not able to fix this. I did STOP SLAVE and START SLAVE; that didn't help. I did STOP SLAVE and a MySQL restart; that didn't help. I did a skip counter; that didn't help either.

Either the column should not have been converted, in which case the error would have been justified, or maybe there should not have been any error at all. And if the column did get converted, then why the error?

Any clue what am I missing here?
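One knob that may explain this is slave_type_conversions (available from MySQL 5.5.3): with row-based replication and an empty value, no attribute promotion between column types is allowed at all, so an int event from the master hitting a bigint unsigned column on the slave fails even though the DDL itself succeeded. A hedged sketch for the transition window; whether ALL_NON_LOSSY alone suffices given the signedness change is something to test:

```sql
-- On each already-altered slave:
STOP SLAVE;
-- Allow promotion of smaller types into larger ones while master (int)
-- and slave (bigint unsigned) disagree; 'ALL_LOSSY,ALL_NON_LOSSY' may be
-- needed because of the signed-to-unsigned change.
SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';
START SLAVE;
```

Once the master switch is done and all servers have the new column type, the variable can be reset to its empty default.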

What is the difference between 'two way transactional replication' and 'peer to peer replication'?


Can someone explain in detail the exact differences between two-way transactional replication and peer-to-peer replication?


mysql master-master data replication consistency


As we know, MySQL does replication asynchronously. I heard that I need some extra plugins to do synchronous replication.

So let us consider the situation of asynchronous replication: the master writes events to its binary log but does not know whether or when master2 has retrieved and processed them. With asynchronous replication, if master1 crashes, transactions that it has committed might not have been transmitted to master2.

My question is whether these transactions will finally be replicated to master2 later, when master1 starts up again. If not, then it is a big inconsistency problem.

My question is the same for master-slave replication, when the master goes down in the same situation.

Do I need some special configuration parameter to make it happen automatically?

Or Do I have to manually dump out the data from master1 and import to master2 etc?

======

Update: I probably misused the word "crashes" above; I just want to refer to the situation where master1 fails to sync the data to the others for some time period. The replies (thanks) below cover two cases: a real unrecoverable crash, due to disk failure for example, or a temporary offline problem due to a network issue etc.
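On the "extra plugins" part: the semi-synchronous replication plugin has shipped with MySQL since 5.5 and narrows (but does not fully close) the window described above, because a commit on master1 does not return to the client until at least one replica acknowledges receipt of the events. A sketch, assuming the standard bundled plugin files on Linux (.dll on Windows):

```sql
-- On master1 (the side taking writes):
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On master2 (acting as the receiving side):
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

Note that "acknowledged" means received and logged by the replica, not yet applied; for a temporary outage, ordinary asynchronous replication does catch up by itself from the binary log once the link is restored, as long as the logs have not been purged.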

Distribution agent can't connect to subscriber


I have two servers on different untrusted domains. Server A is the publisher and is running SQL Server 2008 R2. Server B is the subscriber and is running SQL 2008 R2 Express. Since the servers are on separate domains without a trust relationship, I am using pass-through authentication to connect to each server. This involves creating a local windows account on each server with the same username and password and then using windows authentication to connect to the remote server. Using this method, I am able to connect Server A to Server B and vice versa in SQL Server Management Studio. I am also able to create a transactional publication on Server A and create a push subscription to it on Server B.

However, when I open up the View Synchronization Status window, I get the message "The process could not connect to Subscriber 'Server B'." Opening up Replication Monitor gives me the following error messages:

The process could not connect to Subscriber 'Server B'. (Source: MSSQL_REPL, Error number: MSSQL_REPL0)

Named Pipes Provider: Could not open a connection to SQL Server [53]. (Source: MSSQLServer, Error number: 53)

A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (Source: MSSQLServer, Error number: 53)

Login timeout expired (Source: MSSQLServer, Error number: HYT00)

Everything else that I have read about this error says that it's a permissions issue, but I don't think this is the case. Just to make sure there weren't any permissions issues, I made the Windows accounts that I am using for the pass-through authentication local administrators on each server, db_owner on both the publisher and subscriber databases, and sysadmin on each instance of SQL Server.

Does anyone know if something other than permissions could be causing this error? What confuses me is that the servers are clearly able to connect to each other using the pass-through authentication, but the connection still fails at the distribution agent.

Best way to transfer a few tables from reserve server to production


We have two MS SQL Servers (2012 Standard Edition on Windows Server 2012) which are located in the same subnet. One is the production server; the other is the reserve server.

Currently there is transactional replication between two databases on these servers, where the production server is Publisher and Distributor and the reserve server is Subscriber.

On the production server, some tasks run every night which insert into (or update) about 5 tables, one of which is about 6 GB.

We would like to move these tasks to the reserve server and create another database there with the same schema. Then, when the tasks upload data to that other database, we need to synchronize it with the production database on the production server.

One note: to keep the data current, we must also include the other database on the reserve server in replication, as a Subscriber from the production server.

As one way, we are considering a linked server:

  1. Truncate the table on production (one table has FKs, so we can't)
  2. Insert data from the other database

Edit: how about a linked server and MERGE functionality?

What do you think: is there a better way?

Temporarily removing replication from Publisher


Is it possible to temporarily remove Transactional Replication from a publisher database in order to allow a release to be executed against the database?

If I stopped the log reader agent would I be able to perform DDL/DML changes against the publisher database without error?

The product vendor has a bespoke tool which issues the T-SQL commands against the database for the release, which will include dropping and recreating constraints, keys, indexes etc. I need to make sure releases can happen without error caused by replication - the database needs to allow the changes.

Publisher and Distribution db live on the same server. Both publisher and subscriber are SQL 2014 Enterprise. I am replicating the entire database.

Help is appreciated. Thanks, Peter

Update: Yes I am replicating schema changes

Update: Is this the answer? https://stackoverflow.com/questions/15673829/resume-replication-after-restoring-publication-database

Update: I've tested our vendor's release on a published database and found that dropping stored procedures is the first issue we see.
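Given that the vendor tool drops replicated objects, the usual pattern is a scripted teardown-and-rebuild around the release window rather than just stopping the log reader. A sketch with placeholder names; it assumes the publication was scripted out beforehand (Generate Scripts on the publication in SSMS), so the rebuild is a replay of that script plus a fresh snapshot:

```sql
-- 1. On the publisher, before the release: drop subscriptions, then
--    the publication, which unmarks the objects for replication.
EXEC sp_dropsubscription
     @publication = N'MyPub',
     @article     = N'all',
     @subscriber  = N'all';
EXEC sp_droppublication @publication = N'MyPub';

-- 2. Run the vendor release (it can now drop/recreate procedures,
--    constraints, keys, indexes without replication errors).

-- 3. Replay the saved publication/subscription script and let the
--    Snapshot Agent reinitialize the subscriber.
```

The reinitialization cost of step 3 is the trade-off; for a full-database publication of any size, schedule the release when the snapshot window is acceptable.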

Identity management in Transaction replication with updatable subscription


I migrated my publisher DB to a new server yesterday. For the migration, what I did was:

  1. removed subscriptions and publication completely
  2. backed up DB on which publication was configured
  3. restored DB to new publisher server
  4. Ran sp_removedbreplication on newly restored DB to give a clean start
  5. recreated publication and subscribers on my respective servers

I was able to set up replication properly and it came into sync normally. But when users tried to insert new records at the subscriber, I got primary key violation errors for a lot of tables. The primary key is on an identity column for most of my tables, with NOT FOR REPLICATION set to true on both the publisher and subscriber end. Upon checking the identity check constraints on the subscriber, I saw that they had older ranges than my current max values. The subscriber wasn't getting new ranges even though automatic identity range management was set. I had to manually RESEED the identity on the subscriber to make it work, and had to go through the manual process of reseeding it for all tables. My environment is:

Replication type: Transaction replication with updatable subscription with immediate updating

  1. 1 publisher -- SQL server 2008 R2 + SP2
  2. 1 distributor -- SQL server 2008 R2 + SP2
  3. 2 subscribers -- SQL server 2008 R2 + SP2

I have below questions:

  1. What could be the reason for automatic identity management not working?
  2. How to fix this problem permanently? Is there a script available which can check my max values on publisher and subscribers and then RESEED it accordingly?
  3. What is the fastest way to fix this?
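For questions 2 and 3, two documented pieces may help; the table and column names below are placeholders. sp_adjustpublisheridentityrange hands out a fresh range on the publisher side, and an emergency reseed on the subscriber can be driven from the table's current maximum:

```sql
-- On the publisher, per published table whose range ran dry:
EXEC sp_adjustpublisheridentityrange @table_name = N'MyTable';

-- On the subscriber, emergency fix: reseed past the current maximum.
-- Run SELECT MAX(Id) FROM dbo.MyTable first, then plug the value in:
DBCC CHECKIDENT (N'dbo.MyTable', RESEED, 100001);
```

Wrapping the MAX() lookup and the reseed in a cursor over the published articles (sysarticles on the publisher) would give the "check and RESEED everything" script asked for in question 2; the root cause is likely that restoring the backup carried stale range metadata despite sp_removedbreplication.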

