Channel: StackExchange Replication Questions
Viewing all 17268 articles

Bidirectional Replication Push and Pull with SQL Server


Please bear with me, I'm a developer working with a client without access to a strong DBA. I have a question about bi-directional replication with the following setup and requirements:

  • MSSQL Server Database A: the back-end of a web application in a remote environment
  • MSSQL Server Database B: a mirror of Database A that resides in a DMZ

Databases A and B can be updated independently from each other but need to stay in sync. Normally this would be a good candidate for bi-directional replication because the business rules are such that conflicts will not occur in the replicated tables.

However, the client has a security requirement that no transactions can be initiated from the remote environment into the DMZ.

Can I set up bi-directional replication so that the DMZ Database B pushes its changes to Database A and pulls Database A's changes in? Or do you suggest another strategy - replication or otherwise?
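One arrangement that might satisfy this (a sketch only, with hypothetical server, database and publication names, and omitting snapshot-agent and security setup): let the direction of each subscription decide where the agent runs. A pull subscription's Distribution Agent runs at the subscriber, and a push subscription's agent runs at the publisher's distributor, so if B both pulls from A and pushes to A, every connection is opened from the DMZ side.

```shell
# Sketch -- hypothetical names (A, B, PubA, PubB, AppDb); a real setup
# also needs sp_addpullsubscription_agent, snapshot agents and security.

# A -> B: subscribe B to A's publication as a PULL subscription
# (Distribution Agent runs on B and connects out to A).
sqlcmd -S B -d AppDb -Q "EXEC sp_addpullsubscription
    @publisher = N'A', @publisher_db = N'AppDb', @publication = N'PubA';"

# B -> A: publish on B and PUSH to A
# (Distribution Agent runs at B's distributor and connects out to A).
sqlcmd -S B -d AppDb -Q "EXEC sp_addsubscription
    @publication = N'PubB', @subscriber = N'A',
    @destination_db = N'AppDb', @subscription_type = N'push';"
```

Whether this satisfies the security team also depends on where each publication's distributor lives; keeping both distributors on B keeps all agent traffic originating in the DMZ.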

Thanks!


rebar erlang package mnesia replication dynamically


Problem statement: I have created an Erlang release package using Rebar3. It is a tarball that just needs to be unpacked on the target OS, and it is up and running once you start the node. There are two nodes of the same kind on two different Linux boxes, and their Mnesia databases are not replicated. How do I make them replicated?

mnesia:change_config(Config, Value)

This function would be fine if the other node had a completely empty schema, but that is not the case for me.

How can I resolve this problem?
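For what it's worth, the usual recipe (a sketch, with hypothetical node names, cookie, and table list) is to point the joining node at the existing one with mnesia:change_config(extra_db_nodes, ...), persist the schema locally, and then add copies of each table:

```shell
# Run on the second box; 'a@host1' is the node that already has the data.
erl -name b@host2 -setcookie secret -eval "
    mnesia:start(),
    {ok, _} = mnesia:change_config(extra_db_nodes, ['a@host1']),
    mnesia:change_table_copy_type(schema, node(), disc_copies),
    [mnesia:add_table_copy(T, node(), disc_copies) || T <- [my_table]],
    init:stop()."
```

The same calls can of course live in your release's startup code instead of an -eval one-liner.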

MySQL GTID Replication of new external slave on google cloud platform


I am using google cloud platform, and have a MySQL replica set as a failover attached to the master.

I would like to build an external replica that lets us create other databases for reporting and so on, without hogging resources.

I am following this guide, and am hoping someone can provide some clarification. https://cloud.google.com/sql/docs/mysql/replication/configure-external-replica

I keep getting this error from SHOW SLAVE STATUS after starting the slave:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

The help article for creating the mysqldump says not to lock the tables: https://cloud.google.com/sql/docs/mysql/import-export/creating-sqldump-csv#ext

There is a flag that outputs the binary log information: --master-data=1

1. If the tables are not locked, how will the data be consistent when you import it into the slave?

mysqldump --databases [DATABASE_NAME1, DATABASE_NAME2, ...] -h [INSTANCE_IP] -u [USERNAME] -p \
  --master-data=1 --flush-privileges --hex-blob --skip-triggers --ignore-table [VIEW_NAME1] [...] \
  --default-character-set=utf8 > [SQL_FILE].sql

Most other articles recommend running mysqldump with this flag to get a consistent dump:

--single-transaction

2. Any recommendations to avoid the 1236 error?
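For what it's worth, a dump taken with --single-transaction (a sketch below, with the guide's placeholder names) gives a consistent InnoDB snapshot without LOCK TABLES, and with GTIDs enabled, --set-gtid-purged=ON records the GTID set the dump covers so that MASTER_AUTO_POSITION=1 knows where to resume:

```shell
# Sketch: consistent, lock-free dump for seeding a GTID replica.
mysqldump -h [INSTANCE_IP] -u [USERNAME] -p \
  --databases [DATABASE_NAME1] [DATABASE_NAME2] \
  --single-transaction --set-gtid-purged=ON \
  --hex-blob --default-character-set=utf8 > [SQL_FILE].sql
```

Error 1236 itself usually means the master purged its binary logs between taking the dump and the moment the slave first connected, so importing a fresh dump promptly (or raising the binlog retention on the master) is typically part of the fix.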

Replicating from MySQL 5.1 master to 5.6 slave failing because 'INSERT ... VALUES (NOW())' results in 'Error_code: 1062'


I am migrating away from some old MySQL 5.1 servers to some new MySQL 5.6 servers. During this process, I'm creating a new MySQL 5.6 slave from an existing MySQL 5.1 slave, using the procedure in the mysqldump reference guide.

For example, if my MySQL 5.1 servers are named 'master1' and 'replica1' and I have a new MySQL 5.6 server named 'replica2', the following should make replica2 a second slave of 'master1':

replica2 % mysqldump --login-path=replica1 --all-databases --dump-slave --include-master-host-port --apply-slave-statements --lock-all-tables  --add-drop-database > all.sql
replica2 % mysql < all.sql

And this seems to work well, but replication then fails with the following error complaining about duplicate entries for the primary key:

2015-06-12 10:00:00 1234 [ERROR] Slave SQL: Worker 0 failed executing transaction '' at master log mysql-bin.009332, end_log_pos 12341234; Error 'Duplicate entry '8072' for key 'PRIMARY'' on query. Default database: 'DATABASE'. Query: 'INSERT INTO "Member" ("Created") VALUES (NOW())', Error_code: 1062

Can I assume that 'INSERT INTO "Member" ("Created") VALUES (NOW())' is triggering the error here? Can I get replication to work without skipping rows with SET GLOBAL sql_slave_skip_counter = 1;?

Some additional details:

  • I'm using classic MySQL replication, and GTIDs are currently disabled.
  • The MySQL 5.1 servers are using STATEMENT-based replication, but the new MySQL 5.6 servers are using ROW-based replication.
  • I don't own the application code, and I cannot change the SQL.
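To pin down which transaction collides, it may help to decode the event at the coordinates quoted in the error (a diagnostic sketch; the column name id is a guess for the Member table's primary key):

```shell
# Decode the row events leading up to the failing position on the master.
mysqlbinlog --base64-output=decode-rows -vv \
  --stop-position=12341234 mysql-bin.009332 | tail -n 50

# Does the conflicting row already exist on the new 5.6 replica?
mysql -e "SELECT * FROM DATABASE.Member WHERE id = 8072;"
```

If the row is already there, the coordinates recorded by --dump-slave may point slightly before the replica's true position, replaying transactions the dump already contains.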

Q: Redis Clusters and Replication Configuration Parameters


With the various deployment options for Redis (standalone, HA, cluster), it is not clear if the REPLICATION section in the redis.conf file applies to a Redis Cluster (cluster-enabled yes).

In general, do any of the configuration settings in the REPLICATION section of redis.conf apply to Redis Clusters?

In particular, does the repl-diskless-sync parameter apply to Redis Clusters?
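As far as I understand it, cluster replicas use the same master-replica replication machinery underneath, so the REPLICATION settings are still read per node; a fragment combining the two (a sketch, not a recommendation):

```conf
# redis.conf fragment: cluster mode plus diskless-sync tuning
cluster-enabled yes
repl-diskless-sync yes        # master streams the RDB to replicas without writing it to disk
repl-diskless-sync-delay 5    # wait up to 5s so several replicas can share one sync
```

Running CONFIG GET repl-diskless-sync against a cluster node is a quick way to sanity-check this on your own build.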

What is Data Virtualisation?


I've just been asked whether our company should consider Data Virtualisation for our test environments. The benefits are given as:

  • Screening of sensitive data
  • Fast data refreshes in our test environments
  • Potential benefits for DR and BI scenarios

However I've only found marketing info; nothing technical. From what I can figure out there are 2 approaches:

  • A service layer over a production database which abstracts you from the data model (presumably resulting in a different data model presented by that new layer).
  • A tool to automate the restore and subsequent manipulation of data which can be used by non-technical users and is faster than using database backups and SQL scripts.

Without seeing any technical information this smells of snake oil to me; but I want to understand it rather than dismiss it out of hand.


Keywords: [data-as-a-service] [data-virtualisation] [data-virtualization] [delphix] [denodo]

LogShipping replicated database


We use transactional replication to replicate our production database to another server (Server 1) for reporting purposes. For a standby copy, we also log-ship the main database to another server (Server 2).

Last week I had to reinitialise the standby copy. However, after restoring the production database on Server 2, I see under the replication publications node that the database shows as published, and under that I see the subscriber server.


Server 2 is not configured for replication, and because the secondary database is in Standby / Read-Only mode, the system does not allow me to make any modifications.

How can I remove the replication configuration from the secondary server?

Many thanks

MongoDB replication setup in production


I have this MongoDB database that I want to deploy to production with redundancy/replication. I'm a software engineer and only have some experience in setting up a production environment. I'm also not yet a master of MongoDB, hence this question, so please bear with me :)

I've built a C# Web API application that uses the MongoDB .NET driver to connect to Mongo.

For our existing production setup we have the application running on IIS on two different physical machines, behind a load balancer.

The question I am struggling with is how I should add MongoDB to this setup. Should I run a mongod on both physical machines, with one primary and one secondary? If so, how would I make sure write operations always go to the correct machine; can/should I tell the load balancer to always redirect HTTP POST requests to machine X?

While writing this question I'm realising the setup I am suggesting is probably not going to work, as Mongo should be replicated with an odd number of instances, so three. Then my question would be: should I deploy the databases on separate machines, different from the application servers? I know I'm probably answering my own question here, but I would like to know how others see it :)

So my real and final question would be, based on the above, would you say the setup should be as follows (physical machines):

              /--> IIS/Web API --> mongo secondary
load balancer                      mongo primary
              \--> IIS/Web API --> mongo secondary

Or am I totally off target here...
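For reference, the common pattern is a single replica set whose members the driver knows about directly; the HTTP load balancer never routes database traffic, because the MongoDB .NET driver discovers the primary by itself. A sketch with hypothetical hostnames:

```shell
# Initiate a 3-member replica set (run once, against any one member).
mongo --host db1.example.com --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "db1.example.com:27017" },
      { _id: 1, host: "db2.example.com:27017" },
      { _id: 2, host: "db3.example.com:27017" }
    ]
  })'
```

The application then connects with a connection string such as mongodb://db1.example.com,db2.example.com,db3.example.com/?replicaSet=rs0, and writes automatically go to whichever member is currently primary.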


I get an "An attempt was made to load a program with an incorrect format" error on a SQL Server replication project


The exact error is as follows

Could not load file or assembly 'Microsoft.SqlServer.Replication, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. An attempt was made to load a program with an incorrect format.

I've recently started working on this project again after a two-month stint on another project. It worked perfectly before, and I've double-checked all the references.

How do I query the running pg_hba configuration?


I want to test if a replication connection is authorized by pg_hba.conf on the provider before issuing the replication-starting command, and don't know how. (I have access to both unix and postgresql shells on both nodes)

For the non-replication connection, I would connect psql using a connstring like 'host=$MASTER_IP port=5432 dbname=$DATABASE user=$DBUSER password=$DBPASSWORD'

Context: I am writing a script to automate the setup of replication between servers, and configuration of the servers is managed through different systems/repositories (legacy reasons). Therefore, I want to test if settings are all right at each step.
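Two approaches that may help (sketched with the connection variables from the question): on PostgreSQL 10+ the pg_hba_file_rules view shows the rules as parsed from the file, and on any version you can simply attempt a replication-protocol connection and see whether pg_hba.conf rejects it:

```shell
# PostgreSQL 10+ only: inspect the hba rules as parsed from the file.
psql "host=$MASTER_IP port=5432 dbname=$DATABASE user=$DBUSER password=$DBPASSWORD" \
     -c "SELECT line_number, type, database, user_name, address, auth_method
         FROM pg_hba_file_rules;"

# Direct probe: open a replication connection; an hba rejection fails here
# with the usual 'no pg_hba.conf entry' message.
psql "host=$MASTER_IP port=5432 user=$DBUSER password=$DBPASSWORD replication=1" \
     -c "IDENTIFY_SYSTEM;"
```

Note that pg_hba_file_rules reflects the file on disk, which can differ from what the running server has loaded if the file changed since the last reload.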

How NFS works client-side


I'm looking for a solution to keep a bunch of small files (< 1 constantly synchronized between multiple servers (one "master" and multiple "slaves" in a private network).

These files are served to clients by slaves through a NGINX web server.

I've already found and successfully tested lsyncd (https://github.com/axkibe/lsyncd), which is based on ssh and rsync.

However, I wanted to check whether there is a faster solution with higher throughput.

I'm currently looking into NFS; however, it's not clear to me how the NFS client (slave) retrieves the files.

Does the NFS client store files locally until they're updated on the NFS server (master)? Or does the slave have to download the file from the NFS server every time the file is accessed?

The reason behind this question is that I don't want to slow down the performance of the web server by requiring too much communication between slaves and the NFS server.
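For what it's worth, an NFS client does cache file data in the page cache and caches attributes for a bounded time; when the attribute cache expires it revalidates with the server rather than re-downloading the file. The mount options control that trade-off (a sketch with hypothetical paths):

```shell
# Longer attribute-cache window (in seconds): fewer revalidation
# round-trips to the server, at the cost of slaves seeing updates later.
mount -t nfs -o ro,actimeo=30,nocto master:/export/files /var/www/files
```

So reads of unchanged files are mostly served locally; the close-to-open consistency check (unless relaxed with nocto) still costs a GETATTR round-trip per open.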

How To Prevent Replication Failure


If I become a MySQL DBA, will I have to deal with these kinds of issues all day, or do you have tips to prevent breaking the whole replication?

I received this message because I removed manually the database, and after that the php script remove it.

Last_SQL_Errno: 1133
Error 'Can't find any matching row in the user table' on query.

Mongo in current 3.6 version: How to make collection invisible (Sharding, replica)


My MongoDB database has 3 collections (a, b, c).

Is it possible to provide access to only certain collections (not all in the db) with sharding or replicas?

For example, could host1 have access to shard1, with that shard providing only collection a, while collections b and c are not visible and not accessible?

Or maybe somebody has another idea how to restrict access to specific collections while keeping the allowed ones reachable.

PS: I know that I can restrict read access at the user level. :-)

After Restore Log_reuse_wait_desc of Replication


I'm wondering if anybody has run into this issue before and has any suggestions on what's causing it and how to fix it. We are restoring a database into lower Dev environments, and after the restore the log_reuse_wait_desc is REPLICATION. The problem is we are not using KEEP_REPLICATION or CDC.

This happens in a couple of different setups but I'll keep this initial one simple.

Database from a 2008 instance restored to a 2012 instance. The only way to clear it is to create a test publication on any random table (no subscription needed), after which log_reuse_wait clears.
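If leftover replication metadata in the restored copy is what is holding the log, the stock cleanup procedure may be simpler than creating a throwaway publication (a sketch; server and database names are hypothetical, and worth trying in Dev first):

```shell
sqlcmd -S DEVSQL2012 -Q "EXEC sys.sp_removedbreplication N'MyRestoredDb';"
```

sp_removedbreplication drops all replication objects and settings from the database without needing a subscription to exist.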

How to delete replication document for apache couchdb


I need to remove a replication document from the _replicator database in CouchDB, as mentioned in their documentation. However, I didn't find any sample curl command for deleting a replication document. I have tried:

curl -vX DELETE -H "Content-type:application/json" http://localhost:5984/_replicator/33e6ca194de0f30420d15ecfea8b2f21

But the result is:

{"error":"not_found","reason":"missing"}
* Connection #0 to host localhost left intact
* Closing connection #0

What would be the proper curl syntax for deleting the replication document? Thanks in advance.
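One thing the docs assume implicitly: deleting any CouchDB document, including one in _replicator, requires its current revision. A sketch (doc id taken from the attempt above):

```shell
# Fetch the document's current _rev, then pass it on the DELETE.
DOC=http://localhost:5984/_replicator/33e6ca194de0f30420d15ecfea8b2f21
REV=$(curl -s "$DOC" | sed -n 's/.*"_rev":"\([^"]*\)".*/\1/p')
curl -X DELETE "$DOC?rev=$REV"
```

That said, the {"error":"not_found","reason":"missing"} response means no document with that id exists in _replicator at all; if the replication was started through the /_replicate endpoint instead, no _replicator document is ever created and there is nothing to delete.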


Issues with trigger-driven data insertion in a MySQL round-robin replication setup


I have 2 servers, A and B, each with MySQL installed. I have set up round-robin replication between A and B, so A is the master of B and B is the master of A; likewise, each is a slave of the other.

Now I have 2 tables, details and search. When I insert records into details on server A, they replicate to server B's details table. On server B I put a trigger on that table, so the replicated insert invokes the trigger (in the trigger I call a stored procedure), which inserts a record into the search table on server B. Up to this point everything works, but the record in the search table does not get replicated back to server A. I have enabled the general log. If I insert the same record manually on server B first, then records are inserted into the search table on server B and replicated to A as well; but when the insert originates on server A, they are not. Please help.
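One possible cause (an assumption worth checking, not a certain diagnosis): changes applied by the replication SQL thread, including rows inserted by triggers it fires, are only written to the slave's own binary log when log_slave_updates is enabled, so B's trigger-generated search row would never reach A. A my.cnf sketch:

```conf
# On BOTH servers (server-id must differ between them):
[mysqld]
server-id         = 1
log-bin           = mysql-bin
log-slave-updates = 1   # also binlog changes made by the replication SQL thread
```

The original server-id recorded in each event prevents a statement from looping back to the server it came from in a two-node ring.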

TDE on replicated databases


I have implemented Master-Slave replication between two databases. Now I have to implement Transparent Data Encryption (TDE) on the master database.

Is it possible to implement TDE on the master database only? Will the replicated data on the slave be encrypted as well?

Prevent replication of ALTER commands


I am using MariaDB 10.0 multi-source replication for a specific use case.

For security reasons, I would like to prevent DDL commands run on the master (such as CREATE, ALTER, DROP...) from replicating, no matter which user runs them (even root), but of course still let SELECT, INSERT, UPDATE and DELETE commands replicate.

I do not want to use SET SQL_LOG_BIN=0|1 on client side. In fact, I never want to replicate schema modification.

In practice, I wish I could revoke ALTER permissions from my replication user (who currently has the REPLICATION SLAVE privilege).

Is there a way to achieve this?

EDIT 2018-02-19

Since my requirements seem like nonsense to some readers, here is some additional information about this use case.

I created one (or more) MariaDB Proxy database(s) with tables using BLACKHOLE Storage Engine. So data is not stored on this proxy server, but binlogs are.

I have other MariaDB servers running the same database schema but with the INNODB storage engine; they replicate data from the proxy server(s) using MariaDB multi-source replication.

On the proxy server, I can safely recreate, for example, a table schema with a CREATE OR REPLACE TABLE mytable (id int) ENGINE=BLACKHOLE statement as there is no data stored in it.

But this kind of statement MUST NOT run as-is on the "slaves" (which, as you noticed, are not real slaves), as their tables must remain in their original storage engine and keep any other table-level options they may have.

I can do this by issuing a SET SQL_LOG_BIN=0 before executing my command, but I was looking for a way to make sure that I will not break the slaves in case I forget to do it.
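Since replication filters select by database or table name rather than by statement type, a small wrapper might serve as that safety net (a sketch; the host name and usage are hypothetical, and the session needs the SUPER privilege to set sql_log_bin):

```shell
#!/bin/sh
# ddl.sh -- run a schema change on the proxy without writing it to the
# binlog, so it can never replicate.
# Usage: ./ddl.sh "CREATE OR REPLACE TABLE t (id int) ENGINE=BLACKHOLE"
mysql --host=proxy-server -e "SET SESSION sql_log_bin = 0; $1"
```

Routing all DDL through such a script (by convention, or by restricting interactive grants on the proxy) avoids depending on anyone remembering the SET statement.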

Postgresql size of Slave is bigger than Master size


I have a PostgreSQL cluster in master-slave streaming replication. The master's data is spread across four partitions (tablespaces). df -h on the master shows:

Filesystem      Size  Used Avail Use% Mounted on
***
/dev/md121      880G  490G  346G  59% /mydb/1
/dev/md122      880G  613G  223G  74% /mydb/2
/dev/md123      880G  322G  514G  39% /mydb/3
/dev/md124      880G  506G  330G  61% /mydb/4

but on Slave it takes more disk space on /mydb/4 partition

Filesystem      Size  Used Avail Use% Mounted on
***
/dev/sdb        880G  613G  223G  74% /mydb/2
/dev/sda        880G  448G  388G  54% /mydb/1
/dev/sdc        880G  322G  513G  39% /mydb/3
/dev/sdd        880G  773G   63G  93% /mydb/4

And it keeps growing. WAL files are located in /mydb/1. Where did I go wrong?

Config of Slave

wal_compression = on
autovacuum_naptime = 2s
autovacuum_analyze_scale_factor = 0
autovacuum_vacuum_scale_factor = 0
max_wal_senders = 5
autovacuum_analyze_threshold = 1000
checkpoint_timeout = 40min
temp_buffers = 3000MB
autovacuum_vacuum_threshold = 1000
autovacuum_vacuum_cost_delay = 100ms
wal_keep_segments = 1000
wal_level = hot_standby
autovacuum_vacuum_cost_limit = 5000
autovacuum_max_workers = 6
listen_addresses = '192.168.1.4'
max_wal_size = 100GB
hot_standby = on

Recovery.conf on Slave

standby_mode = 'on'
primary_conninfo = 'user=replication password=mysecretpassword host=master.mydomain.local port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'

Config on Master

wal_compression = on
autovacuum_naptime = 2s
autovacuum_analyze_scale_factor = 0
autovacuum_vacuum_scale_factor = 0
max_wal_senders = 5
autovacuum_analyze_threshold = 1000
checkpoint_timeout = 40min
temp_buffers = 1GB
autovacuum_vacuum_threshold = 1000
autovacuum_vacuum_cost_delay = 100ms
wal_keep_segments = 6000
wal_level = hot_standby
autovacuum_vacuum_cost_limit = 5000
autovacuum_max_workers = 6
listen_addresses = '192.168.1.5'
cpu_index_tuple_cost = '0.0005'
wal_buffers = 16MB
checkpoint_completion_target = '0.9'
random_page_cost = 2
maintenance_work_mem = 32GB
max_wal_size = 60GB
synchronous_commit = false
work_mem = 2GB
cpu_tuple_cost = '0.001'
default_statistics_target = 500
effective_cache_size = 96GB
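With physical streaming replication the relation files themselves should match the master, so the extra usage in /mydb/4 is more likely temporary files from queries running on the hot standby, or orphaned files left over from an earlier copy of the tablespace. A diagnostic sketch (the pgsql_tmp path is a guess for this layout):

```shell
# Relation sizes should match the master under streaming replication --
# confirm by running this on both sides and comparing.
psql -c "SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS size
         FROM pg_class ORDER BY pg_total_relation_size(oid) DESC LIMIT 15;"

# Then look for what else occupies the tablespace: temp files from
# standby queries, or leftovers from an earlier sync.
du -sh /mydb/4/PG_*/pgsql_tmp 2>/dev/null
du -sh /mydb/4/* | sort -h | tail
```

If the relation sizes match but du does not, whatever the second pair of commands turns up is the space to chase.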

How to replicate and keep our SQL Server database synchronized


We have our online database hosted on AWS. We want to keep a local database for every client on SQL Server Express and sync it with the online database. I have looked at transactional replication, but it would have to work against a remote server, and SQL Server Express cannot act as a publisher. How can we achieve this?


