Channel: StackExchange Replication Questions

Windows Server - Storage Replication - Memory Error


I'm trying to set up the Storage Replica feature on Windows Server 2016 Datacenter edition. I'm using AWS EC2 instances, and both (source and target) servers are joined to a Windows Active Directory domain.

WSFC is not yet set up. I have admin rights on the servers and logged in using my AD login. I ran the command below in PowerShell with Run as administrator.

Test-SRTopology -SourceComputerName TM-SQL-01 -SourceVolumeName d: -DestinationComputerName TM-SQL-02 -DestinationVolumeName d: -SourceLogVolumeName g: -DestinationLogVolumeName g: -DurationInMinutes 1 -ResultPath c:\Temp

About 19 steps/tests passed, and then I got the strange memory error below. I had just spun up the EC2 instance. It has 64GB of memory, about 62GB of which is free, and I still got the error below!

**Physical Memory Requirement Test: TM-SQL-01 does not have the required physical memory to deploy Storage Replica. The minimum physical memory requirement to deploy Storage Replica is 2GB. Actual physical memory available on TM-SQL-01 is 0GB.

Physical Memory Requirement Test: TM-SQL-02 does not have the required physical memory to deploy Storage Replica. The minimum physical memory requirement to deploy Storage Replica is 2GB. Actual physical memory available on TM-SQL-02 is 0GB.**

I'm totally stumped. What am I doing wrong? Any ideas? Thanks!


AWS RDS PostgreSQL cross region replication streaming or back slot window


I am trying to do the PoC for RDS PostgreSQL replication across multi-region

I went through these links [replication, streaming] in the docs and found the streaming info: in order to create a read replica, AWS RDS PostgreSQL uses Postgres native streaming replication. Data changes at the source instance stream to the read replica using streaming replication.

But I am not able to understand how the window for streaming the WAL logs is configured.

Is it every second, is it as soon as each transaction is written to the WAL, or is it count-based, e.g. after every 'n' transactions' worth of WAL logs?
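For reference, Postgres streaming replication is continuous rather than window-based: the WAL sender ships WAL to the replica as it is generated, not on a fixed interval or transaction count. One way to observe this (a sketch; run on the source instance) is the pg_stat_replication view:

```sql
-- On the primary: one row per connected standby. sent_lsn advances continuously
-- as transactions write WAL, which shows there is no batching window.
-- (On PostgreSQL 9.x these columns are named sent_location, write_location, etc.)
SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;
```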

Slave failing to replicate in MariaDB 10


I am not able to get row-based replication working. I am trying to set up master-slave replication between two nodes.

Considerations

  • Server version 10.0.23-MariaDB-log
  • Protocol version 10
  • UNIX socket /var/lib/mysql/mysql.sock.

The scenario I have followed is this:

  1. Shut down prod (mc1), took a cold backup, and restored it on the slave (mc2).

  2. On the master, ran show master status; it showed prodarchivedlogs-bin.000001 and the position as 582240.

  3. Set this info on the slave.

  4. Started the slave; it started throwing the error messages below:

160812 3:31:50 [ERROR] Slave SQL: Could not execute Update_rows_v1 event on table radius1.radacct; Can't find record in 'radacct', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log prodarchivedlogs-bin.000001, end_log_pos 587695, Gtid 0-1-79, Internal MariaDB error code: 1032

160812 3:17:18 [Warning] Slave: Can't find record in 'radacct' Error_code: 1032

160812 3:17:18 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'prodarchivedlogs-bin.000001' position 587072

I have tried every method I know of, but it is just not working. I tried using the mysqlbinlog tool to extract the events to a file and then inserting them; even that fails.

The prod my.cnf is as follows

symbolic-links=0
innodb_buffer_pool_size=10G
innodb_data_home_dir= /var/lib/mysql/data2
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/redologs
innodb_log_file_size    = 256M
innodb_buffer_pool_size = 10000M
innodb_flush_method     = O_DIRECT
thread_stack    = 256K
max_connections=100000

expire_logs_days        = 5
max_binlog_size         = 100M

log-bin=/backup/archivedlogs/prodarchivedlogs-bin
binlog_format=row
server-id=1

The slave my.cnf is as follows

skip-name-resolve
skip-slave-start
datadir=/var/lib/mysql/data1
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0

innodb_buffer_pool_size=10G
innodb_data_home_dir= /var/lib/mysql/data2
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/redologs
innodb_log_file_size    = 256M
innodb_buffer_pool_size = 10000M
innodb_flush_method     = O_DIRECT
thread_stack    = 256K
max_connections=100000
expire_logs_days        = 5
max_binlog_size         = 100M
log-bin=/backup/archivedlogs/prodarchivedlogs-bin
binlog_format=row
server-id=2

Further details

The master prod machine has 40 GB RAM and has a 16 core CPU.

I followed the process below:

  1. created the rep user and gave the access.
  2. shutdown prod.
  3. scp'ied the files from prod to slave machine.
  4. brought up master instance.
  5. ran the command show master status\G;. Took note of the bin log and the position.
  6. brought up the slave instance
  7. configured the slave profile with the bin log file and the position
  8. ran the start slave command, then ran show slave status\G; and started seeing error messages of

    HA_ERR_KEY_NOT_FOUND.

Also:

  • Tried to run mysqlbinlog on the bin log. It reported that the file is in use.

  • Ran the command flush logs; still no results.

Trying all methods to fix this.

Any help will be greatly appreciated.
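For what it's worth, error 1032 (HA_ERR_KEY_NOT_FOUND) generally means the slave's data diverged from the master before the recorded binlog position. If the slave's copy already reflects the failed event (e.g. the cold backup was taken after that update), one common workaround is to skip the event, or to run the SQL thread in idempotent mode while resyncing. A sketch; this masks, rather than fixes, the underlying divergence:

```sql
-- On the slave: skip exactly one failed event group, then retry.
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
SHOW SLAVE STATUS\G

-- Alternative: tolerate missing/duplicate rows in row-based events (use with care):
-- SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
```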

Is logical subscription replicated (as all other data) by streaming replication in PostgreSQL?


Assuming to have 3 PostgreSQL 12 nodes: A,B,C.

  1. A node contains dataset and it's a logical publisher.
  2. B node doesn't contain data but has same schema and tables as node A. B node is logical subscriber to A.

If C node is configured as streaming replica of B - will logical subscription be replicated from B to C? Should it start another logical subscription with A as publisher and C as subscriber?

Transaction Logs: How much log data generated (hourly or daily)?


I am trying to find out the amount of transaction-log data generated by SQL Server 2016 during every hour or every day. By "data generated" I mean how much data (in bytes/KBs/etc) was written to disk every hour (or every day).

Is there a way to find this out?

Our database is in FULL recovery mode and we do have regular transaction log backups. So, I am of the opinion that querying the backup metadata in msdb may help achieve this. Does that work? Will it give me correct and reliable results?

A second option would probably be to look at the amount of read/write IO happening on the transaction log files. Can this work? If yes, how can I do that? Are there any SQL Server DMVs providing such information? What about Windows tools (such as Windows performance counters)?

If possible at all, I would prefer the second option above because it won't require having transaction log backups. Therefore, it can be used even with SIMPLE recovery model databases. So, my question is, is it possible?

Are there any other alternatives? Such as SQL Server tools or views that are readily available?

Please note, I need this data because we are trying to estimate the amount of network IO that will be required if we create a (near) real-time replica of our databases in the cloud. So, I thought we should somehow measure the amount of IO attributed to the transaction log files. Is my assumption correct that the required network IO will be equal to the amount of IO to the transaction log files? Is this how SQL Server transactional replication works? (i.e. by sending VLFs to the replicated site)?
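Both options from the question can be sketched in T-SQL. The first query sums log-backup sizes per hour from the msdb backup history; the second reads cumulative write IO to the log files from sys.dm_io_virtual_file_stats (counters reset at instance restart, so sample twice and subtract). 'YourDb' is a placeholder name:

```sql
-- Option 1: log bytes generated per hour, approximated by log-backup sizes.
SELECT CONVERT(date, bs.backup_start_date)   AS backup_day,
       DATEPART(hour, bs.backup_start_date)  AS backup_hour,
       SUM(bs.backup_size) / 1048576.0       AS log_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'YourDb'
  AND bs.type = 'L'   -- 'L' = transaction log backup
GROUP BY CONVERT(date, bs.backup_start_date), DATEPART(hour, bs.backup_start_date)
ORDER BY backup_day, backup_hour;

-- Option 2: cumulative bytes written to log files since instance start.
SELECT DB_NAME(vfs.database_id) AS db, vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG';
```

Note that option 2 works under the SIMPLE recovery model too, since it does not depend on log backups existing.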

Minimizing downtime during MariaDB Replication Setup


I have the task of setting up MariaDB (10.0) replication from a master running in a datacenter to a slave running out in an AWS VPC. Reading up on the instructions in Setting Up Replication - MariaDB Knowledge Base, there's mention of taking a lock on all tables on the master (FLUSH TABLES WITH READ LOCK) while using mysqldump to transfer the databases across to the slave. This step concerns me a bit for the following reasons:

  1. We have a number of large databases (~20 DBs @ 40G) with some huge ones (~3 @ 100G).
  2. It takes about ~40 minutes to transfer each DB across to the slave. In total, this entire process could take up to 20 hours (assuming good conditions).
  3. We can't keep locks on the master for too long - as we disrupt business while the tables are locked.
  4. Should anything fail in the process of copying the dumps across to the slave - we'll have taken the master down in vain and may need to attempt this again.

What would be a good way to go about setting up replication in this scenario with minimal downtime to production? I'm open to some out-of-the-box thinking - is there a way to break this down into stages so that we can verify each stage?

EDIT I should add that we do take nightly dumps of the DBs as part of the backup strategy. Perhaps there's a way of using these to aid the process?
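One hedged sketch, assuming the tables are InnoDB: mysqldump's --single-transaction option dumps from a consistent snapshot without holding FLUSH TABLES WITH READ LOCK for the duration, and a dump taken with --master-data=2 records the binlog coordinates in a comment in its header. If the nightly dump is taken that way, the slave can be seeded from it and attached afterwards with no extra master downtime, as long as the master still retains the binlogs from that point (e.g. via expire_logs_days). Host and user names below are hypothetical:

```sql
-- On the slave, after restoring the dump, using the coordinates from its header:
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',               -- hypothetical host
  MASTER_USER = 'repl',                             -- hypothetical replication user
  MASTER_PASSWORD = '...',                          -- elided
  MASTER_LOG_FILE = 'prodarchivedlogs-bin.000123',  -- from the dump header
  MASTER_LOG_POS = 4;                               -- from the dump header
START SLAVE;
SHOW SLAVE STATUS\G
```

The slave then replays everything written since the dump, so the copy can take as long as it needs; only the dump itself must be consistent.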

Delete records in Peer to Peer Replication in SQL Server 2008 R2


We have a peer-to-peer replication setup on our real-time servers. I want to delete records from some of the tables that are involved in the replication, using a stored procedure. Some of the tables have 8+ million records. I want to know whether running the stored procedure on one server is enough, or do I need to run it on both servers? And if I run it on just one server, how does that impact performance? Please recommend any best practices for this process. This needs to run as a job every month.
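In peer-to-peer replication the deletes themselves are replicated, so the procedure should normally run on one node only; running it on both peers would produce conflicting delete commands. Deleting in small batches keeps each replicated transaction (and the distribution backlog) manageable. A sketch with hypothetical table and predicate names:

```sql
-- Run on ONE peer only; replication forwards the deletes to the other peer.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Small batches keep the log and the distributed command stream small.
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(month, -12, GETDATE());
    SET @rows = @@ROWCOUNT;
END;
```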

Bootstrap bucardo replication after pg_restore


Currently I am setting up master/master replication with Bucardo between 5 nodes in different locations (this should provide location transparency). The database holds ~500 tables which should be replicated. I grouped them into smaller replication herds of at most 50 tables, based on their dependencies on each other. All tables have primary keys defined, and the sequences on each node are set up to provide system-wide unique identities (based on residue classes).

To get an initial database onto each node, I made a --data-only custom-format pg_dump into a file and restored it on each node via pg_restore. The Bucardo sync is set up with the bucardo_latest strategy to resolve conflicts. Now when I start syncing, Bucardo first deletes all rows in the origin database and inserts them again from one of the restored nodes, because all restored rows have a "later timestamp" (the point in time when I called pg_restore). This ultimately prevents the initial startup, as Bucardo takes a very long time and also fails: there are lots of rows to resolve and the timeouts are often too short.

I also have 'last_modified' timestamps on each table, managed by UPDATE triggers, but as I understand it, pg_dump inserts data via COPY, and therefore these triggers don't fire.

  • Which timestamp does bucardo use to find out who is bucardo_latest?
  • Do I have to call pg_dump with something like set SESSION_REPLICATION_ROLE = 'replica';?

I just want Bucardo to keep track of every new change, not execute pseudo-changes caused by the restore.

EDIT: pg_restore has definitely fired several triggers at restore time. As said, I keep track of the user and last modification date in each table, and those values are set to the user and timestamp of when the restore was done. I am aware that I can set SESSION_REPLICATION_ROLE for a plain-text-format restore via psql. Is this also possible for pg_restore somehow?
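pg_restore has no direct flag for this, but one possible workaround (assuming the restore runs as a superuser, since session_replication_role is a superuser-only setting) is to make every new session in the target database start in replica mode, so the UPDATE triggers stay silent during the restore:

```sql
-- 'mydb' is a hypothetical database name. Sessions opened by pg_restore against
-- this database will now start with session_replication_role = 'replica'.
ALTER DATABASE mydb SET session_replication_role = 'replica';

-- ... run pg_restore against mydb here ...

-- Remove the override so normal sessions fire triggers again:
ALTER DATABASE mydb RESET session_replication_role;
```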


Drop a replicated subscription table that has been orphaned after restore migration


We restored a database on a new server for a migration.

When attempting to set up a subscription to a transactional publication, we get the error:

Can't drop the table because it is set for replication.

However, it is not being replicated to.

I have already set several push subscriptions up from this server that go to another so I don't want to completely wipe out all replication and rebuild everything.

I was wondering if there is a place where I could delete the piece of information that makes the article think it is still being replicated to, so that I can add it back to a subscription.

Date        1/29/2018 3:42:13 PM
Log     Job History 

Step ID     1
Server      Server1Pub
Job Name        RepJobName
Step Name       Run agent.
Duration        00:00:05
Sql Severity    0
Sql Message ID  0
Operator Emailed    
Operator Net sent   
Operator Paged  
Retries Attempted   0

Message
2018-01-29 21:42:13.174 Copyright (c) 2016 Microsoft Corporation
2018-01-29 21:42:13.174 Microsoft SQL Server Replication Agent: distrib
2018-01-29 21:42:13.174 
2018-01-29 21:42:13.174 The timestamps prepended to the output lines are expressed in terms of UTC time.
2018-01-29 21:42:13.174 User-specified agent parameter values:
            -Publisher Server1Pub
            -PublisherDB DBPub
            -Publication PubName
            -Distributor Server1Pub
            -SubscriptionType 1
            -Subscriber Server2Sub
            -SubscriberSecurityMode 1
            -SubscriberDB db1
            -XJOBID 0x466A0D3EF212C442B2B0E6521144ED71
            -XJOBNAME ReplicationJobName
            -XSTEPID 1
            -XSUBSYSTEM Distribution
            -XSERVER Server2Sub
            -XCMDLINE 0
            -XCancelEventHandle 0000000000006F44
            -XParentProcessHandle 000000000000592C
2018-01-29 21:42:13.174 Startup Delay: 2294 (msecs)
2018-01-29 21:42:15.474 Connecting to Subscriber 'Server2Sub'
2018-01-29 21:42:15.509 Connecting to Distributor 'Server1Pub'
2018-01-29 21:42:16.279 Parameter values obtained from agent profile:
            -bcpbatchsize 2147473647
            -commitbatchsize 100
            -commitbatchthreshold 1000
            -historyverboselevel 1
            -keepalivemessageinterval 300
            -logintimeout 15
            -maxbcpthreads 1
            -maxdeliveredtransactions 0
            -pollinginterval 5000
            -querytimeout 1800
            -skiperrors 
            -transactionsperhistory 100
2018-01-29 21:42:16.864 Initializing
2018-01-29 21:42:17.239 Snapshot will be applied from the alternate folder '\\192.168.5.124\Replications\unc\Folder'
2018-01-29 21:42:17.914 Agent message code 3724. Cannot drop the table 'dbo.table' because it is being used for replication.
2018-01-29 21:42:18.144 Category:COMMAND
Source:  Failed Command
Number:  
Message: drop Table [dbo].[table]

2018-01-29 21:42:18.194 Category:NULL
Source:  Microsoft SQL Server Native Client 11.0
Number:  3724
Message: Cannot drop the table 'dbo.table' because it is being used for replication.

Creating database replica in Access-VBA does not work


I am trying to create a replica of a database:

Dim db as Database
Set db=DBEngine.Workspaces(0).OpenDatabase("h:\source.accdb")
db.MakeReplica "h:\replica.accdb", "TEST", dbRepMakeReadOnly

I always get runtime error 3032 "Cannot perform this operation" when the third line runs. I have no idea why this happens.

As I understand, MakeReplica creates a "copy" of the source database and allows later synchronization using db.Synchronize.

how to do Apache SVN synchronization in multiple servers? [closed]


I have three servers with Apache Subversion installed, running Debian 9. The three servers form a high-availability cluster using Pacemaker, and I am using a virtual IP to access Apache. That means when the master goes down, another node takes over as master, so I need real-time SVN synchronization across all three servers.

Is there a method to do this, such as svnsync?

I did not find any documentation about multi-server SVN synchronization.

Please help me.

Replication priority


I have two MongoDB instances running on two servers (instance1 and instance2) and they are configured as a replica set. Instance1 is the primary and instance2 is the secondary.

When instance1 was shut down, instance2 became the primary. However, both had their priority set to 1, so when instance1 came back up, instance2 remained the primary node.

I changed the priority of instance1 to 2 and instance2 to 1. If instance1 is shut down again, instance2 will become the primary. When instance1 comes back up, will this new priority setting ensure that instance1 becomes the primary again?

Postgresql Replication fail over scenario - not able to bring back old primary as slave


I am facing an issue making the old primary a standby after the first failover.

The first time, the slave switched over to master, but when the old master came back it was still acting as a primary.

I am using repmgr.


MySQL DB Should replication be paused before point in time recovery?


I have a MySQL DB and a replica. I want to perform a point-in-time recovery on the master. Should I stop replication first, or is it OK to proceed as is?

Thanks


Can I change the wal_level from logical to replica? If so are any impact on the replication process?


Is there a chance to change the wal_level from logical to replica without any impact? I know this involves a restart of the service, but is it possible? For example, if there is already logical replication running with a master and slave set up, can I change the wal_level from logical to replica (streaming) with the same master and slave?
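A sketch of the change itself; note that logical decoding requires wal_level = logical, so any existing logical subscription or replication slot that decodes from this primary will stop working once it runs at replica. Physical streaming replicas are unaffected.

```sql
-- Lower the WAL level; takes effect only after a server restart.
ALTER SYSTEM SET wal_level = 'replica';

-- After restarting the server, verify:
SHOW wal_level;
```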

Error while connecting to Oracle Endpoint in Qlik replicate "cannot load libclntsh.so.11.1"


The Replicate server is installed on Linux, and I am trying to connect to an Oracle database as a source, which is on a different server. While testing the connection I get the attached error message. libclntsh.so is already defined on the Oracle server machine. Kindly let me know the root cause of this.


Mongo filtered replication


I would like to replicate a specific collection in a Mongo cluster onto a different cluster. From what I have found in my search, this is called "selective replication", but I have not found information about Mongo being able to do it.

Is it possible to do selective replication between mongo clusters?

MySQL replication master-slave with 2 or more users in master


I've just configured MySQL master-slave replication. I have three users on the master instance: root (with all privileges and grant), admin (with all privileges), and slaveUser (with the replication slave privilege). I've configured the slave server too, and if I modify a resource (as the root user) on my master instance, the change is replicated to the slave. However, if I modify a resource on the master instance as the admin user, the change isn't replicated to the slave, and I get this error:

Error 'Can't find any matching row in the user table' on query. Default database: 'my_db'. Query: 'GRANT ALL PRIVILEGES ON . TO 'admin'@'%''

I've configured this on the same machine with 2 docker instances.

Any suggestions?

EDIT

This is my users on MASTER

+-----------+-----------+
| user      | host      |
+-----------+-----------+
| admin     | %         |
| root      | %         |
| userSlave | %         |
| mysql.sys | localhost |
+-----------+-----------+

and this is users on SLAVE

+-----------+-----------+
| user      | host      |
+-----------+-----------+
| root      | %         |
| mysql.sys | localhost |
+-----------+-----------+

What kind of permissions does admin need for its changes to replicate to the slave?

EDIT add SHOW SLAVE STATUS

mysql> SHOW SLAVE STATUS\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.17.0.3
                  Master_User: userSlave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 317959980
               Relay_Log_File: mysql-relay.000002
                Relay_Log_Pos: 5834
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 1133
                   Last_Error: Error 'Can't find any matching row in the user table' on query. Default database: 'mydb'. Query: 'GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%''
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 5621
              Relay_Log_Space: 317961680
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 1133
               Last_SQL_Error: Error 'Can't find any matching row in the user table' on query. Default database: 'mydb'. Query: 'GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%''
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 100
                  Master_UUID: 8527301e-9765-11e5-a957-0242ac110003
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: 
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 151215 20:26:46
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
1 row in set (0.00 sec)

ERROR: 
No query specified
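GRANT statements executed on the master are written to the binlog and re-executed on the slave, and error 1133 means 'admin'@'%' does not exist in the slave's mysql.user table (as the user lists above show). One way out is to create the account on the slave and restart the SQL thread so the failed event is retried. A sketch; the password is elided and should match the master's:

```sql
-- On the slave:
CREATE USER 'admin'@'%' IDENTIFIED BY '...';
STOP SLAVE;
START SLAVE;
SHOW SLAVE STATUS\G
```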

Replicate a single table


Is it possible to replicate a single table?
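Yes, depending on the engine. As one example, PostgreSQL 10+ logical replication can replicate exactly one table; all object names and the connection string below are hypothetical:

```sql
-- On the publisher: publish just one table.
CREATE PUBLICATION single_table_pub FOR TABLE public.mytable;

-- On the subscriber: the table definition must already exist there.
CREATE SUBSCRIPTION single_table_sub
  CONNECTION 'host=publisher dbname=mydb user=repl'
  PUBLICATION single_table_pub;
```

MySQL/MariaDB can achieve the same effect with a slave-side filter such as replicate-do-table, and SQL Server with a transactional publication containing a single article.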
