Channel: StackExchange Replication Questions

Implementing Replication on existing database


I'm a developer, but I need to implement replication on an existing database. I'm somewhat familiar with replication and have already set it up once, but I ran into a few issues; the two most notable are:

  1. The replication jobs affect the maintenance plan -- it seems the maintenance plan jobs can't run because another job is currently running. I know some replication jobs are run periodically. Can I schedule them to avoid certain times of day so they won't interfere with the maintenance plan?

  2. The distribution database grew too large and caused an out-of-disk-space issue. Since this is transactional replication, the publisher continuously sends copies of the commands to the subscriber. How can I minimize the size of the distribution database?

Replication has been stopped and the disk space issue is now fixed, but this might happen again in the future. Before I attempt to set it up again, what should I take note of so the replication succeeds next time?


Create temp tables in Amazon RDS PostgreSQL read-replica


I have dozens of legacy stored procedures which create temporary tables inside for collecting results for a read-only application.

I've created a read replica of my PostgreSQL instance in Amazon RDS and tried to run these procedures, but they failed: a read-only transaction does not allow creating even temporary tables.

Is there any way to solve this issue with minimal effort?

mysql slave parallel workers from lower version master


Is there any specific reason why slave parallel workers can't be used when the master runs a lower version that does not support parallel workers and the slave runs a higher version that does? Here I am trying a master on 5.5.28 and a slave on 5.6.19.

bin log dump - mysql master and slave


I have some issues with master-slave data replication. Is there any way to re-sync the master and slave so that both have the same data, without manually adding data on the slave to match the master?

MySQL GTID replication: is it possible to make DDL changes to a slave and then catch it up to a master?


Here's what I would like to do:

  1. Hang a slave off my production master.
  2. Stop replication.
  3. Adjust some data types and create some indexes on the slave (no data modifications).
  4. Restart replication.
  5. Once the slave has caught up, lock the db and swap slave and master.

The goal, obviously, is to have no significant downtime while making changes that will take time (a 200G table is being modified). It should be possible, because the DDL changes I'm making involve shrinking data types and adding constraints; nothing changes the actual values of the data. But since GTID replication is row-based, I don't know whether replication will choke trying to insert, for example, 11-digit IDs into a 5-digit ID column, even if the values are in range.

Master-master replication not working, showing errors


error from master 1

Last_SQL_Error:

Error 'Cannot add or update a child row: a foreign key constraint fails (stylanzo_live.report_viewed_product_index, CONSTRAINT FK_REPORT_VIEWED_PRD_IDX_PRD_ID_CAT_PRD_ENTT_ENTT_ID FOREIGN KEY (product_id) REFERENCES catalog_product_entity (entity_id) ON)' on query. Default database: 'stylanzo_live'. Query: 'INSERT INTO report_viewed_product_index (visitor_id, customer_id, product_id, store_id, added_at) VALUES (NULL, NULL, '1092', '23', '2015-05-28 19:20:00') ON DUPLICATE KEY UPDATE visitor_id = VALUES(visitor_id), customer_id = VALUES(customer_id), product_id = VALUES(product_id), store_id = VALUES(store_id), added_at = VALUES(added_at)' Replicate_Ignore_Server_Ids:

error from master 2

Last_SQL_Error:

Error 'Duplicate entry '157493' for key 'PRIMARY'' on query. Default database: 'stylanzo_live'. Query: 'INSERT INTO log_visitor (session_id, first_visit_at, last_visit_at, last_url_id,store_id) VALUES ('7059f889233675a315cd2a35a92e2480', '2015-05-28 19:20:27', '2015-05-28 19:20:27', '0', '23')

SELECT create_statement FROM common_schema.sql_foreign_keys WHERE TABLE_SCHEMA='report_viewed_product_index';

Also, when I start replication it shows the duplicate-entry error again.

How to completely get rid of replication subscriptions?


I have restored a database on another server. I don't want any of the transactional subscriptions from the former database. I already ran

exec sp_removedbreplication 'MyRestoredDB' 
go

And also

exec sp_cleanupdbreplication
go

And

exec sp_replicationdboption 'MyRestoredDB','Publish','False',1
go

But when I try

truncate table dbo.sample

I get the following error:

Msg 4711, Level 16, State 1, Line 7 Cannot truncate table 'sample' because it is published for replication or enabled for Change Data Capture.

I verified that the database isn't enabled for CDC. What else can I do to perform the truncate? Delete isn't an option for reasons not relevant to the question.

Streaming replication is failing with "WAL segment has already been moved"


I am trying to implement Master/Slave streaming replication on Postgres 11.5. I ran the following steps -

On Master

select pg_start_backup('replication-setup',true);

On Slave: stopped the Postgres 11 database and ran

rsync -aHAXxv --numeric-ids --progress -e "ssh -T -o Compression=no -x" --exclude pg_wal --exclude postgresql.pid --exclude pg_log MASTER:/var/lib/postgresql/11/main/* /var/lib/postgresql/11/main

On Master

select pg_stop_backup();

On Slave

rsync -aHAXxv --numeric-ids --progress -e "ssh -T -o Compression=no -x"  MASTER:/var/lib/postgresql/11/main/pg_wal/* /var/lib/postgresql/11/main/pg_wal

I created the recovery.conf file in the slave's ~/11/main folder:

standby_mode = 'on'
primary_conninfo = 'user=postgres host=MASTER port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
primary_slot_name='my_repl_slot'

When I start Postgres on the slave, I get this error in both the MASTER and SLAVE logs:

2019-11-08 09:03:51.205 CST [27633] LOG:  00000: database system was interrupted; last known up at 2019-11-08 02:53:04 CST
2019-11-08 09:03:51.205 CST [27633] LOCATION:  StartupXLOG, xlog.c:6388
2019-11-08 09:03:51.252 CST [27633] LOG:  00000: entering standby mode
2019-11-08 09:03:51.252 CST [27633] LOCATION:  StartupXLOG, xlog.c:6443
2019-11-08 09:03:51.384 CST [27634] LOG:  00000: started streaming WAL from primary at 12DB/C000000 on timeline 1
2019-11-08 09:03:51.384 CST [27634] LOCATION:  WalReceiverMain, walreceiver.c:383
2019-11-08 09:03:51.384 CST [27634] FATAL:  XX000: could not receive data from WAL stream: ERROR:  requested WAL segment 00000001000012DB0000000C has already been removed
2019-11-08 09:03:51.384 CST [27634] LOCATION:  libpqrcv_receive, libpqwalreceiver.c:772
2019-11-08 09:03:51.408 CST [27635] LOG:  00000: started streaming WAL from primary at 12DB/C000000 on timeline 1
2019-11-08 09:03:51.408 CST [27635] LOCATION:  WalReceiverMain, walreceiver.c:383

The problem is that the start WAL segment, 00000001000012DB0000000C, is available right up until I run pg_stop_backup(); once pg_stop_backup() executes, it is archived and no longer available. So this is not a case of the WAL being archived out due to a low wal_keep_segments.

postgres@SLAVE:~/11/main/pg_wal$ cat 00000001000012DB0000000C.00000718.backup
START WAL LOCATION: 12DB/C000718 (file 00000001000012DB0000000C)
STOP WAL LOCATION: 12DB/F4C30720 (file 00000001000012DB000000F4)
CHECKPOINT LOCATION: 12DB/C000750
BACKUP METHOD: pg_start_backup
BACKUP FROM: master
START TIME: 2019-11-07 15:47:26 CST
LABEL: replication-setup-mdurbha
START TIMELINE: 1
STOP TIME: 2019-11-08 08:48:35 CST
STOP TIMELINE: 1

My MASTER has archive_command set, and I have the missing WALs available. I copied them into a restore directory on the SLAVE and tried the recovery.conf below, but it still fails, with the MASTER reporting the same "WAL segment has already been removed" error.
Any idea how I can address this issue? I have used rsync to set up replication without any issues in the past on Postgres 9.6, but I keep hitting this on Postgres 11.

standby_mode = 'on'
primary_conninfo = 'user=postgres host=MASTER port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
restore_command='cp /var/lib/postgresql/restore/%f %p'
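One detail that may matter here: the master only retains WAL for a slot once that slot exists, so if the slot referenced by primary_slot_name was created after the base backup started, segments from the backup window can already have been recycled by the time the standby connects. A possible fix, assuming the slot name from the recovery.conf above, is to create the slot on the master before running pg_start_backup():

```sql
-- On the master, BEFORE pg_start_backup(), so that all WAL generated
-- during and after the backup is retained until the standby streams it:
SELECT pg_create_physical_replication_slot('my_repl_slot');
```

Alternatively, pg_basebackup with -X stream streams the needed WAL alongside the base backup and sidesteps the manual rsync-and-WAL bookkeeping entirely.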

EC2 and setting up MYSQL Replication


My database is very big, around 100 GB, and a restore takes 3 days. I am using Amazon EC2.

My question is: I want to set up a new slave for a MySQL master. Can I take a snapshot of the master, create a new instance from that snapshot, and then enable replication between them, given that I record the log position before the snapshot and make the required my.cnf changes? A snapshot is very fast and takes 1-2 hours to create a new machine, whereas mysqldump is taking very, very long.

I was just worried about whether replication works using the above method. Can anyone help, please?

Restore server instance using logical backup and wal files


Is it possible to restore a database instance using logical backup and wal files?

A senior SQL Server DBA asked me to implement the scenario below in PostgreSQL:

Take a logical backup of the master using pg_dumpall, then fail over after some time. Now restore the database instance using the logical backup of the primary + the WAL files of the primary + the WAL files of the secondary.

master-master replication or master slave replication?


We currently have one server with a database. A device and a website both access that database, causing load.

What I want is to create two servers in a master-slave setup, where the master's databases are replicated to the slave, but the slave also has some databases that must not exist on the master.

Is it possible to do this using master-slave replication, or using master-master replication?

Grabbing SQL Dump of Master DB Without Downtime


I'm curious whether downtime will be necessary to grab a SQL dump of my master database.

Right now, I'm in the process of rebuilding my only slave. There is actually only one database from the master being replicated to the slave, and all tables in that database are InnoDB. This is the command I want to run:

mysqldump --master-data --single-transaction --hex-blob dbname | gzip > dbname.sql.gz

I'm running MySQL 5.1 and here is a redacted version of my my.cnf file:

[mysqld]
default-storage-engine=InnoDB
character-set-server=UTF8
lower_case_table_names=1
transaction_isolation=READ-COMMITTED
wait_timeout=86400
interactive_timeout=3600
delayed_insert_timeout=10000
connect_timeout=100000
max_connections=750  
max_connect_errors=1000
back_log=50
max_allowed_packet=1G
max_heap_table_size=64M
tmp_table_size=64M
bulk_insert_buffer_size=128M
innodb_buffer_pool_size=10000M
innodb_data_file_path=ibdata1:256M:autoextend
innodb_file_per_table=1
innodb_additional_mem_pool_size=32M
innodb_log_file_size=1G
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
innodb_lock_wait_timeout=240
innodb_flush_log_at_trx_commit=2
innodb_open_files=8192
innodb_support_xa=ON
thread_cache_size=500
expire_logs_days=2
server-id=1
log_bin=1
binlog_format=MIXED
sync_binlog=0

[mysqldump]
max_allowed_packet=128M

Am I good without downtime or not? I'm concerned about a possible read lock being placed on the tables.

How to make UUID() function, statement-based replication safe?


I'm trying to copy rows and generate UUID()s on the fly for existing data when executing an INSERT ... SELECT.

How can I make the UUID() function safe for statement-based replication? Can I somehow reuse the old IDs from the primary table, or at least keep them fixed for the entire operation, so that the replica will have the same IDs?

INSERT INTO table_name (number,REPLACE(UUID(),………(SELECT ….));

which means the primary and any of its slaves have inconsistent data for table_name, because of the UUID() function.
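One workaround, sketched below under the assumption that the IDs can be generated in the application layer rather than inside the SQL statement, is to pre-generate the UUIDs on the client and embed them as literals. The statement then contains only constant values, so replaying it on a replica produces byte-identical data. The table and column names here are hypothetical:

```python
import uuid

def build_insert(numbers):
    """Build an INSERT whose UUIDs are fixed literals, so the same
    statement text yields identical rows on the primary and every replica."""
    values = ", ".join(
        "('{}', {})".format(uuid.uuid4(), int(n)) for n in numbers
    )
    return "INSERT INTO table_name (id, number) VALUES {};".format(values)

sql = build_insert([1, 2, 3])
# The UUIDs are now part of the statement text, not evaluated per server.
```

For what it's worth, MySQL also marks statements using UUID() as unsafe for statement-based logging, so with binlog_format=MIXED such statements fall back to row-based logging automatically, which likewise keeps replicas consistent.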

Mongodb automatic Replica Set creation with JS driver


The MongoDB docs state that one can use the mongo client console to initialize replica sets by running rs.initiate([config]). However, I don't want to have to manually use the command line for this. There is a driver method called Admin.command in the docs but I don't know how that would work here. How do I automate configuring replica sets using the MongoDB JS driver?

I have a js script that initializes my database installation by creating the data folder and automatically setting up users for access control. I want it to also configure the replica set so the database is completely ready to use after running it. I'm using MongoDB 4.2.1 and driver version 3.3.
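Any driver can send the same replSetInitiate command that rs.initiate() wraps in the shell. A minimal sketch of that idea (shown in Python for illustration; with the Node driver the equivalent is db.admin().command({ replSetInitiate: config }) — the set name and host names below are assumptions):

```python
def build_rs_config(set_name, hosts):
    """Build the config document that replSetInitiate expects:
    an _id for the set and one numbered member entry per host."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

def initiate_replica_set(client, set_name, hosts):
    """Send replSetInitiate -- the admin command rs.initiate() wraps.
    `client` is a connected driver client (e.g. a pymongo MongoClient
    with a direct connection to one of the members)."""
    return client.admin.command("replSetInitiate",
                                build_rs_config(set_name, hosts))

config = build_rs_config("rs0", ["db1.example:27017", "db2.example:27017"])
```

The members must have been started with the matching --replSet name before the command is sent; the command only succeeds once, on a not-yet-initiated set.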

mariadb create slave from another slave


I have a big database, about 800G, and I'm using MariaDB 10.2.9. My configuration is:

Server A (master), Server B (slave of Server A)

Now I have set up Server C, and I want it to read everything from Server B, not from Server A. I have already added these lines to my.cnf:

log_slave_updates
log_bin = /var/log/mysql/mariadb-bin
log_bin_index = /var/log/mysql/mariadb-bin.index
max_binlog_size = 100M
binlog_format=MIXED

but I don't know what I should do on Server B and Server C to make this work.


MySQL replication - independent data update on slave


I have an unusual situation where I am migrating websites and their corresponding databases from one server to another.

I have a database hosted on one server, A, where new records are created; these records are accessed and modified on a second server, B (but no new records are created there).

I have set up A as master and B as slave to ensure that B has access to the new records created on A; A does not need access to the amended record data changed on B.

I know there are lots of warnings about not changing data on the slave, but it seems to me that this should work OK (until I finish the migration, at which point I will move the record-creation facilities to server B and break the slave link).

Any thoughts?

Understanding pg-pool in detail


I have the following problem regarding pg-pool (3.3.3):

  1. When the master fails, automatic failover happens and the standby is promoted to master. But this new master doesn't have any replicas: the other slave servers still think the old master is their master and keep trying to replicate from it.

  2. Now an INSERT waits indefinitely for synchronous replication to complete. The wait is indefinite because no replication is happening from the new master.

Is this the correct behaviour of pg-pool?

In this setup I have to resync the old master manually and set up replication again. After everything is set up, pg-pool fails to find the primary node. Primary-node detection only succeeds after I comment out the "# - Backend Connection Settings -" section in pgpool.conf, start pg-pool, then uncomment the connection settings and restart pg-pool.

sysarticles Status 57


I am looking at the data kept in the various SQL Replication tables. I am trying to understand what information is available and what it means.

In this example I have push replication on the database AdventureWorks2014 in a SQL 2017 Enterprise edition instance. The subscription is on the same instance.

The table sysarticles has a status column, and it is showing a value of 57. I don't understand how that is possible, nor what it might mean.

status tinyint

The bitmask of the article options and status, which can be the bitwise logical OR result of one or more of these values:

1 = Article is active.

8 = Include the column name in INSERT statements.

16 = Use parameterized statements.

24 = Both include the column name in INSERT statements and use parameterized statements.

64 = Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.

For example, an active article using parameterized statements would have a value of 17 in this column. A value of 0 means that the article is inactive and no additional properties are defined.

This query

SELECT TOP (1000) [artid]
      ,[del_cmd]
      ,[dest_table]
      ,[filter]
      ,[ins_cmd]
      ,[name]
      ,[pubid]
      ,[status]
      ,[type]
      ,[upd_cmd]
      ,[schema_option]
  FROM [AdventureWorks2014].[dbo].[sysarticles]

Gives these results

(screenshot of the sysarticles result set)

There is one clue (emphases mine)

The bitmask of the article options and status, which can be the bitwise logical OR result of one or more of these values:

Which leads to sp_addarticle. Additionally, I have included the value of schema_option in the results; the value in my example is 0x000000000803509F. I am not sure how, or if, this could produce the 57.

schema_option binary(8)

A bitmask of the schema generation options for the article, which control what parts of the article schema are scripted out for delivery to the Subscriber. For more information about schema options, see sp_addarticle (Transact-SQL).

I can understand 17, but how do you get 57? 1+8+16+24=49

If you subtract one (57 - 1 = 56), fifty-six is a multiple of eight (7 * 8 = 56), but it is hard to imagine where any potential values could be reported twice.
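Decomposing 57 into powers of two may clarify the arithmetic: a bitmask is read bit by bit, not by summing the composite values from the documentation (24 is just 8 | 16, so it should never be added on top of 8 and 16). A quick sketch:

```python
def decode_bits(mask):
    """Split a bitmask into its set power-of-two bits."""
    bits = []
    bit = 1
    while bit <= mask:
        if mask & bit:
            bits.append(bit)
        bit <<= 1
    return bits

print(decode_bits(57))  # [1, 8, 16, 32]
print(decode_bits(17))  # [1, 16] -- the documented "active + parameterized" example
```

So 57 = 1 + 8 + 16 + 32; the extra bit is 32, which is not among the values quoted above, and that is presumably the source of the confusion.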

What is the maximum replication factor for a partition of kafka topic


I have a Kafka cluster with 3 brokers and a couple of topics, each with 5 partitions. Now I want to set the replication factor for the partitions.

What is the maximum replication factor I can set for a partition of a Kafka topic?
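For context: each replica of a partition must live on a distinct broker, so the replication factor is capped at the number of brokers, and the broker rejects topic creation with a larger value (InvalidReplicationFactorException). A small sketch of that constraint, using the broker count of 3 from the question:

```python
def max_replication_factor(broker_count):
    """Each replica must sit on a different broker, so the replication
    factor cannot exceed the number of brokers in the cluster."""
    return broker_count

def validate_replication_factor(requested, broker_count):
    """Mirror the check the broker applies at topic creation time."""
    if requested > broker_count:
        raise ValueError(
            "Replication factor {} larger than available brokers {}".format(
                requested, broker_count))
    return requested

validate_replication_factor(3, 3)  # fine: one replica per broker
```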

Group Policy - replication snapshot


Has anyone seen a domain Group Policy stop a transactional replication snapshot from being configured and executed, with the error "A required privilege is not held by the client"?

To cut a long story short, I am having trouble getting much from our server admin guys. All they would do was create me a new server and add it to an OU with a filter to block the Group Policy; that got it working (using a domain user for the agent), but now I can't get anything out of them about the Group Policies being delivered.

So my question is: does anyone know the policy setting that could be blocking the creation of a snapshot?

Any help greatly appreciated. David


