Channel: StackExchange Replication Questions

CRITICAL: system ID mismatch, node belongs to a different cluster: 6859654378827691778 != 6859654951670505099

$
0
0

I installed a Patroni master node and need to create a pgBackRest replica. The master node's state is running, but the slave node is stopped and then disappears, apparently because it belongs to a different database identifier. When I run the restore command manually (pgbackrest --stanza=main --log-level-console=info --delta restore), the slave node's state changes to running.
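
For context on the error itself: the identifier Patroni compares comes from pg_control, so it can be read on each node with pg_controldata to confirm the mismatch (binary and data paths below are taken from the configs that follow; adjust if yours differ):

# Print the database system identifier on each node; the two values in the
# CRITICAL message come from this field and must match within one cluster.
/opt/pgsql/na/11.7/bin/pg_controldata /pgqdata/pgserver01/data/ | grep 'Database system identifier'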

here is the master postgresql.yml file:

scope: {{ obasicat }}
namespace: /pg_cluster/
name: {{ master }}

restapi:
    listen: {{ master_ip }}:8008
    connect_address: {{ master_ip }}:8008

etcd:
    host: {{ etcd_ip }}:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: {{ lag }}
    postgresql:
      use_pg_rewind: false
      use_slots: true

  method: pgbackrest
  pgbackrest:
    command: /home/osadmin/custom_bootstrap.sh
    keep_existing_recovery_conf: False
    no_params: False
    recovery_conf:
      recovery_target: immediate
      recovery_target_action: pause
      restore_command: pgbackrest --stanza={{ obasicat }} archive-get %f %p

  pg_hba:
  - host all         postgres    0.0.0.0/0      trust
  - host replication replicate {{ master_ip }}/0 md5
  - host replication replicate {{ slave_ip }}/0 md5
  - host all all 0.0.0.0/0 md5

  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: "*:5432"
  connect_address: {{ master_ip }}:5432
  data_dir: /{{opgbase}}/{{opgname}}/data/
  bin_dir: /opt/pgsql/na/11.7/bin/
  authentication:
    replication:
      username: replicate
      password: replicate
    superuser:
      username: postgres
      password: postgres

  create_replica_methods:
    - pgbackrest
  pgbackrest:
    command: pgbackrest --stanza={{ obasicat }} --delta restore --config=/etc/pgbackrest.conf --pg1-path=/pgqdata/pgserver01/data/ --log-level-console=info
    keep_data: True
    no_params: True

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

log:
    level: DEBUG
    dir: /tmp/

and this is the slave postgresql.yml file:

scope: {{ obasicat }}
namespace: /pg_cluster/
name: {{ slave }}

restapi:
    listen: {{ slave_ip }}:8008
    connect_address: {{ slave_ip }}:8008

etcd:
    host: {{ etcd_ip }}:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: {{ lag }}
    postgresql:
      use_pg_rewind: false
      use_slots: true

  method: pgbackrest
  pgbackrest:
    command: /home/osadmin/custom_bootstrap.sh
    keep_existing_recovery_conf: False
    no_params: False
    recovery_conf:
      recovery_target: immediate
      recovery_target_action: pause
      restore_command: pgbackrest --stanza={{ obasicat }} archive-get %f %p

  pg_hba:
  - host all         postgres    0.0.0.0/0      trust
  - host replication replicate {{ master_ip }}/0 md5
  - host replication replicate {{ slave_ip }}/0 md5
  - host all all 0.0.0.0/0 md5

  users:
    admin:
      password: admin
      options:
        - createrole
        - createdb

postgresql:
  listen: "*:5432"
  connect_address: {{ slave_ip }}:5432
  data_dir: /{{opgbase}}/{{opgname}}/data/
  bin_dir: /opt/pgsql/na/11.7/bin/
  authentication:
    replication:
      username: replicate
      password: replicate
    superuser:
      username: postgres
      password: postgres

  create_replica_methods:
    - pgbackrest
  pgbackrest:
    command: pgbackrest --stanza={{ obasicat }} --delta restore --config=/etc/pgbackrest_slave.conf --pg1-path=/pgqdata/pgserver01/data/ --log-level-console=info
    keep_data: True
    no_params: True

tags:
    nofailover: false
    noloadbalance: false
    clonefrom: false
    nosync: false

log:
    level: DEBUG
    dir: /tmp/

Any help would be appreciated. Thanks!


Configuring slave from EC2 master to Amazon RDS slave

$
0
0

I am configuring replication from an EC2 master to an Amazon RDS instance.

After starting the slave, I don't see any errors, but the slave I/O thread stays in a connecting state.

Master version: 5.6.23
Slave version: 5.6.19

show slave status \G

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: XXXXXXXXXXX
                  Master_User: XXXXXX
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: cli-bin.000032
          Read_Master_Log_Pos: 713
               Relay_Log_File: relaylog.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: cli-bin.000032
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table: mysql.plugin,innodb_memcache.cache_policies,mysql.rds_sysinfo,mysql.rds_replication_status,mysql.rds_history,innodb_memcache.config_options
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 713
              Relay_Log_Space: 618
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 0
                  Master_UUID:
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

show global variables

mysql> show global variables like '%old_passwords%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| old_passwords | 0     |
+---------------+-------+
1 row in set (0.01 sec)

mysql> show global variables like '%secure_auth%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| secure_auth   | OFF   |
+---------------+-------+
1 row in set (0.00 sec)

The problem is that Slave_IO_State stays at "Connecting": the slave I/O thread never finishes connecting, so replication is not happening.
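
When Slave_IO_Running sits at "Connecting" with Last_IO_Errno still 0, the usual suspects are network reachability (security groups, firewalls) and the replication account's credentials or host grants. A quick check from the replica side (host and user below are placeholders for the masked values above):

# From the slave host: confirm the master answers on 3306 and that the
# replication account can actually authenticate.
mysql -h <master_host> -P 3306 -u <repl_user> -p -e "SELECT 1;"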

Replicate_Do_DB and Replicate_Wild_Do_Table - Replication does not work

$
0
0

I have been tasked to set replication up on a slave host. The master database is a "data store" where tables are dropped, recreated, and reloaded on a daily basis.

My initial setup of the slave host worked fine, but replication was always days behind the master. After talking to the users of the "data store", I realized that not all of the databases and not all of the tables need to be replicated over. So this is what I did:

CI-DB002-PRD [root@localhost] ON (none)>  STOP SLAVE\G

CI-DB002-PRD [root@localhost] ON (none)> CHANGE REPLICATION FILTER REPLICATE_DO_DB = (bidw,cf2_fact,ct_fact,ez_fact,gt_fact,sfdc,soa_fact,tesla_fact,tmc_fact);

CI-DB002-PRD [root@localhost] ON (none)> CHANGE REPLICATION FILTER REPLICATE_WILD_DO_TABLE = ('bidw.domo%', 'bidw.consolidated%', 'bidw.cf2%', 'bidw.tesla%');

CI-DB002-PRD [root@localhost] ON (none)>  START SLAVE\G

Very quickly, replication caught up, and now the slave is 3-7 seconds behind the master.

But replication is not touching the databases I want replicated, even though data changes on the master every 10 minutes for the schemas listed above. I ran several validation queries, and the results now differ between master and slave. When I check the status of my slave, nothing stands out:

CI-DB002-PRD [root@localhost] ON (none)> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Host: 10.239.0.34
                  Master_User: ci02replicadb
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binary-log.009871
          Read_Master_Log_Pos: 14678596
               Relay_Log_File: ci-db002-prd-relay-bin.007914
                Relay_Log_Pos: 814824
        Relay_Master_Log_File: binary-log.009871
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: bidw,cf2_fact,ct_fact,ez_fact,gt_fact,sfdc,soa_fact,tesla_fact,tmc_fact,staging,phoenix,data_science
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table: bidw.domo%,bidw.consolidated%,bidw.cf2%,bidw.tesla%,bidw.ez%,bidw.ct%,bidw.gt%,bidw.tmc%,bidw.did%,bidw.Shortened%,bidw.other_revenue%,bidw.revenue%,bidw.advanced_cohort%
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 814626
              Relay_Log_Space: 14679056
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 22
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a485ab14-aa57-11ea-bef5-42010aef0022
             Master_Info_File: /data/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Reading event from the relay log
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.01 sec)

For "Replicate_Wild_Do_Table", I replicate only the tables listed there because the entire "bidw" database is lousy with junk tables.

What am I missing here? Am I not allowed to use both filters at the same time? I am on version 5.7.31.
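
One more variable worth checking while debugging this: under statement-based replication, Replicate_Do_DB is matched against the default (USE'd) database, not the database a statement actually writes to, while row-based replication matches the real target. So the binlog format on the master matters here:

-- Run on the master; STATEMENT and ROW give the filters different semantics.
SHOW GLOBAL VARIABLES LIKE 'binlog_format';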

OpenLDAP replication doesn't run after configuring

$
0
0

I'm practicing OpenLDAP replication with 2 Centos 6.9 64bit virtual machines.

The setup process went fine, but after I insert data into the master server, nothing happens on the slave.

MASTER SETTING

slapd.conf

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
updatedn "cn=Manager,dc=example,dc=com"
updateref ldap://192.168.1.11:389 

SLAVE SETTING

slapd.conf

syncrepl        rid=2
                provider=ldap://192.168.1.10
                type=refreshOnly
                interval=00:00:00:01
                searchbase="dc=example,dc=com"
                filter="(objectClass=*)"
                attrs="*"
                scope=sub
                schemachecking=off
                updatedn="cn=manager,dc=example,dc=com"
                bindmethod=simple
                binddn="cn=manager,dc=example,dc=com"
                credentials=secret
updateref       ldap://192.168.1.10

I'm using OpenLDAP 2.4.21, BerkeleyDB-4.8.
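
A standard way to check whether the consumer is syncing at all is to compare the contextCSN of the suffix on both servers; matching values mean they are in sync (addresses taken from the configs above):

# Provider
ldapsearch -x -H ldap://192.168.1.10 -b "dc=example,dc=com" -s base contextCSN
# Consumer
ldapsearch -x -H ldap://192.168.1.11 -b "dc=example,dc=com" -s base contextCSN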

Manual Binary logging in Amazon's Aurora

$
0
0

How would I enable binary logging in RDS? I get the following error:

Binary logging is disabled for MySQL server [1020418] (mysql_endpoint_capture.c:352)

What parameters do I need to change in order to turn binary logging on? The current values from a query are:

> show variables like '%log_bin%'

log_bin OFF
log_bin_basename    
log_bin_index   
log_bin_trust_function_creators OFF
log_bin_use_v1_row_events   OFF
sql_log_bin ON
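
In Aurora these cannot be changed with SET GLOBAL; binary logging is controlled by the binlog_format parameter in the DB cluster parameter group, and the cluster has to be rebooted for it to take effect. A sketch with the AWS CLI, assuming a custom cluster parameter group named my-aurora-params attached to the cluster:

# Hypothetical parameter group name; reboot the cluster afterwards so the
# pending change is applied and log_bin reports ON.
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-params \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=pending-reboot"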

How long can your replica be offline when using repmgr with Barman?

$
0
0

I'm setting up some postgres database EC2 instances that I'd like to use for load balancing. The application I'm running has some very expensive and very unique queries so CPU usage is a concern. While a single instance is OK most of the time, I'd like to quickly be able to spin up some read replicas when we're expecting to process a lot of transactions.

The issue is that it could be days or weeks between needing to bring up these machines. Since we're using repmgr with Barman, it's very quick to clone a server, but ideally we'd just like to start/stop instances as needed with little thought or overhead.

My question is: when a replica comes back online after being offline for a while, and the WALs on the primary have long vanished, is repmgr on the replicas smart enough to get the backup data from Barman? I would have initially thought no, except I had a replica offline for a week, brought it online, and when I checked the database it was in sync with the primary. pg_wal only had 2 days of WALs on the primary.

I do have restore_command='/usr/bin/barman-wal-restore barman node1 %f %p' but I thought that was more for initial cloning or recovery.
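
Not an answer to the retention question, but the Barman side of that restore_command can be verified from the standby with barman-wal-restore's test mode (assuming a barman-cli version that has --test; the trailing WAL-name and destination arguments are required by the parser but ignored in this mode):

# Check connectivity to the Barman host and that 'node1' is configured there.
/usr/bin/barman-wal-restore --test barman node1 DUMMY DUMMY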

Identify rows that replicate in a data frame

$
0
0

Please see below the dataset that I am working with:

  index d1_t1 d1_t2 d1_t3 d1_t4 d2_t1 d2_t2 d2_t3 d2_t4 d3_t1 d3_t2 d3_t3 d3_t4 d4_t1 d4_t2 d4_t3 d4_t4 d5_t1 d5_t2 d5_t3 d5_t4 d6_t1 d6_t2 d6_t3 d6_t4
   101     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1
   200     1     1     1     1     1     1     0     0     1     1     1     0     1     1     1     1     1     1     1     1     1     1     0     0
   200     1     1     1     0     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1
   101     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1

  d7_t1 d7_t2 d7_t3 d7_t4
    1     1     1     1
    1     1     0     0
    1     1     1     1
    1     1     1     1

A short explanation of the variables:

d1t1=Day 1 time 1
d1t2=Day 1 time 2
....
d2t1=Day2 time 1
d2t2=Day2 time 2

0,1= different types of measurements taken at a specific time

I would like to identify serials (index values) whose measurements are identical across the whole week.

Output:

  index d1_t1 d1_t2 d1_t3 d1_t4 d2_t1 d2_t2 d2_t3 d2_t4 d3_t1 d3_t2 d3_t3 d3_t4 d4_t1 d4_t2 d4_t3 d4_t4 d5_t1 d5_t2 d5_t3 d5_t4 d6_t1 d6_t2 d6_t3 d6_t4 d7_t1 d7_t2 d7_t3 d7_t4
1   101     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1     1

Sample data:

df <- structure(list(index = c(101, 200, 200, 101), d1_t1 = c(1, 1, 1, 1),
                   d1_t2 = c(1, 1, 1, 1), 
                   d1_t3 = c(1, 1, 1, 1), 
                   d1_t4 = c(1, 1, 0,  1),
                   d2_t1 = c(1, 1, 1, 1), 
                   d2_t2 = c(1, 1, 1, 1), 
                   d2_t3 = c(1, 0, 1 ,1), 
                   d2_t4 =c(1,0,1,1),
                   d3_t1 = c(1, 1, 1, 1),
                   d3_t2 = c(1, 1, 1, 1), 
                   d3_t3 = c(1, 1, 1, 1), 
                   d3_t4 = c(1, 0, 1,  1),
                   d4_t1 = c(1, 1, 1, 1), 
                   d4_t2 = c(1, 1, 1, 1), 
                   d4_t3 = c(1, 1, 1 ,1), 
                   d4_t4 =c(1,1,1,1),
                   d5_t1 = c(1, 1, 1, 1),
                   d5_t2 = c(1, 1, 1, 1), 
                   d5_t3 = c(1, 1, 1, 1), 
                   d5_t4 = c(1, 1, 1,  1),
                   d6_t1 = c(1, 1, 1, 1), 
                   d6_t2 = c(1, 1, 1, 1), 
                   d6_t3 = c(1, 0, 1 ,1), 
                   d6_t4 =c(1,0,1,1),
                   d7_t1 = c(1, 1, 1, 1), 
                   d7_t2 = c(1, 1, 1, 1), 
                   d7_t3 = c(1, 0, 1 ,1), 
                   d7_t4 =c(1,0,1,1)), row.names = c(NA,4L), class = "data.frame")
df
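
A minimal sketch of one way to get that output with base R: flag every row whose full pattern occurs more than once, then keep a single representative of each repeated pattern.

# Rows whose entire contents (index included) appear more than once:
dup <- duplicated(df) | duplicated(df, fromLast = TRUE)

# One representative per replicated row:
df[dup & !duplicated(df), ]

With the sample data this returns the single index-101 row shown in the desired output.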

In PostgreSQL 9.2, is archiving required for streaming replication?

$
0
0

Is it allowed and/or reasonable to configure a master PostgreSQL 9.2 server NOT to archive but still perform streaming replication? That is, configured as shown below:

wal_level = hot_standby
archive_mode = off

Can the "slave" server (hot standby), be configured to archive WAL segments?

wal_level = hot_standby
hot_standby = on
archive_mode = on

This would allow the archiving network traffic on the master server to be cut in half (replication but not archiving). It seems reasonable, and the documentation appears to support this configuration, but I'd prefer a bit of reassurance that we have a good configuration.
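
For reference, a sketch of a master configured this way (max_wal_senders is also required for streaming; wal_keep_segments gives standbys room to reconnect, which matters more when there is no archive to fall back on):

# Master: streaming replication without archiving (PostgreSQL 9.2).
wal_level = hot_standby
archive_mode = off
max_wal_senders = 3        # streaming needs at least one sender slot
wal_keep_segments = 32     # WAL retained for standbys that fall behind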


MySQL thinks Master & Slave have same server-id, but they don't

$
0
0

I am trying to set up MySQL one-way replication. I have the master set to server_id = 1, with a replication user set up and binary logging enabled; the slave has server_id = 2 and is connected and waiting for an event. I am also using Workbench.

However, the Master server has the following error...

Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; these ids must be different for replication to work (or the --replicate-same-server-id option must be used on slave but this does not always make sense; please check the manual before using it).

I can't figure out any reason for this error. I have looked at dozens of tutorials and manuals, and none of them explains what is happening.
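
A quick sanity check is to ask each running server which id it is actually using, since an edit to my.cnf only takes effect after a restart, and a server_id placed outside the [mysqld] section is silently ignored:

-- Run on both the master and the slave; the two values must differ.
SELECT @@server_id;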

Replicate selected postgresql tables between two servers?

$
0
0

What would be the best way to replicate individual DB tables from a master PostgreSQL server to a slave machine? It could be done with cron+rsync, with whatever PostgreSQL might have built in, or with some sort of OSS tool, but so far the Postgres docs don't seem to cover table-level replication. I'm not able to do a full DB replication because some tables have license-to-IP data tied to them, and I can't replicate those to the slave machine. I don't need instant replication; hourly would be acceptable as well.

If I do need to just rsync, can someone help identify which files within the /var/lib/pgsql directory would need to be synced, or how I would know which tables they belong to?
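
Rsyncing individual files under /var/lib/pgsql is not really workable: tables live in numeric filenode files that cannot be copied consistently while the server is running. Given that hourly is acceptable, one low-tech sketch (table and host names here are hypothetical) is to dump just the wanted tables and load them on the slave from cron:

# Hourly cron job: copy selected tables, leaving the licensed ones out.
pg_dump -h master-host -t public.orders -t public.customers --clean mydb \
  | psql -h slave-host mydb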

Replication error

$
0
0

I am getting an error connecting to the master (MySQL 5.1) from the slave (MySQL 5.7):

Slave I/O for channel '': error connecting to master 'replica@hostname' - retry-time: 60 retries: 6, Error_code: 2027

Please help me solve this error.

Unable to truncate transaction log, log_reuse_wait_desc - AVAILABILITY_REPLICA

$
0
0

This morning I was woken up by a transaction-log-full alert on one of our databases. This server is an AlwaysOn cluster node and also a transactional replication subscriber. I checked log_reuse_wait_desc and it showed LOG_BACKUP. Someone had accidentally disabled the log backup jobs 4 days earlier; I re-enabled the log backup job and the log got cleared. Since it was 4 AM, I thought I would go to the office later that morning and shrink the log, as it had grown to 400 GB.

10 AM: I'm in the office and I check the log usage before shrinking; it was around 16%. Surprised, I checked log_reuse_wait_desc again, which now showed REPLICATION. I was confused because this server is a replication subscriber. We then saw that the database was enabled for CDC and thought that might be the cause, so we disabled CDC, and now log_reuse_wait_desc shows AVAILABILITY_REPLICA.

Meanwhile, the log usage is still steadily growing and is at 17% now. I checked the AlwaysOn dashboard, and both the send and redo queues are virtually zero. I am not sure why the log reuse wait shows AVAILABILITY_REPLICA, and I am unable to clear the log.
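
For reference, the wait status above was read with the usual query (the database name is a placeholder):

-- What is currently preventing log truncation for this database?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';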

Any idea why this is happening?

SQL Server Merge Replication Error: The merge process could not replicate one or more INSERT statements to the 'Subscriber'

$
0
0

I have merge replication with a push subscription from the publisher. It hits an error and stops synchronizing; synchronization restarts automatically and hits the same error, so data doesn't get synced.


After getting this error, I checked the replication monitor, which displays the following error messages:


The merge process could not replicate one or more INSERT statements to the 'Subscriber'. A stored procedure failed to execute. Troubleshoot by using SQL Profiler. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200990)

Get help: http://help/MSSQL_REPL-2147200990

A query executing on Subscriber 'XXXXX' failed because the connection was chosen as the victim in a deadlock. Please rerun the merge process if you still see this error after internal retries by the merge process. (Source: MSSQL_REPL, Error number: MSSQL_REPL20245)

Get help: http://help/MSSQL_REPL20245

Cannot issue SAVE TRANSACTION when there is no active transaction. (Source: MSSQLServer, Error number: 628)

Get help: http://help/628

The process was successfully stopped. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200990)

Get help: http://help/MSSQL_REPL-2147200990

I am very new to replication; please let me know how to get rid of this error.

High-availability database that will re-sync a failed node by itself

$
0
0

I know that StackExchange questions are supposed to be about a specific problem, but I don't really know where else to ask this:

I am looking for a database that can be installed on 2 physical machines and that will repair a failed node once that node is back online.

The idea is that there is no main node: both nodes can be queried, data is replicated to the other node, and if one node is shut down, an automatic sync process runs when it comes back online to bring its store in line with the other one.

Thanks.

Postgres logical replication: db table grows indefinitely

$
0
0

I have a Postgres table (300 MB in size) which is logically replicated to another server. Until I made some changes, everything was perfectly good. Then the master started to grow (up to 2.5 GB, at roughly 15 MB per 5 minutes). I tried to tune WAL settings and do a WAL cleanup, but it didn't help.

What I have done before this issue was discovered:

  • Rebuilt a materialized view dependent on the master table many times (a heavy, CPU-consuming operation)

  • Added a new column on the master table and the slave table

  • Added a rule on inserts (copying a value from a json field to a char field)

What could have caused this issue?
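
One diagnostic worth running (a sketch, not a diagnosis): a logical replication slot that is inactive or far behind pins WAL and the xmin horizon, which both grows the master and stops VACUUM from reclaiming the dead rows the rebuilds above produce.

-- Any slot with active = f, or an old restart_lsn / xmin, is holding back
-- WAL removal and vacuum cleanup on the master.
SELECT slot_name, plugin, active, restart_lsn, xmin, catalog_xmin
FROM pg_replication_slots;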


Updating AUTO_INCREMENT value of all tables in a MySQL database

$
0
0

It is possible to set/reset the AUTO_INCREMENT value of a MySQL table via

ALTER TABLE some_table AUTO_INCREMENT = 1000

However, I need to set the AUTO_INCREMENT based on its existing value (to fix M-M replication), something like:

ALTER TABLE some_table SET AUTO_INCREMENT = AUTO_INCREMENT + 1

which is not working.

Ideally, I would like to run this query for all tables within a database, but that part is not crucial.

I could not find a way to deal with this problem except running the queries manually. Please suggest something or point me to some ideas.
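
One direction that might work, sketched here ('your_database' is a placeholder): let information_schema generate the per-table statements, then run the generated text as a script.

-- Produces one ALTER TABLE statement per table that has an AUTO_INCREMENT
-- value; execute the returned rows afterwards.
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
              '` AUTO_INCREMENT = ', AUTO_INCREMENT + 1, ';') AS stmt
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
  AND AUTO_INCREMENT IS NOT NULL;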

Thanks

MySQL in star topology

$
0
0

I have one central database with all the data, on MySQL 5.1 (latest stable).
I want to hook up multiple clients in a master-master relationship.

Question

How do I setup a star topology with 1 central server in the middle with multiple client-databases so that changes in one client get propagated first to the central server and from there to all the other client-databases?

Database info

I'm using InnoDB for all the tables and I've enabled the binary log.
Other than that, I've learned how to do master-master replication between two databases.
All tables have auto-increment integer primary keys, with the auto-increment offset and start tuned so that different client databases never have primary-key conflicts.
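
For reference, the interleaving scheme just described looks like this in each client's my.cnf (a sketch; an increment of 10 caps the topology at ten writing nodes):

# Client A
auto_increment_increment = 10   # stride shared by all writers
auto_increment_offset    = 1    # unique per node

# Client B
auto_increment_increment = 10
auto_increment_offset    = 2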

Why do I want this

I have client software (not a website or PHP) that connects to a local MySQL database on a laptop. This needs to sync to a central database, so that everyone using the program on their laptop sees the changes that other folks make.
I do not want to connect directly to the central database, because if the internet connection drops between the laptop and the central database, my application dies.
In this setup the application continues; the laptop just does not get updates from other people until the connection to the central database is reestablished.

Too many WAL files in archive folder PostgreSQL Streaming Replication

$
0
0

I have two PostgreSQL 9.5 servers configured in streaming replication. There are no errors in the log files and the servers seem to be in sync.

MASTER postgresl.config

archive_command = 'test ! -f mnt/server/archivedir/%f && cp %p mnt/server/archivedir/%f'
wal_keep_segments = 32

Everything else is pretty much default. And yet I have 1600+ files in the archive directory! Because of space constraints, I have resorted to deleting some of them. I would like to figure out what is causing this. I also see that the same directory on the slave holds only very old files.

The only thing I can think of is that I recently did a restore on a database: I dropped the database, created it, and did a backup restore, several times over. The slave seems to have caught up with the changes, but I have a ton of WAL files in the archive folder for each of the backup restores. I would appreciate some insight into what could be happening.
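
For what it's worth, nothing in the configuration shown ever removes archived segments, so the directory grows without bound. When the archive only feeds a standby, the stock pruning tool is pg_archivecleanup; a sketch for the standby's recovery.conf, with the path copied from the archive_command above:

# Remove archived WAL the standby no longer needs once its restartpoint
# has moved past it.
archive_cleanup_command = 'pg_archivecleanup mnt/server/archivedir %r'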

MSmerge_genhistory has a lot of rows with pubid = NULL

$
0
0

I have merge replication, and I am worried that the cleanup of metadata might not be enough. I have a retention period of 60 days, and I can see that the metadata cleanup job does remove rows in MSmerge_genhistory that are older, but only rows that have the right GUID in pubid. Most of the rows, about 1.6 million, have the value NULL in pubid, and I cannot figure out why. Does anybody know why there are so many NULL values?
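
For anyone reproducing this, the rows in question can be counted directly in the publication database:

-- Generations with no publication id (the rows the cleanup job appears to skip).
SELECT COUNT(*) AS null_pubid_rows
FROM dbo.MSmerge_genhistory
WHERE pubid IS NULL;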

Which is faster for reading data from a replication table: Python or SQL?

$
0
0

We have live SQL Server replication tables populated by GoldenGate, and users want almost-real-time data through reports. When we run SQL queries on those replication tables, they take a long time to return results. Is there any alternative way to read the data faster instead of SQL?
