MSReplPAL_9_1 and MStran_PAL_role roles in SQL Server
Could you tell me the purpose of these two roles in SQL Server: MSReplPAL_9_1 and MStran_PAL_role?
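As a starting point for investigating them, a hedged sketch that lists the members of these roles in a given database (the database name is a placeholder; the catalog views are standard):

USE MyPublishedDb;  -- hypothetical: run in the database where the roles exist
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r ON r.principal_id = drm.role_principal_id
JOIN sys.database_principals AS m ON m.principal_id = drm.member_principal_id
WHERE r.name IN (N'MSReplPAL_9_1', N'MStran_PAL_role');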
Can I replicate only data that was inserted after a specific date?
I have Microsoft SQL Server merge replication between two servers: server A, the publisher, which has a static IP, and server B, the subscriber, which is a cloud server on GoDaddy. I have limited storage on server B, so I decided to replicate only the data that has been inserted this year. I don't want to put a date condition on each article of the snapshot publication, so I am looking for a way to replicate only data inserted after a specific date.
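For reference, a hedged sketch of the per-article approach the question is trying to avoid repeating by hand: a merge row filter with a date condition (publication, article and column names are made up for illustration):

-- Apply a date row filter to one merge article; this would have to be
-- repeated for every article, which is what the question wants to avoid.
EXEC sp_changemergearticle
    @publication = N'MyMergePublication',          -- hypothetical
    @article = N'Orders',                          -- hypothetical
    @property = N'subset_filterclause',
    @value = N'[CreatedDate] >= ''2020-01-01''',   -- hypothetical column
    @force_invalidate_snapshot = 1,
    @force_reinit_subscription = 1;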
Artemis replication backup does not failback
I have three servers in "replication" HA mode: one is the master and the other two are slaves.
The master has "check-for-live-server" enabled, and the slaves have "allow-failback" enabled.
Failover works fine, but when the master becomes available again, the slave still does not fail back automatically, even after waiting at least 5 minutes.
The HA settings are below.
Master (192.168.102.55) broker.xml:

<connectors>
  <connector name="netty-connector">tcp://192.168.102.55:61616</connector>
</connectors>
<acceptors>
  <acceptor name="netty-acceptor">tcp://192.168.102.55:61616</acceptor>
</acceptors>
<cluster-user>user</cluster-user>
<cluster-password>password</cluster-password>
<broadcast-groups>
  <broadcast-group name="bg-group">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <broadcast-period>5000</broadcast-period>
    <connector-ref>netty-connector</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="dg-group">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>
<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>netty-connector</connector-ref>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>2</max-hops>
    <discovery-group-ref discovery-group-name="dg-group"/>
  </cluster-connection>
</cluster-connections>
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>
Slave (192.168.102.53) broker.xml:

<connectors>
  <connector name="netty-connector">tcp://192.168.102.53:61616</connector>
</connectors>
<acceptors>
  <acceptor name="netty-acceptor">tcp://192.168.102.53:61616</acceptor>
</acceptors>
<cluster-user>user</cluster-user>
<cluster-password>password</cluster-password>
<broadcast-groups>
  <broadcast-group name="bg-group">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <broadcast-period>5000</broadcast-period>
    <connector-ref>netty-connector</connector-ref>
  </broadcast-group>
</broadcast-groups>
<discovery-groups>
  <discovery-group name="dg-group">
    <group-address>231.7.7.7</group-address>
    <group-port>9876</group-port>
    <refresh-timeout>10000</refresh-timeout>
  </discovery-group>
</discovery-groups>
<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>netty-connector</connector-ref>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>2</max-hops>
    <discovery-group-ref discovery-group-name="dg-group"/>
  </cluster-connection>
</cluster-connections>
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
The Artemis version I am using is 2.11.0. Does anyone know what I could be doing wrong?
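One configuration element sometimes involved in pairing a specific backup with a specific live server under replication is group-name; purely as a hedged illustration (not a confirmed fix for this setup), the pairing would look something like this:

<!-- live broker: hypothetical pairing via group-name -->
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
      <group-name>pair-a</group-name>
    </master>
  </replication>
</ha-policy>
<!-- backup broker: carries the same group-name to pair with that live -->
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
      <group-name>pair-a</group-name>
    </slave>
  </replication>
</ha-policy>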
Behavior of a Raft cluster with too many disconnects
I am implementing replication for my database using the Raft algorithm. I have some issues dealing with disconnections; here's how I'm handling them so far:
- Heartbeat messages time out n times.
- The leader proposes a configuration change to exclude the node that timed out.
- Raft does its thing, and eventually the configuration change gets committed.
Now when the penultimate server goes down, the leader is unable to reach consensus to remove the node, but it still considers the server that went down to be part of the cluster (as it needs consensus to commit the removal) and keeps proposing the configuration change forever.
From a practical standpoint, should I crash, or should I cheat the consensus and let the leader commit its entry without approval from the majority?
Bootstrap bucardo replication after pg_restore
Currently I am setting up master/master replication with Bucardo between 5 nodes in different locations (this should provide location transparency). The database holds ~500 tables that should be replicated. I grouped them into smaller replication herds of at most 50 tables, based on their dependencies on each other. All tables have primary keys defined, and the sequences on each node are set up to provide system-wide unique identities (based on residue classes).
To get an initial database on each node, I made a --data-only custom-format pg_dump into a file and restored it on each node via pg_restore. The Bucardo syncs are set up with the bucardo_latest strategy to resolve conflicts. Now, when I start syncing, Bucardo first deletes all rows in the origin database and re-inserts them from one of the restored nodes, because all restored rows have a "later timestamp" (the point in time when I called pg_restore). This ultimately prevents the initial startup, as Bucardo needs a very long time and also fails, since there are lots of rows to resolve and the timeouts are often too short.
I also have 'last_modified' timestamps on each table which are managed by UPDATE triggers, but as I understand it, pg_dump inserts data via COPY, and therefore these triggers don't get fired.
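For context, a minimal sketch of the kind of last_modified UPDATE trigger described above (table and column names are assumptions; the real definitions are not shown in the question):

-- Hypothetical trigger that stamps last_modified on UPDATE, as described.
CREATE OR REPLACE FUNCTION set_last_modified() RETURNS trigger AS $$
BEGIN
    NEW.last_modified := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_last_modified
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE PROCEDURE set_last_modified();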
- Which timestamp does Bucardo use to find out which row is bucardo_latest?
- Do I have to call pg_dump with something like SET SESSION_REPLICATION_ROLE = 'replica';?
I just want Bucardo to keep track of every new change, not execute pseudo-changes caused by the restore.
EDIT: pg_restore has definitely fired several triggers at restore time... as said, I keep track of the user and last modification date in each table, and those values are set to the user and timestamp of when the restore was done. I am aware that I can set SESSION_REPLICATION_ROLE for a plain-text-format restore via psql. Is this also possible for pg_restore somehow?
Is there a way to add transformers to Kafka Strimzi MirrorMaker2?
Right now, I need to replicate some topics from one Kafka cluster to another, but in the second cluster I need the data in another format. We are using Strimzi in Kubernetes. In some connectors one can do something like this, but I am not sure if MirrorMaker2 lets us do it, since it is based on Kafka Connect:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: sample-connector
spec:
  class: com.sample.SampleConnector
  tasksMax: 2
  config:
    ...
    transforms: TimestampConversion,RectificationDateTimeConversion
    transforms.TimestampConversion.type: org.apache.kafka.connect.transforms.TimestampConverter$Value
    transforms.TimestampConversion.format: yyyy-MM-dd HH:mm:ss.SSS
    transforms.TimestampConversion.field: timestamp
    transforms.TimestampConversion.target.type: string
    transforms.RectificationDateTimeConversion.type: org.apache.kafka.connect.transforms.TimestampConverter$Value
    transforms.RectificationDateTimeConversion.format: yyyy-MM-dd HH:mm:ss.SSS
    transforms.RectificationDateTimeConversion.field: rectificationDateTime
    transforms.RectificationDateTimeConversion.target.type: string
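A hedged sketch of where such properties could be placed in a KafkaMirrorMaker2 resource, assuming the sourceConnector.config block accepts ordinary connector properties (the cluster aliases, bootstrap addresses and topic pattern below are placeholders, and the apiVersion depends on the Strimzi release). Whether the MirrorSourceConnector actually applies the SMT to mirrored records is exactly the open question here:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker2
metadata:
  name: sample-mirror-maker-2
spec:
  connectCluster: "target"
  clusters:
    - alias: "source"
      bootstrapServers: source-kafka-bootstrap:9092
    - alias: "target"
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      topicsPattern: "my-topic.*"
      sourceConnector:
        config:
          # hypothetical: same SMT properties as in the KafkaConnector example above
          transforms: TimestampConversion
          transforms.TimestampConversion.type: org.apache.kafka.connect.transforms.TimestampConverter$Value
          transforms.TimestampConversion.format: yyyy-MM-dd HH:mm:ss.SSS
          transforms.TimestampConversion.field: timestamp
          transforms.TimestampConversion.target.type: string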
In MySQL master-slave replication, which server requests the other? [migrated]
I want to set up MySQL replication between two servers: one of them is my localhost and the other is an online server. I am free to make either of them the master. But since my localhost server doesn't have a static IP, I need to know which of the two servers (master or slave) is the one that contacts the other to get updates flowing. Does the master send the binlog updates, or does the slave periodically request new updates? If it is the slave, I will make the slave my localhost. Thank you in advance.
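For illustration, this is the direction the configuration itself encodes: the slave is told where its master is and opens the connection outward, so only the master needs a reachable, stable address (host, user and coordinates below are placeholders):

-- Run on the slave; the slave's I/O thread connects out to the master
-- and requests binlog events starting from the given coordinates.
CHANGE MASTER TO
    MASTER_HOST = '203.0.113.10',        -- hypothetical master address
    MASTER_USER = 'repl_user',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;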
SQL Server Replication using RMO
We are using SQL Server replication via RMO (Replication Management Objects). We have SQL Server 2016 (Standard Edition) on the server acting as the publisher and SQL Server Express Edition as the subscriber.
Previously, the distributor and the publisher were on the same server and the replication was working.
We have a client application whose data needs to be synced with the server on a regular basis.
We have transactional and merge replication set up and rely on a pull approach, where the client application pulls the data on demand.
For security reasons, the client doesn't want to expose port 1433 (or any other port) on the publisher to the subscribers.
So, we decided to move the distributor to a remote server, so that the subscriber talks to the publisher via the remote distributor. (The remote distributor can connect and talk to the publisher.) However, I am getting an error when I try to sync.
I wanted to check whether replication is possible when port 1433 is blocked for the subscribers.
If yes, can you provide some sample code or pointers to it? If not, what other options do I have?
SQL Server Replication in Azure Data Studio
I've always used SSMS, but am considering switching to a Mac, so I've been exploring Azure Data Studio for my SQL Server needs. I have replication set up, and SSMS offers a nice Replication tab to monitor and manage replication. I can't find anything similar in Azure Data Studio, though. Does anyone know if it has something like this?
Push new data rows from staging to production [closed]
We currently have a staging environment and a production environment. Each month we receive data that needs to be processed and tested. Currently, this data is pushed to a staging environment where it is tested, and then a Python script is run which invokes a series of SQL stored procedures to push the data to the production environment. This has worked for quite some time, but as the client offering has changed, new data has been incorporated, etc., it has become sluggish and fails due to filled transaction logs and similar issues. I'm wondering if anyone has recommendations on other methods to push this data. I'm currently looking at using replication for this, since the schemas are exactly the same, but I can't seem to find a good guide on triggering it manually once testing is completed.
redis replication multiple masters to one single replica node
Currently I have 3 Redis nodes running as masters in standalone mode on 3 servers.
Each Redis instance is not aware of the other instances.
Is there a way to have a read-only replica of the three master instances on one single Redis node running on another server?
Looking at the Redis docs, I haven't found this possibility.
Thanks in advance
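For context, a sketch of the mechanism in question: a Redis replica is pointed at exactly one master, either in redis.conf or at runtime (the address is a placeholder):

# redis.conf on the would-be replica node (one master per replica)
replicaof 192.0.2.10 6379
replica-read-only yes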
SQL Server replication cleanup
I'm trying to sort out replication on a server that's been around since SQL Server 2000. It's been upgraded many times over the years and is now on SQL Server 2017.
Recently the distribution database had a growth spurt, and I tracked that down to a subscriber server that had been decommissioned without the subscription being removed. While scratching around I found a few linked servers that have also been decommissioned and whose subscriptions were not properly removed. There is no sign of them under any publication, yet I can't delete the linked server because
'Cannot drop server xyz because it is used as a Subscriber to....'
Question #1: how do I get rid of this server and any other orphaned entries related to this server?
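A hedged sketch of the usual cleanup order, run in the published database on the publisher ('xyz' is the decommissioned subscriber from the error message; the publication name is a placeholder and the call would be repeated per publication). Whether this clears every orphaned entry here is the open question:

USE MyPublishedDb;  -- hypothetical publication database
EXEC sp_dropsubscription
    @publication = N'MyPublication',   -- hypothetical; repeat for each publication
    @article = N'all',
    @subscriber = N'xyz';

-- Then retry dropping the linked server entry.
EXEC master.dbo.sp_dropserver @server = N'xyz', @droplogins = N'droplogins';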
Then... I also noticed that I have entries in the [MSrepl_transactions] table that date back to 2001. I am guessing it's part of the above problem but can't be sure. The distribution cleanup job doesn't seem to want to touch them.
Question #2: how do I get rid of these entries in [MSrepl_transactions]?
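For reference, a hedged sketch of invoking the same cleanup the scheduled job runs, directly against the distribution database, with the default 0 to 72 hour retention window; whether it will remove rows tied to the orphaned subscriber is part of the question:

USE distribution;
EXEC dbo.sp_MSdistribution_cleanup
    @min_distretention = 0,    -- hours
    @max_distretention = 72;   -- hours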
Facing an issue while setting up MySQL Group Replication with MySQL Docker images
Log of the node:
root@worker01:~# docker logs node1
[Entrypoint] MySQL Docker Image 8.0.21-1.1.17
[Entrypoint] Initializing database
2020-08-05T13:14:59.546377Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 23
2020-08-05T13:14:59.655627Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-08-05T13:15:35.222094Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-08-05T13:16:56.203484Z 0 [Warning] [MY-013501] [Server] Ignoring --plugin-load[_add] list as the server is running with --initialize(-insecure).
2020-08-05T13:17:56.492555Z 0 [ERROR] [MY-000067] [Server] unknown variable 'group-replication-start-on-boot=OFF'.
2020-08-05T13:17:56.493399Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2020-08-05T13:17:56.494797Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-05T13:18:46.000320Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.
[Entrypoint] MySQL Docker Image 8.0.21-1.1.17
[Entrypoint] Starting MySQL 8.0.21-1.1.17
2020-08-05T13:19:32.543042Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.21) starting as process 22
2020-08-05T13:19:32.580976Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-08-05T13:19:35.142750Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysqld: Table 'mysql.plugin' doesn't exist
2020-08-05T13:19:35.445862Z 0 [ERROR] [MY-010735] [Server] Could not open the mysql.plugin table. Please perform the MySQL upgrade procedure.
2020-08-05T13:19:35.502671Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2020-08-05T13:19:35.894268Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-08-05T13:19:36.590170Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-08-05T13:19:36.733128Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-08-05T13:19:36.734145Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2020-08-05T13:19:37.088526Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2020-08-05T13:19:37.090739Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist
2020-08-05T13:19:37.092141Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition.
2020-08-05T13:19:37.095750Z 0 [ERROR] [MY-010326] [Server] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
2020-08-05T13:19:37.096903Z 0 [ERROR] [MY-010952] [Server] The privilege system failed to initialize correctly. For complete instructions on how to upgrade MySQL to a new version please see the 'Upgrading MySQL' section from the MySQL manual.
2020-08-05T13:19:37.099193Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-05T13:19:38.715864Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.
root@worker01:~#
MySQL DB: should replication be paused before point-in-time recovery?
I have a MySQL DB and a replica. I want to perform a point-in-time recovery for the master. Should I stop replication, or is it OK to proceed as is?
Thanks
MySQL backup of incremental changesets
With a MySQL database, is there a way of logging and/or backing up incremental changesets?
We have a number of large databases (100-200 GB), for which we take daily backups using mysqldump. These backups are a snapshot of the entire database, which can be restored as a whole. However, taking these backups and restoring from them is not a quick process.
As such, I'd like to be able to take backups of incremental changes - i.e. a backup of what's changed in a given day. Then, assuming a database was in a state consistent with a previous snapshot, we could restore the set of incremental changes to get it to the same/current state.
To some extent, this could be done by operating on mysqldump files - i.e. comparing today's snapshot dump with the previous day's snapshot dump to create a .sql backup of commands to run to convert from the previous snapshot state to today's snapshot state. However, I'd rather not write this myself if there's already a better way of doing this.
Also, I'm conscious that, to some degree this is analogous to MySQL master-slave replication, with the exceptions that the changesets are grouped by time period (i.e. a 24hr window) and not automatically read into another database / server. Furthermore, the binary logs are for the entire database server, not for individual databases.
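As a concrete illustration of the binlog-based variant of this idea, a minimal sketch (it assumes the binary log is enabled; copying the rotated files off-host and replaying them with mysqlbinlog are left out):

-- Rotate the binary log at each daily backup boundary so the day's changes
-- end up in closed binlog files that can be archived and replayed later.
FLUSH BINARY LOGS;
SHOW BINARY LOGS;   -- lists the files, including the ones just closed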
Additional Comments:
With some further research on this, I've also come across the incremental backup functions in MySQL Enterprise, but we're not using the enterprise version.
Prevent replication of ALTER commands
I am using MariaDB 10.0 multi-source replication for a specific use case.
For security reasons, I would like to prevent schema-changing commands run on the master (such as CREATE, ALTER, DROP, ...) from being replicated, whatever user runs these commands (even root), but of course let SELECT, INSERT, UPDATE and DELETE commands replicate.
I do not want to use SET SQL_LOG_BIN=0|1 on the client side. In fact, I never want to replicate schema modifications.
In practice, I wish I could revoke ALTER permissions from my replication user (who currently has the REPLICATION SLAVE permission).
Is there a way to achieve this?
EDIT 2018-02-19
Since my requirements seem like nonsense to some readers, here is some additional information about this use case.
I created one (or more) MariaDB proxy database(s) with tables using the BLACKHOLE storage engine, so data is not stored on this proxy server, but binlogs are.
I have other MariaDB servers running the same database schema, but with the InnoDB storage engine, that replicate data from the proxy server(s) using MariaDB multi-source replication.
On the proxy server, I can safely recreate, for example, a table schema with a CREATE OR REPLACE TABLE mytable (id int) ENGINE=BLACKHOLE statement, as there is no data stored in it.
But this kind of statement MUST NOT run as is on the "slaves" (which are not real slaves, as you noticed), as they must keep their original storage engine and any other options they may have at the table level.
I can do this by issuing SET SQL_LOG_BIN=0 before executing my command, but I was looking for a way to make sure that I will not break the slaves in case I forget to do it.
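Spelled out, the manual pattern described above looks like this (the table is the question's own example):

-- On the proxy only: keep this DDL out of the binlog so the InnoDB
-- "slaves" never receive it. Forgetting the first line is the risk
-- the question wants to eliminate.
SET SESSION sql_log_bin = 0;
CREATE OR REPLACE TABLE mytable (id INT) ENGINE=BLACKHOLE;
SET SESSION sql_log_bin = 1;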
Get the Replication/Lag time of Redshift
I'm currently running Redshift with pushes of all data from our production PostgreSQL databases every 10 minutes or so. Periodically, the ETL process from PostgreSQL to Redshift gets delayed or backed up. Is there any way to monitor the lag time between Redshift and PostgreSQL?
Issues adding subscriptions and viewing them
I am working on generating scripts to create a transactional replica. It seems to be working: I created an item, and it gets replicated in the replica database. However, the subscriber is not showing on the server.
Both subscriber and publisher are on the same server. When I expand the Local Publications folder I can see the subscriber, but when I expand the Local Subscriptions folder, it doesn't show anything.
What could I be doing wrong? This is the script I am running:
use [mydatabase]
exec sp_addsubscription
    @publication = N'MySubscription',
    @subscriber = N'Server-BD001',
    @destination_db = N'my_rep_db',
    @subscription_type = N'Push',
    @sync_type = N'replication support only',
    @article = N'all',
    @update_mode = N'read only',
    @subscriber_type = 0
exec sp_addpushsubscription_agent
    @publication = N'MyPublication',
    @subscriber = N'Server-BD001',
    @subscriber_db = N'my_rep_db',
    @job_login = N'Server\Repuser',
    @job_password = 'Password',
    @subscriber_security_mode = 1,
    @frequency_type = 64,
    @frequency_interval = 1,
    @frequency_relative_interval = 1,
    @frequency_recurrence_factor = 0,
    @frequency_subday = 4,
    @frequency_subday_interval = 5,
    @active_start_time_of_day = 0,
    @active_end_time_of_day = 235959,
    @active_start_date = 0,
    @active_end_date = 0,
    @dts_package_location = N'Distributor'
GO
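As a hedged sanity check (not a fix): push subscriptions are created and stored at the publisher, so listing them there may show whether the subscription was actually registered:

use [mydatabase]
exec sp_helpsubscription @publication = N'MySubscription'
GO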
SQL Server transactional replication error - Agent message code 20084
I am trying to set up a test replication for a new server we got. I am getting the error below, have tried every recommendation, and nothing seems to solve it. I have another new server that is pretty much the same, with the same SQL Server version, which is:
Microsoft SQL Server 2019 (RTM) - 15.0.2000.5 (X64)
Sep 24 2019 13:48:23
Copyright (C) 2019 Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2019 Standard 10.0 <X64> (Build 17763: ) (Hypervisor)
I can create the same replication with the same database and same settings on the other server.
This is the message I am getting from job history:
2020-08-12 00:06:14.998 Copyright (c) 2016 Microsoft Corporation
2020-08-12 00:06:14.998 Microsoft SQL Server Replication Agent: distrib
2020-08-12 00:06:14.998
2020-08-12 00:06:14.998 The timestamps prepended to the output lines are expressed in terms of UTC time.
2020-08-12 00:06:14.998 User-specified agent parameter values:
-Publisher WIN-SERVERPUB
-PublisherDB kati_test_repl
-Publication kati_test_repl
-Distributor WIN-SERVERPUB
-SubscriptionType 1
-Subscriber WIN-SERVERSUB
-SubscriberSecurityMode 1
-SubscriberDB kati_test_repl
-Continuous
-XJOBID 0xEE00C4488A36D34BBBA31DF6C505C50F
-XJOBNAME WIN-SERVERPUB-kati_test_repl-kati_test_repl-WIN-SERVERSUB-kati_test_repl-43CBA723-24F1-44E6-8C74-37B0A2558C44
-XSTEPID 1
-XSUBSYSTEM Distribution
-XSERVER WIN-SERVERSUB
-XCMDLINE 0
-XCancelEventHandle 0000000000001E18
-XParentProcessHandle 0000000000001E1C
2020-08-12 00:06:14.998 Startup Delay: 6256 (msecs)
2020-08-12 00:06:21.264 Connecting to Subscriber 'WIN-SERVERSUB'
2020-08-12 00:06:21.264 Agent message code 20084. The process could not connect to Subscriber 'WIN-SERVERSUB'.
What I have already tried that didn't fix the problem:
- Added an alias in Configuration Manager on WIN-SERVERPUB for port 1433 (both 32-bit and 64-bit), pointing at the server name (WIN-SERVERPUB).
- Made sure the SQL Server Agent account is a db_owner (for the replicated database, and also tried for the system databases).
- Made sure it exists in the PAL (publication access list).
- Created a new Windows account on both publisher and subscriber, added a login for it on both servers, and gave it db_owner permissions.
- The hosts files are good on both servers, and each server can connect to the other using its computer name.
- Tried using Administrator as the agent process account.
- Reinstalled SQL Server.
- Restarted the server.
- Port 1433 is open.
I am lost; any suggestions? Thanks in advance!
How (if at all) does Galera enforce authentication for SST via rsync when adding a node?
I have to be missing something here.
It just hit me as I added a new node to my cluster in order to prepare for the removal of a different node: "How does the cluster know that it is okay to send the new node an SST?"
I am pretty sure that the only information the new node has about the cluster is the gcomm:// address. Surely that isn't looked at as "secure" information that passes for authentication. To my knowledge, no shell account on the new node has the same password as on the existing cluster nodes.
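Concretely, the joiner's cluster-specific configuration can be as small as this sketch (the provider path and addresses are placeholders), which is what prompts the question:

# my.cnf fragment on the new node (hypothetical values)
[mysqld]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_cluster
wsrep_cluster_address=gcomm://192.0.2.10,192.0.2.11,192.0.2.12
wsrep_sst_method=rsync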
So what would prevent anyone from spinning up a new node and pointing its gcomm:// address at one or more of my nodes to just get an SST and be able to see all of my data?
Of course, certificates will be put in place. But I'm talking about a default setup and how things work "out of the box." I couldn't find much of anything that talked about this out there.
Am I going nuts?