Channel: StackExchange Replication Questions


AEM reverse replication and workflow after that


I have a requirement to upload files on a publisher, reverse-replicate them to the author, and forward-replicate them to another publisher. After reverse replication, I need to create a project in the AEM author and move the reverse-replicated assets to a project-specific location. So I have a custom launcher that calls my workflow class (which creates a project, copies the assets to the given location, and deletes them). This is all working fine.

There is a problem with this: I'm trying to avoid a race condition. When the launcher triggers, the project is created and whatever assets are available get moved, but not all assets have been reverse-replicated yet. Is there a way to wait until the reverse replication is fully complete, a condition I can write at the launcher level, or perhaps some kind of job consumer? I have a launcher at /content/vendor. I have enough info to create a project after the data node is created, but I need all the files to be moved to the project-specific location. Appreciate your help. Thanks. Attaching the structure of the files as a screenshot: [screenshot: JCR structure]

Issue with Merge Replication - Detect nonlogged agent shutdown


I have never worked with SQL Server replication and am now in an environment with SQL Server 2012 merge replication. On 3/29 and 3/30, 13 of 19 replication jobs are failing with "detect nonlogged agent shutdown" on the subscribers. The publisher and distributor are on the same server. Two publications are failing. Some of the errors are "The process could not connect to Subscriber" and some are "The merge process failed because it detected a mismatch between the replication metadata of the two replicas." I am able to successfully ping the publisher from the subscribers.

The following error is listed:

The process could not connect to Subscriber 'UP002'. Client unable to establish connection because an error was encountered during handshakes before login. Common causes include client attempting to connect to an unsupported version of SQL Server, server too busy to accept new connections or a resource limitation (memory or maximum allowed connections) on the server. (Source: MSSQLServer, Error number: 26) Client unable to establish connection (Source: MSSQLServer, Error number: 26)

All re-validations are failing. Do I need to recreate the subscribers? SQL Server Express is running on the subscribers.
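The generic "detect nonlogged agent shutdown" status usually hides a more specific error recorded by the agents themselves. One hedged place to look, assuming the distribution database has the default name:

USE distribution;

SELECT TOP (50) time, error_code, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;   -- most recent agent errors, usually more specific than the job status

For the metadata-mismatch errors specifically, it is worth checking whether those subscriptions have expired past the publication's retention period; an expired merge subscription generally has to be reinitialized rather than repaired.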

Postgres logical replication initial setup super-slow


I've set up a simple PostgreSQL 10 logical replication publication:

CREATE PUBLICATION active_directory_pub FOR TABLE active_directory.security_principal;

It's just a table with about 50,000 rows. However, when I try to subscribe to this publication from a separate database on the same host, the initial synchronization seems to take a very, very long time (hours, and still going).

Is this expected? Do I need to set up some indexes to speed things up? Are there options or pre-loading I can do to help it along?
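For a table of 50,000 small rows the initial COPY normally finishes in seconds, so it is worth checking whether the table-sync worker is progressing at all or is stuck (for example, waiting on a lock). A minimal check on the subscriber, using the standard Postgres 10 catalogs:

-- per-table sync state: i = initializing, d = copying data, s = synchronized, r = ready
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;

-- are the apply/sync workers alive and advancing?
SELECT subname, pid, received_lsn, latest_end_lsn
FROM pg_stat_subscription;

Indexes will not speed the initial copy up; if anything, heavy indexes on the subscriber slow it down, since they are maintained during the copy.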

Replication log agent blocking distributor agent TLOG stuck with log_reuse_wait_desc replication


I've got a nasty SQL Server 2012 replication issue. I have a transactional publication going to a pull subscriber. The log reader agent on the publisher has gotten stuck in a loop of "Delivering replicated transactions, xcount: 1" messages with an increasing command count, and "The replication agent has not logged a progress message in 10 minutes" messages. So the log reader seems to be working, but nothing is being replicated to the distribution database. All distribution agents report "The process is running and is waiting for a response", or they time out and try again.

When I run sp_WhoIsActive, I see that the SPID of the log reader agent for that database is blocking all the distribution agents for that database's subscribers, yet the status of the log reader process is "sleeping". The transaction log of the publication database is growing fast and eating up storage. It's stuck with log_reuse_wait_desc = REPLICATION, so I cannot shrink it.

Has anyone run into this before? Any suggestions on how to fix it? Appreciate the help. Thanks.
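A few hedged diagnostics to scope the problem (PubDB below is a placeholder for the publication database name):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'PubDB';      -- confirm REPLICATION is what is pinning the log

DBCC OPENTRAN ('PubDB');    -- shows the oldest non-distributed replicated transaction

EXEC sp_replcounters;       -- log reader throughput and latency per published database

If the log reader truly cannot drain (for example, a huge single transaction it keeps rescanning), sp_replcounters will show latency climbing while the replicated-transaction count barely moves.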

Setting up Oracle publishers in SQL Server replication


I am attempting to set up an Oracle publisher on a SQL Server 2008 R2 server, but I am getting the following error:

"Unable to run SQL*PLUS. Make certain that a current version of the Oracle client code is installed at the distributor. For addition information, see SQL Server Error 21617 in Troubleshooting Oracle Publishers in SQL Server Books Online. (Microsoft SQL Server, Error:21617)"

The information I have found states that an Oracle client must be installed and that ORACLE_HOME\bin must be in the PATH variable. I have verified that it is.

So far I have taken the following steps:

  • Installed the Oracle administrator client
  • Added the TNS_ADMIN environment variable
  • Added the ORACLE_HOME environment variable
  • Connected to the remote Oracle database from the distributor via SQL*Plus

I am hoping someone has run into similar errors in the past.
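One hedged thing to rule out: the PATH that matters is the one seen by the SQL Server service process, not your interactive session, and the service only picks up PATH changes after a restart. If enabling xp_cmdshell is acceptable in your environment, a quick check from the distributor:

EXEC xp_cmdshell 'echo %PATH%';   -- does the service's PATH contain ORACLE_HOME\bin?
EXEC xp_cmdshell 'sqlplus -V';    -- should print the Oracle client version

Also make sure the Oracle client bitness matches the SQL Server instance; a 64-bit instance cannot use a 32-bit client.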

Replicate data of some tables of a local sql server to a database in Azure


We have a SQL Server which runs on a machine in our office with a database and tables which get written to daily.

Due to some business requirements we need some of these tables replicated on to a database on Azure.

I have several ways to do this:

  • Create a job in our application which gets the data from the local SQL Server and writes it to the Azure DB via Entity Framework

  • SQL Server Replication (could this be used?)

  • Stretch Database (unfortunately not possible, because some of the tables apparently can't be stretched)

  • A linked server in the Azure DB with a view that selects from our local DB? (Is this possible?)

Have you had such requirements before, and what did you use to solve them?

What other options would you suggest?

Thanks in advance :-)

Additional info:

  • SQL Server v13.0.4435.0
  • Microsoft SQL Server Standard (64-bit) Edition
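On the replication option: transactional replication does work with Azure SQL Database, but only with Azure as a push subscriber (it cannot be a publisher or a pull subscriber), which fits this one-way scenario. A hedged sketch, with placeholder names throughout:

-- on the local publisher, after creating the publication 'LocalPub':
EXEC sp_addsubscription
     @publication       = N'LocalPub',
     @subscriber        = N'yourserver.database.windows.net',
     @destination_db    = N'AzureDb',
     @subscription_type = N'push';

The initial snapshot and the distribution agent job still have to be configured as for any push subscription; the Azure end just looks like a normal subscriber.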

Unable to start mysqld service after enabling log_bin


I am unable to start the mysqld service after making changes in my.cnf to enable log-bin. After the changes, the file looks like this:

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

#This is what I added to enable log-bin

log-bin=/var/log/mysql/
log-bin-index=bin-log.index
max_binlog_size=100M
binlog_format=row
socket=mysql.sock

#That was what I added to enable log-bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I am using:

[ec2-user@pip my.cnf.d]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

Following is the ownership and permissions:

d]$ ls -ld /var/log/mysql
drwxrwxrwx. 2 mysql mysql 23 Sep  8 09:01 /var/log/mysql

Following is the error that I get:

[ec2-user@i.d]$ sudo service mysqld start
Redirecting to /bin/systemctl start  mysqld.service
Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xe" for details.

MySQL version is:

Server version: 5.7.19 MySQL Community Server (GPL)

Please let me know if any other details are required.
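Two hedged observations about the added block: log-bin points at a bare directory (trailing slash, no file basename), and MySQL 5.7 refuses to start when binary logging is enabled without a server-id. A corrected sketch, keeping the paths from the question:

log-bin=/var/log/mysql/mysql-bin            # give the log a file basename, not a bare directory
log-bin-index=/var/log/mysql/mysql-bin.index
max_binlog_size=100M
binlog_format=row
server-id=1                                 # mandatory with log-bin on 5.7

The stray relative socket=mysql.sock line can be dropped, since the absolute one under datadir already covers it. On RHEL it is also worth checking journalctl -xe and the SELinux context of /var/log/mysql if the server still refuses to start.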


Changing primary node in a MySQL replication set


I have a new MySQL 5.7 replication set using a single primary and two slaves, which is working very well so far. We also have a set of scripts designed to write entries to the database, which are, obviously, pointing at the primary node.

If the primary node goes down for any reason, the cluster will elect a new primary. If/when the failed primary comes back, it comes back as a slave. So far, so good. However, all those scripts will stop working, as they're trying to write to what is now a read-only slave.

Short of taking a cluster outage and restarting in order, is there a way to force a re-election and/or change of primary to a designated node?

Several Google searches haven't shown anything, so I'm starting to believe it's not possible.
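Assuming this is Group Replication (or InnoDB Cluster) in single-primary mode: 5.7 has no online switchover command, but the election can be biased, and 8.0.13+ has a direct call. A hedged sketch (the member UUID is a placeholder):

-- MySQL 8.0.13 and later: promote a specific member online
SELECT group_replication_set_as_primary('3a9d45c2-0000-0000-0000-000000000000');

-- MySQL 5.7.20 and later: weight the election that runs when the current primary leaves
SET GLOBAL group_replication_member_weight = 80;  -- set highest on the preferred node

So on 5.7 the practical pattern is to give the designated node the highest weight, then stop group replication on the current primary and rejoin it, which triggers an election the designated node wins.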

Slave failing to replicate in MariaDB 10


I am not able to get row-based replication working.

I am trying to set up master-slave replication between 2 nodes running:

  • Server version 10.0.23-MariaDB-log
  • Protocol version 10
  • UNIX socket /var/lib/mysql/mysql.sock.

The scenario I have followed is this:

  1. Shut down prod (mc1), took a cold backup, and restored it on the slave (mc2).

  2. On the master, ran SHOW MASTER STATUS; it showed file prodarchivedlogs-bin.000001 and position 582240.

  3. Set this information on the slave.

  4. Started the slave; it started throwing this error:

160812 3:31:50 [ERROR] Slave SQL: Could not execute Update_rows_v1 event on table radius1.radacct; Can't find record in 'radacct', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log prodarchivedlogs-bin.000001, end_log_pos 587695, Gtid 0-1-79, Internal MariaDB error code: 1032

160812 3:17:18 [Warning] Slave: Can't find record in 'radacct' Error_code: 1032

160812 3:17:18 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'prodarchivedlogs-bin.000001' position 587072

I have tried every method I know of, but it is just not working. I tried using the mysqlbinlog tool to extract the events to a file and then applying them manually; even then it fails.

Any help will be greatly appreciated.

The prod my.cnf is as follows:

symbolic-links=0
innodb_buffer_pool_size=10G
innodb_data_home_dir= /var/lib/mysql/data2
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/redologs
innodb_log_file_size    = 256M
innodb_buffer_pool_size = 10000M
innodb_flush_method     = O_DIRECT
thread_stack    = 256K
max_connections=100000

expire_logs_days        = 5
max_binlog_size         = 100M

log-bin=/backup/archivedlogs/prodarchivedlogs-bin
binlog_format=row
server-id=1

The slave my.cnf is as follows:

skip-name-resolve
skip-slave-start
datadir=/var/lib/mysql/data1
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0

innodb_buffer_pool_size=10G
innodb_data_home_dir= /var/lib/mysql/data2
innodb_data_file_path=ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/redologs
innodb_log_file_size    = 256M
innodb_buffer_pool_size = 10000M
innodb_flush_method     = O_DIRECT
thread_stack    = 256K
max_connections=100000
expire_logs_days        = 5
max_binlog_size         = 100M
log-bin=/backup/archivedlogs/prodarchivedlogs-bin
binlog_format=row
server-id=2
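On the error itself: 1032 / HA_ERR_KEY_NOT_FOUND means the slave was asked to update a row it does not have, i.e. the restored snapshot and the binlog coordinates set on the slave do not line up exactly. The clean fix is to re-seed the slave from a backup taken together with its exact binlog (or GTID) position. Hedged stopgaps, if a little divergence is acceptable:

STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;   -- skip the single failing event
START SLAVE;

-- or ignore missing-row/duplicate-row errors wholesale (hides divergence, use with care):
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';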

Oracle Streams: how to exclude changes to sequences in schema replication


I have two DBs I want to synchronize using Streams.

So I followed the N-way replication guide. I have the following configuration:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'schema_A',
    streams_type   => 'capture',
    streams_name   => 'capture_A',
    queue_name     => 'strmadmin.captured_DBA',
    include_dml    => TRUE,
    include_ddl    => TRUE,
    inclusion_rule => TRUE);
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'schema_A',
    streams_type    => 'apply',
    streams_name    => 'apply_from_DBB',
    queue_name      => 'strmadmin.from_DBB',
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'DB_B',
    inclusion_rule  => TRUE);
END;
/

I also want to use the odd/even sequence trick so I don't get duplicate keys:

create sequence SEQ1 ... start with 1 increment by 2...; --on DB A
create sequence SEQ1 ... start with 2 increment by 2...; --on DB B

But since I have schema replication, if I run those two CREATE SEQUENCE commands I end up with SEQ1 START WITH 2 INCREMENT BY 2 on both DBs.

Is it possible to exclude changes to sequences in schema replication, or do I need to switch to table replication and specify all the tables one by one?
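For what it's worth, Streams does not capture sequence values in any case; what causes the clash here is the replicated CREATE SEQUENCE DDL. One hedged approach (a sketch, not verified on a live system) is a rule in the capture process's negative rule set that filters out sequence DDL, using the and_condition parameter of ADD_SCHEMA_RULES; :lcr binds to the DDL LCR in the generated rule:

BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name    => 'schema_A',
    streams_type   => 'capture',
    streams_name   => 'capture_A',
    queue_name     => 'strmadmin.captured_DBA',
    include_dml    => FALSE,
    include_ddl    => TRUE,
    inclusion_rule => FALSE,  -- FALSE adds the rule to the negative rule set
    and_condition  => ':lcr.get_object_type() = ''SEQUENCE''');
END;
/

With sequence DDL filtered out of capture, the odd/even CREATE SEQUENCE statements can be run locally on each database.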

Why did MongoDB replication lag increase on servers with more RAM?


Brief description of the deployment:
In our MongoDB cluster we have 12 shards, each a 3-host replica set.
We write about 1 million documents per minute (350 bytes per doc) into a new collection, which rotates every 24 hours.
The working set does not fit in RAM.
Replication lag tends to grow, so we have a module that pauses writes until the lag drains.

Problem:
Every server has 64 GB RAM, except 1 shard where all 3 servers have 128 GB RAM.
We did that to lower write IO, and it helped a lot: the working set fits better on that shard, and cache stats show the WiredTiger cache works better, with fewer evicted pages and especially fewer modified evicted pages. IO dropped by 30-40% on those servers. But replication lag also grew a lot: while the other shards average 5-6 seconds of lag, on the 1st shard we see 20 seconds (with peaks of 100 or more).

So, why does replication work worse on servers with more RAM?

[Screenshot: network activity of a regular secondary vs. a 128 GB secondary]

MySQL one-way replication: master lost data that the slave still had


A MySQL master (on Windows) had an improper shutdown. Many records could not be recovered, and investigation found that the slave had records the master did not. We did not restore over the master; we left it as it was. As if by magic, about 6 weeks later the missing data reappeared on the master, and now master and slave agree, with some lingering discrepancies. We're talking thousands of rows across many schemas and many tables within each schema. I'm looking for theories; data mining is not revealing much of anything. My first question: knowing MySQL tries to automatically recover data, could something have happened where MySQL cached it and then released it 6 weeks later? Has anybody encountered this, or are there any known issues like this?

Issue Using Percona-Toolkit to Add Index to Table Column


I need to be able to add an index to a column in a MySQL table. I am using MySQL Ver 14.12 Distrib 5.0.77 and Percona Toolkit 2.0.3. I can add the index with Percona Toolkit using:

pt-online-schema-change --alter "ADD INDEX server (server) USING BTREE" h=host1,D=customer,t=test_percona_restructure,u=user,p=password

This works fine on host1. The issue is that this change is not replicated over to host2 and host3, which is not desirable. Running the above on the other two instances would break replication in the environment.

Is there a way to achieve the above in real time, replicated across to hosts 2 and 3? From the documentation, pt-table-sync sounds like it could be close, but it seems you need to complete the schema change first and then sync, which is bad for me.

Any help would be great.
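For what it's worth, pt-online-schema-change's statements normally travel through the binary log like any other writes, so if host2 and host3 are ordinary replicas of host1 the index should replicate on its own; if it does not, binlog or replication filters (binlog-ignore-db, replicate-ignore-*) are the first thing to rule out. A hedged variant of the same command that also makes the tool discover the replicas and throttle on their lag:

pt-online-schema-change --alter "ADD INDEX server (server) USING BTREE" \
  --recursion-method=hosts --max-lag=5 \
  h=host1,D=customer,t=test_percona_restructure,u=user,p=password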

MongoDB Replica Set member cannot start due to "84 key/value already in index"


I have a replica set with 3 members (1 primary and 2 secondaries); one of the secondaries was working just fine and then threw the following error:

2018-03-12T11:03:54.868-0400 E REPL [repl writer worker 13] writer worker caught exception: :: caused by :: 84 key/value already in index on: { ts: Timestamp 1520867034000|8, h: 287037468373256260, v: 2, op: "u", ns: "Sitecore_analytics_PROD.Interactions", o2: { _id: BinData(3, EFC3DD1A15B75442986344FDE9CC71EB) }, o: { $set: { ContactVisitIndex: 2 } } }
2018-03-12T11:03:54.868-0400 I - [repl writer worker 13] Fatal Assertion 16360
2018-03-12T11:03:54.868-0400 I - [repl writer worker 13] ***aborting after fassert() failure

After that, MongoDB cannot start anymore and keeps throwing the same error.

How can I determine the root cause of it? Is there a way to prevent? How to solve it?


MariaDB Galera Cluster initial rsync replication failing


I'm trying to install a new Galera cluster. The primary host started fine, but the secondaries are failing during the rsync state transfer and are not starting. I haven't been able to fix the problem.

Here's the log:

Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld_safe: WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.V1VQNk' --pid-file='/var/lib/mysql/node2-recover.pid'
Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:14 [Note] /usr/sbin/mysqld (mysqld 10.0.29-MariaDB-1~trusty-wsrep) starting as process 7936 ...
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld_safe: WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] /usr/sbin/mysqld (mysqld 10.0.29-MariaDB-1~trusty-wsrep) starting as process 7986 ...
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Read nil XID from storage engines, skipping position init
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_load(): Galera 25.3.19(r3667) by Codership Oy <info@codership.com> loaded successfully.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: CRC-32C: using hardware acceleration.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: Could not open state file for reading: '/var/lib/mysql//grastate.dat'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootsrap: 1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.0.109; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; p
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: c.checksum = false; pc.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(00000000-0000-0000-0000-000000000000:-1)
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_sst_grab()
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Start replication
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: protonet asio version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Using CRC-32C for message checksums.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: backend: asio
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: gcomm thread scheduling priority set to other:0 
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: restore pc from disk failed
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: GMCast version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: EVS version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: gcomm: connecting to group 'test', peer '192.168.0.102:,192.168.0.104:,192.168.0.109:'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 7bc8012e tcp://192.168.0.109:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') address 'tcp://192.168.0.109:4567' points to own listening address, blacklisting
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 76006a4b tcp://192.168.0.102:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 7b77d168 tcp://192.168.0.104:4567
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: declaring 76006a4b at tcp://192.168.0.102:4567 stable
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: declaring 7b77d168 at tcp://192.168.0.104:4567 stable
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Node 76006a4b state prim
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: view(view_id(PRIM,76006a4b,3) memb {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #0117b77d168,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: save pc into disk
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: gcomm: connected
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Opened channel 'test'
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Waiting for SST to complete.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 3
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: sent state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 0 (node1)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 1 (node3)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 2 (node2)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Quorum results:
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011version    = 4,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011component  = PRIMARY,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011conf_id    = 2,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011members    = 1/3 (joined/total),
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011act_id     = 7,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011last_appl. = -1,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011protocols  = 0/7/3 (gcs/repl/appl),
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011group UUID = 760199b2-0c88-11e7-be7d-f2d7ea489521
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Flow-control interval: [28, 28]
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 7)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: State transfer required: 
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011Group state: 760199b2-0c88-11e7-be7d-f2d7ea489521:7
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011Local state: 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: New cluster view: global state: 760199b2-0c88-11e7-be7d-f2d7ea489521:7, view# 3: Primary, number of nodes: 3, my index: 2, protocol version 3
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Warning] WSREP: Gap in state sequence. Need state transfer.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '192.168.0.109' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '7986' --binlog '/var/log/mysql/mariadb-bin' '
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Member 1.0 (node3) requested state transfer from '*any*'. Selected 0.0 (node1)(SYNCED) as donor.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 rsyncd[8036]: rsyncd version 3.1.0 starting, listening on port 4444
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Prepared SST request: rsync|192.168.0.109:4444/rsync_sst
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: REPL Protocols: 7 (3, 2)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Assign initial position for certification: 7, protocol version: 3
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Service thread queue flushed.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (760199b2-0c88-11e7-be7d-f2d7ea489521): 1 (Operation not permitted)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011 at galera/src/replicator_str.cpp:prepare_for_IST():482. IST will be unavailable.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: Member 2.0 (node2) requested state transfer from '*any*', but it is impossible to select State Transfer donor: Resource temporarily unavailable
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Requesting state transfer failed: -11(Resource temporarily unavailable). Will keep retrying every 1 second(s)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: 0.0 (node1): State transfer to 1.0 (node3) failed: -141 (Unknown error 141)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Member 0.0 (node1) synced with group.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: declaring 76006a4b at tcp://192.168.0.102:4567 stable
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: forgetting 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Node 76006a4b state prim
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: view(view_id(PRIM,76006a4b,4) memb {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #0117b77d168,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: save pc into disk
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: forgetting 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: sent state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: got state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1 from 0 (node1)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: got state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1 from 1 (node2)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Quorum results:
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011version    = 4,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011component  = PRIMARY,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011conf_id    = 3,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011members    = 1/2 (joined/total),
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011act_id     = 7,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011last_appl. = 0,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011protocols  = 0/7/3 (gcs/repl/appl),
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011group UUID = 760199b2-0c88-11e7-be7d-f2d7ea489521
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Flow-control interval: [23, 23]
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Member 1.0 (node2) requested state transfer from '*any*'. Selected 0.0 (node1)(SYNCED) as donor.
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 7)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Requesting state transfer: success after 2 tries, donor: 0
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(760199b2-0c88-11e7-be7d-f2d7ea489521:7)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Warning] WSREP: 0.0 (node1): State transfer to 1.0 (node2) failed: -141 (Unknown error 141)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [ERROR] WSREP: gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():736: Will never receive state. Need to abort.
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: terminating thread
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: joining thread
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: closing backend
Mar 19 09:43:22 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:22 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection to peer 7bc8012e with addr tcp://192.168.0.109:4567 timed out, no messages seen in PT3S
Mar 19 09:43:22 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:22 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting off
Mar 19 09:43:24 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:24 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection to peer 76006a4b with addr tcp://192.168.0.102:4567 timed out, no messages seen in PT3S
Mar 19 09:43:24 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:24 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.0.102:4567 
Mar 19 09:43:25 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:25 [Note] WSREP:  cleaning up 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:25 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:25 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') reconnecting to 76006a4b (tcp://192.168.0.102:4567), attempt 0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: evs::proto(7bc8012e, LEAVING, view_id(REG,76006a4b,4)) suspecting node: 76006a4b
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: evs::proto(7bc8012e, LEAVING, view_id(REG,76006a4b,4)) suspected node without join message, declaring inactive
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: view(view_id(NON_PRIM,76006a4b,4) memb {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: view((empty))
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: gcomm: closed
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [ERROR] Parent mysqld process (PID:7986) terminated unexpectedly. (20170319 09:43:29.239)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [INFO] Joiner cleanup. rsync PID: 8036 (20170319 09:43:29.242)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 rsyncd[8036]: sent 0 bytes  received 0 bytes  total size 0
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [INFO] Joiner cleanup done. (20170319 09:43:29.751)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld_safe: mysqld from pid file /var/run/mysqld/mysqld.pid ended
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")'
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]:

And here's the configuration file:

[mysqld]
transaction-isolation = READ-COMMITTED

key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1

max_connections = 1050
#expire_logs_days = 10
#max_binlog_size = 100M

log_bin=/var/lib/mysql/mysql_binary_log

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M

binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test"
wsrep_cluster_address="gcomm://192.168.0.102,192.168.0.104,192.168.0.109"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address=192.168.0.109
wsrep_node_name=node2

Two-way MySQL database synchronization


I am developing an application whose database needs to be replicated in both directions across a number of offline local clients.

Here is a general explanation:

  • The user installs the client software on a personal computer.
  • The user syncs data from the remote database server into the local database.
  • The user can then work on this local database, performing inserts, updates, and deletes on it.
  • At the same time, other people can also insert, update, and delete on the remote database.
  • The user then connects to the remote database and commits their local changes to it.
  • The user retrieves the remote changes (made by other people) into their local database.

Right now the idea in my mind is to track the changes made in the remote and local databases and create a web service to replicate them, but I run into a problem with primary keys, which might collide between values generated in the local and remote databases. This approach is also very involved, and I doubt whether it will work in real time.

My question is: is there any technology or tool in MySQL Server that helps me achieve this without creating web services? I have read about MySQL replication, but it works in only one direction, i.e. master-slave; I need two-way synchronization.
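On the primary-key collision point specifically: MySQL's classic answer for multiple writers is to interleave auto-increment values so each side generates keys from a disjoint set. A hedged sketch for two servers (extendable to N writers by raising the increment):

-- on server A:
SET GLOBAL auto_increment_increment = 2;  -- step size = number of writers
SET GLOBAL auto_increment_offset    = 1;  -- A generates 1, 3, 5, ...

-- on server B:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;  -- B generates 2, 4, 6, ...

This removes the key-clash problem, but update conflicts between offline copies still need application-level resolution; MySQL's built-in master-master (circular) replication assumes mostly-connected servers and does not resolve conflicts for you.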

Mysqlfailover command - No slave is listed in health status


I have successfully created replication using GTID mode, and it works perfectly. Now I need to set up automatic failover. I ran the following command:

mysqlfailover --master=root:abc@10.24.184.12:3306 --discover-slaves-login=root:abc

I got the following results; no slave is listed:

MySQL Replication Failover Utility
Failover Mode = auto     Next Interval = Tue May

Master Information
------------------
Binary Log File   Position  Binlog_Do_DB  Binlog
mysql-bin.000016  9568

GTID Executed Set
8fe8b710-cd34-11e4-824d-fa163e52e544:1-1143

Replication Health Status
0 Rows Found.
Q-quit R-refresh H-health G-GTID Lists U-UUIDs U

But when I execute the mysqlrplcheck and mysqlrplshow commands, the slave is listed.

Is this normal?
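A hedged guess at the cause: --discover-slaves-login works off the master's SHOW SLAVE HOSTS output, which stays empty unless each slave announces itself. Worth checking:

SHOW SLAVE HOSTS;   -- run on the master; it should list your slaves

-- if it is empty, add to each slave's my.cnf and restart (values are placeholders):
-- report_host = 10.24.184.13
-- report_port = 3306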

Live Database Sync. MySQL - SQL Server


I have a rather rare scenario; is there any possibility of achieving the target? Here is the situation:

I have developed a responsive site for a supermarket using PHP with the Foundation framework, backed by MySQL. It is for promoting their products. They already have billing software, developed in WPF with a SQL Server back end. Now they want to include online shopping on the website, and they need to reduce stock in the local software according to online sales.

I can use replication if both databases are the same, but in this case the local DB is SQL Server and the live DB is MySQL. How can I achieve syncing of the live and local DBs?
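One pragmatic bridge, sketched here with placeholder names: a SQL Server linked server to the live MySQL database over ODBC, plus a scheduled job that applies online sales to the local stock. This assumes a system ODBC DSN (MySQLDsn) pointing at the MySQL server and a hypothetical online_sales table:

EXEC sp_addlinkedserver
     @server     = N'MYSQL_LIVE',
     @srvproduct = N'MySQL',
     @provider   = N'MSDASQL',
     @datasrc    = N'MySQLDsn';   -- system ODBC DSN for the live MySQL database

-- e.g. pull online sales into the local SQL Server for stock reduction:
SELECT product_id, qty
FROM OPENQUERY(MYSQL_LIVE, 'SELECT product_id, qty FROM online_sales');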

Replication in peer-to-peer method


For replication methods, we have:

  1. Master-Slave
  2. Peer-to-Peer

(Based on the "Big Data Fundamentals" book.) The image in that book shows no communication between peers. And brianstorti.com also says:

Remember, these replicas are not talking to each other

So, I wonder: how do these peers become consistent if a client changes the data on one peer? How does this method handle writes or updates?
Does the client handle them, or, if data changes on one peer, does the peer itself propagate the change to the others?
