
PostgreSQL replica is in sync but repmgr says node unattached


I've set up a replica of a PostgreSQL instance using repmgr. The replica is fully in sync with the master instance and has been for several days, but repmgr tells me the replica isn't attached.
This is what the cluster looks like:

postgres@www:~$ repmgr cluster show
 ID | Name             | Role    | Status    | Upstream    | Location | Connection string
----+------------------+---------+-----------+-------------+----------+------------------------------------------------------------------
 1  | orig_master      | primary | * running |             | default  | host=MASTER user=USER dbname=repmgr connect_timeout=2
 2  | orig_slave       | standby |   running | orig_master | default  | host=SLAVE user=USER dbname=repmgr connect_timeout=2

This is what we get on the replica instance:

postgres@db:~$ repmgr node status
Node "orig_slave":
    PostgreSQL version: 9.6.11
    Total data size: 5603 MB
    Conninfo: host=SLAVE user=USER dbname=repmgr connect_timeout=2
    Role: standby
    WAL archiving: disabled (on standbys "archive_mode" must be set to "always" to be effective)
    Archive command: rsync -a %p barman@BARMAN:/mnt/volume/prod/incoming/%f
    WALs pending archiving: 0 pending files
    Replication connections: 0 (of maximal 6)
    Replication slots: disabled
    Upstream node: orig_master (ID: 1)
    Replication lag: 0 seconds
    Last received LSN: 63/BB0E3C10
    Last replayed LSN: 63/BB0E3C10

This clearly shows that we have no replication lag and the instances are in sync; the LSNs match too.
This is what I get on the master instance:

postgres@www:~$ repmgr node status
Node "orig_master":
    PostgreSQL version: 9.6.11
    Total data size: 5603 MB
    Conninfo: host=MASTER user=USER dbname=repmgr connect_timeout=2
    Role: primary
    WAL archiving: enabled
    Archive command: rsync -a %p barman@BARMAN:/mnt/volume/prod/incoming/%f
    WALs pending archiving: 0 pending files
    Replication connections: 1 (of maximal 6)
    Replication slots: disabled
    Replication lag: n/a

WARNING: following issue(s) were detected:
  - 1 of 1 downstream nodes not attached:
    - orig_slave (ID: 2)

HINT: execute "repmgr node check" for more details

This says the downstream node isn't attached, yet it clearly is connected!
Now, my question is: is there actually a problem in the replication process? If yes, what is it and how can I solve it? If not, how can I convince repmgr that there is no problem?

P.S.: PostgreSQL 9.6.11, repmgr 4.2
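
While debugging this, I also plan to cross-check repmgr against PostgreSQL itself. My understanding (possibly wrong) is that repmgr identifies attached standbys by the walreceiver's application_name in pg_stat_replication, so a mismatch between that and the registered node name could produce exactly this warning even while streaming is healthy. A minimal sketch of the check, assuming psycopg2 and the placeholder MASTER/USER values from the conninfo above:

    # Cross-check: list the replication connections the primary actually
    # sees, together with their application_name.
    import psycopg2

    conn = psycopg2.connect(host="MASTER", dbname="repmgr", user="USER")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT application_name, client_addr, state, "
            "       sent_location, replay_location "
            "FROM pg_stat_replication;"
        )
        for row in cur.fetchall():
            print(row)  # application_name should match the repmgr node name
    conn.close()

If a row shows up here with an application_name other than orig_slave, that mismatch, rather than a broken replication stream, might explain the warning.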


How did you set up MySQL replication with auto-failover that is app-transparent?


I was recently told that MySQL is shelving MySQL Fabric. I'm interested in how others have implemented a replicated MySQL environment that is transparent to the application.

I am considering using HAProxy to host a virtual IP address for the master and one for the slave pool, then using MySQL Failover to monitor the replication cluster and auto-promote a slave to master. Linux-HA or HAProxy would change which server the virtual IP address points to. This should work for both the master and the slaves.

We're primarily a PHP shop running MySQL databases on CentOS 7 Linux.
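
For reference, this is the kind of writability probe I picture the proxy/monitor layer calling to decide where the master VIP should point (a sketch using mysql-connector-python; host names and credentials are placeholders):

    # Probe: a node is eligible for the master VIP only if it is
    # reachable and not running in read-only mode.
    import mysql.connector

    def is_writable(host):
        conn = mysql.connector.connect(host=host, user="monitor",
                                       password="secret")
        cur = conn.cursor()
        cur.execute("SELECT @@global.read_only;")
        (read_only,) = cur.fetchone()
        cur.close()
        conn.close()
        return read_only == 0

    for host in ("db1.example", "db2.example"):
        print(host, "writable" if is_writable(host) else "read-only")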

SQL Server Replication, is it possible to have "upload only" synchronization on some articles?


Within SQL Server's merge replication, I know how to make a publication's articles download-only (with or without update restrictions on the subscriber database): the @subscriber_upload_options parameter of sp_addmergearticle, or the corresponding choices in SSMS's GUI.

What I would like to achieve, though, is the reverse behavior on some articles: records that originate on the subscriber's side travel up to the publisher during synchronization, but not the other way around, and updates made on the publisher are not synced back to the subscriber, while updates made on the subscriber do go up to the publisher. I.e., exactly the reverse of download-only articles.

Is this possible to achieve, either with "standard" replication configuration options or with some manual hacks?

How to auto-configure a MongoDB replica set


I want to create a MongoDB replica set. According to the documentation, I need to run something like the following on my first mongo instance to configure the replica set, and this works fine. However, I was wondering if there is a way to automate this process so that I don't have to ssh to the server and run this piece of code every time. I tried putting it in a config file, but it didn't work.

rs.initiate( {
 _id : "rs0",
 members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})
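
What I have in mind is an idempotent bootstrap step that initiates the set only when it isn't configured yet, so it could run on every deploy. A sketch of the idea using pymongo (host names as above; the error-code handling and the directConnection flag, which needs a recent pymongo, are assumptions on my part):

    # Bootstrap: run replSetInitiate only if the set is not yet initiated.
    from pymongo import MongoClient
    from pymongo.errors import OperationFailure

    config = {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "mongodb0.example.net:27017"},
            {"_id": 1, "host": "mongodb1.example.net:27017"},
            {"_id": 2, "host": "mongodb2.example.net:27017"},
        ],
    }

    # directConnection: talk to the single node before the set exists
    client = MongoClient("mongodb://mongodb0.example.net:27017",
                         directConnection=True)
    try:
        client.admin.command("replSetGetStatus")
        print("replica set already initiated")
    except OperationFailure as exc:
        if exc.code == 94:  # NotYetInitialized
            client.admin.command("replSetInitiate", config)
            print("replica set initiated")
        else:
            raise

Is something along these lines the usual approach, or is there a supported server-side config option I'm missing?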

MariaDB Galera Cluster initial rsync replication failing


I'm trying to install a new Galera cluster. The primary host started fine, but the secondaries are failing during the state transfer with rsync and are not starting. I haven't been able to fix the problem.

Here's the log:

Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld_safe: WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.V1VQNk' --pid-file='/var/lib/mysql/node2-recover.pid'
Mar 19 09:43:14 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:14 [Note] /usr/sbin/mysqld (mysqld 10.0.29-MariaDB-1~trusty-wsrep) starting as process 7936 ...
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld_safe: WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] /usr/sbin/mysqld (mysqld 10.0.29-MariaDB-1~trusty-wsrep) starting as process 7986 ...
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Read nil XID from storage engines, skipping position init
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_load(): Galera 25.3.19(r3667) by Codership Oy <info@codership.com> loaded successfully.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: CRC-32C: using hardware acceleration.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: Could not open state file for reading: '/var/lib/mysql//grastate.dat'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1, safe_to_bootsrap: 1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.0.109; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.recover = no; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; p
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: c.checksum = false; pc.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(00000000-0000-0000-0000-000000000000:-1)
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: wsrep_sst_grab()
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Start replication
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: protonet asio version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: Using CRC-32C for message checksums.
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: backend: asio
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: gcomm thread scheduling priority set to other:0 
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: restore pc from disk failed
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: GMCast version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: EVS version 0
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: gcomm: connecting to group 'test', peer '192.168.0.102:,192.168.0.104:,192.168.0.109:'
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 7bc8012e tcp://192.168.0.109:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Warning] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') address 'tcp://192.168.0.109:4567' points to own listening address, blacklisting
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 76006a4b tcp://192.168.0.102:4567
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
Mar 19 09:43:18 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:18 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection established to 7b77d168 tcp://192.168.0.104:4567
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: declaring 76006a4b at tcp://192.168.0.102:4567 stable
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: declaring 7b77d168 at tcp://192.168.0.104:4567 stable
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Node 76006a4b state prim
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: view(view_id(PRIM,76006a4b,3) memb {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #0117b77d168,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: save pc into disk
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: gcomm: connected
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Opened channel 'test'
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Waiting for SST to complete.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 2, memb_num = 3
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: sent state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 0 (node1)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 1 (node3)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: STATE EXCHANGE: got state msg: 7c16951b-0c88-11e7-ad86-7bb28eca7f7e from 2 (node2)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Quorum results:
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011version    = 4,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011component  = PRIMARY,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011conf_id    = 2,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011members    = 1/3 (joined/total),
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011act_id     = 7,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011last_appl. = -1,
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011protocols  = 0/7/3 (gcs/repl/appl),
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011group UUID = 760199b2-0c88-11e7-be7d-f2d7ea489521
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Flow-control interval: [28, 28]
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 7)
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: State transfer required: 
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011Group state: 760199b2-0c88-11e7-be7d-f2d7ea489521:7
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: #011Local state: 00000000-0000-0000-0000-000000000000:-1
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: New cluster view: global state: 760199b2-0c88-11e7-be7d-f2d7ea489521:7, view# 3: Primary, number of nodes: 3, my index: 2, protocol version 3
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Warning] WSREP: Gap in state sequence. Need state transfer.
Mar 19 09:43:19 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:19 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '192.168.0.109' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --defaults-group-suffix '' --parent '7986' --binlog '/var/log/mysql/mariadb-bin' '
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Member 1.0 (node3) requested state transfer from '*any*'. Selected 0.0 (node1)(SYNCED) as donor.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 rsyncd[8036]: rsyncd version 3.1.0 starting, listening on port 4444
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Prepared SST request: rsync|192.168.0.109:4444/rsync_sst
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: REPL Protocols: 7 (3, 2)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Assign initial position for certification: 7, protocol version: 3
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Service thread queue flushed.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (760199b2-0c88-11e7-be7d-f2d7ea489521): 1 (Operation not permitted)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011 at galera/src/replicator_str.cpp:prepare_for_IST():482. IST will be unavailable.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: Member 2.0 (node2) requested state transfer from '*any*', but it is impossible to select State Transfer donor: Resource temporarily unavailable
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Requesting state transfer failed: -11(Resource temporarily unavailable). Will keep retrying every 1 second(s)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Warning] WSREP: 0.0 (node1): State transfer to 1.0 (node3) failed: -141 (Unknown error 141)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Member 0.0 (node1) synced with group.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: declaring 76006a4b at tcp://192.168.0.102:4567 stable
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: forgetting 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Node 76006a4b state prim
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: view(view_id(PRIM,76006a4b,4) memb {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #0117b77d168,0
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: save pc into disk
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: forgetting 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: sent state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: got state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1 from 0 (node1)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: STATE EXCHANGE: got state msg: 7c8f8b36-0c88-11e7-8bd7-33c58acb74e1 from 1 (node2)
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Quorum results:
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011version    = 4,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011component  = PRIMARY,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011conf_id    = 3,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011members    = 1/2 (joined/total),
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011act_id     = 7,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011last_appl. = 0,
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011protocols  = 0/7/3 (gcs/repl/appl),
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: #011group UUID = 760199b2-0c88-11e7-be7d-f2d7ea489521
Mar 19 09:43:20 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:20 [Note] WSREP: Flow-control interval: [23, 23]
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Member 1.0 (node2) requested state transfer from '*any*'. Selected 0.0 (node1)(SYNCED) as donor.
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 7)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: Requesting state transfer: success after 2 tries, donor: 0
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: GCache history reset: old(00000000-0000-0000-0000-000000000000:0) -> new(760199b2-0c88-11e7-be7d-f2d7ea489521:7)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Warning] WSREP: 0.0 (node1): State transfer to 1.0 (node2) failed: -141 (Unknown error 141)
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [ERROR] WSREP: gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():736: Will never receive state. Need to abort.
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: terminating thread
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: joining thread
Mar 19 09:43:21 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:21 [Note] WSREP: gcomm: closing backend
Mar 19 09:43:22 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:22 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection to peer 7bc8012e with addr tcp://192.168.0.109:4567 timed out, no messages seen in PT3S
Mar 19 09:43:22 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:22 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting off
Mar 19 09:43:24 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:24 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') connection to peer 76006a4b with addr tcp://192.168.0.102:4567 timed out, no messages seen in PT3S
Mar 19 09:43:24 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:24 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://192.168.0.102:4567 
Mar 19 09:43:25 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:25 [Note] WSREP:  cleaning up 7b77d168 (tcp://192.168.0.104:4567)
Mar 19 09:43:25 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:25 [Note] WSREP: (7bc8012e, 'tcp://0.0.0.0:4567') reconnecting to 76006a4b (tcp://192.168.0.102:4567), attempt 0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: evs::proto(7bc8012e, LEAVING, view_id(REG,76006a4b,4)) suspecting node: 76006a4b
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: evs::proto(7bc8012e, LEAVING, view_id(REG,76006a4b,4)) suspected node without join message, declaring inactive
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: view(view_id(NON_PRIM,76006a4b,4) memb {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: #0117bc8012e,0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } joined {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } left {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: } partitioned {
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: #01176006a4b,0
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: })
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: view((empty))
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: gcomm: closed
Mar 19 09:43:26 vagrant-ubuntu-trusty-64 mysqld: 170319  9:43:26 [Note] WSREP: /usr/sbin/mysqld: Terminated.
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [ERROR] Parent mysqld process (PID:7986) terminated unexpectedly. (20170319 09:43:29.239)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [INFO] Joiner cleanup. rsync PID: 8036 (20170319 09:43:29.242)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 rsyncd[8036]: sent 0 bytes  received 0 bytes  total size 0
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld: WSREP_SST: [INFO] Joiner cleanup done. (20170319 09:43:29.751)
Mar 19 09:43:29 vagrant-ubuntu-trusty-64 mysqld_safe: mysqld from pid file /var/run/mysqld/mysqld.pid ended
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")'
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Mar 19 09:43:47 vagrant-ubuntu-trusty-64 /etc/init.d/mysql[8358]:

And here's the configuration file:

[mysqld]
transaction-isolation = READ-COMMITTED

key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1

max_connections = 1050
#expire_logs_days = 10
#max_binlog_size = 100M

log_bin=/var/lib/mysql/mysql_binary_log

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M

binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test"
wsrep_cluster_address="gcomm://192.168.0.102,192.168.0.104,192.168.0.109"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address=192.168.0.109
wsrep_node_name=node2

What are version vectors?


Folks, I am currently learning about distributed data systems via the book "Designing Data-Intensive Applications".

I think I have a pretty strong understanding of how version numbers in a single-replica system allow the server to detect concurrent writes*. The author starts with this example because once you understand the single-replica system, extending that understanding to a multi-leader or leaderless replicated system is supposed to be obvious, but it is not obvious to me at all.

How do version numbers work in a system where multiple replicas can handle write requests? In other words, what are version vectors?

* In a single replica system, each write is accompanied by a version number. This version number is the version of the data that the write is based off of. If a write is based on Version 1 of the data for that key, and Version 2 already exists, we know that the incoming write is concurrent with Version 2. The incoming write can only overwrite data that was in Version 1, since it does not know about the data in Version 2. For example, Version 1 is [eggs], Version 2 is [eggs] and [milk]. The incoming write wants to update this key to [eggs, bacon]. Version 3 of this key will become [eggs, bacon] and [milk]. The incoming write cannot overwrite [milk] since it didn't even know that [milk] was a value in the key.
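
To make my question concrete, here is how I imagine the multi-replica comparison might work, with one counter per replica instead of a single version number. This is my own sketch (quite possibly wrong), not something from the book:

    # Compare two version vectors, each mapping a replica id to the
    # number of writes that replica has processed for the key.
    def compare(a, b):
        keys = set(a) | set(b)
        a_ge = all(a.get(k, 0) >= b.get(k, 0) for k in keys)
        b_ge = all(b.get(k, 0) >= a.get(k, 0) for k in keys)
        if a_ge and b_ge:
            return "equal"
        if a_ge:
            return "a supersedes b"
        if b_ge:
            return "b supersedes a"
        return "concurrent"  # neither dominates: keep both as siblings

    print(compare({"r1": 2, "r2": 1}, {"r1": 1, "r2": 1}))  # a supersedes b
    print(compare({"r1": 2, "r2": 0}, {"r1": 1, "r2": 2}))  # concurrent

Is this dominance check essentially what a version vector is, or am I missing something fundamental?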

MySQL 5.6: explicit_defaults_for_timestamp


I have the following replication topology:

DB1 (MySQL 5.5) -> DB2 (MySQL 5.6, explicit_defaults_for_timestamp = 1) -> DB3 (MySQL 5.6, explicit_defaults_for_timestamp = 1)

- "date" field:

`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,

- DB3 replication error:

[ERROR] Slave SQL: Error 'Column 'date' cannot be null' on query. Default database: 'test'. Query: 'INSERT INTO test_log VALUES (null,'12345',12345,'test','saved')', Error_code: 1048

The reason why DB3 is failing is explained here:

No TIMESTAMP column is assigned the DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP attributes automatically. Those attributes must be explicitly specified.

I would like to understand why DB2 is working fine. I guess that's because it's replicating from MySQL 5.5, but which settings are responsible for this?

Update Wed 1 Oct 09:34:03 BST 2014:

Table definition match on all three servers:

mysql> SHOW CREATE TABLE test_log\G
*************************** 1. row ***************************
       Table: feedback_log
Create Table: CREATE TABLE `feedback_log` (
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `order_ref` varchar(32) NOT NULL,
  `id` int(11) NOT NULL,
  `version` varchar(12) NOT NULL,
  `event` varchar(60) NOT NULL,
  KEY `order_ref` (`order_ref`),
  KEY `id` (`id`),
  KEY `version` (`version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

SQL_MODE shouldn't be the culprit here:

  • DB1: None
  • DB2, DB3: NO_ENGINE_SUBSTITUTION

Summary:

I can't run this query manually on either 5.6 slave (DB2, DB3), but it is replicated successfully on DB2:

mysql [5.6.20-68.0-log]> INSERT INTO test_log VALUES (null,'12345',12345,'test','saved');
ERROR 1048 (23000): Column 'date' cannot be null

Another quick test showing this behaviour:

DB1

mysql [5.5.39-log]> CREATE TABLE t1 (date TIMESTAMP);
Query OK, 0 rows affected (0.20 sec)

mysql [5.5.39-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

DB2

mysql [5.6.20-68.0-log]> SELECT @@explicit_defaults_for_timestamp;
+-----------------------------------+
| @@explicit_defaults_for_timestamp |
+-----------------------------------+
|                                 1 |
+-----------------------------------+
1 row in set (0.00 sec)

mysql [5.6.20-68.0-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

DB3

mysql [5.6.20-68.0-log]> SELECT @@explicit_defaults_for_timestamp;
+-----------------------------------+
| @@explicit_defaults_for_timestamp |
+-----------------------------------+
|                                 1 |
+-----------------------------------+
1 row in set (0.04 sec)

mysql [5.6.20-68.0-log]> SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `date` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

What can make a SQL Server transactional publication send foreign keys even when "copy foreign keys" is set to false?


SQL Server 2012, 2014, 2016 transactional replication

  1. Publication is created. (copy Foreign Keys is false, the default)
  2. Subscription is created.
  3. Snapshot and sync.
  4. Turn off synchronization.
  5. Upgrade the publication database.
  6. Upgrade the subscriber database for tables affected by modified views.
  7. Set the snapshot to only gather information for changes.
  8. Restart sync.

There is now an error at the subscriber because the two new columns exist and the snapshot is trying to create them, but with foreign keys.

Typically it hasn't cared, but now it does because of the FK creation it wants to do. If I manually delete the two new columns, the sync creates them again, but with FKs.

The same operation has happened for other new fields, but we've never run into this issue before.

I'm looking to understand why FKs are being sent, and whether there is a workaround or setting.


MySQL - Enabling Scheduled Event on Master and Slave Simultaneously


This may seem a bit strange, but I am trying to get a Scheduled Event to execute on both Master and Slave.

I have two databases set up in a Master (A) to Master (B) replication environment.

Master A is READ_ONLY=OFF

Master B is READ_ONLY=ON

I create a user on both databases:

GRANT INSERT, EVENT ON test.* TO 'user'@'localhost' IDENTIFIED BY 'Password';

I then create my Event on Master A:

DROP EVENT `e_test`;
DELIMITER $$
CREATE DEFINER=`user`@`localhost` 
EVENT `e_test` 
ON SCHEDULE EVERY 1 MINUTE 
STARTS NOW() 
ON COMPLETION PRESERVE 
ENABLE 
COMMENT 'Adds a new row to test.tab1 every 1 minute' 
DO 
    BEGIN
        INSERT INTO test.tab1 (`fname`) VALUES (NOW());
    END;
$$

So far so good. It executes every minute, and adds an entry to the table, which replicates to the other Database.

However, on Master B it is marked as Slaveside_Disabled, and so doesn't execute.

If I do:

ALTER DEFINER=user@localhost EVENT e_test ENABLE;

on Master B, it starts to execute on Master B, but on Master A it is now flagged as Slaveside_Disabled, and so doesn't execute.

If I then enable it on Master A, Master B is Slaveside_Disabled.

The reason for wanting this (in case you were wondering) is that, as part of my failover script, I then simply need to execute SET GLOBAL READ_ONLY = { ON | OFF } on each database accordingly, as opposed to having to enable/disable all my events (one command vs. many commands).

Under normal circumstances, on Master A (READ_ONLY=OFF) the events execute as normal and add the entry; on Master B (READ_ONLY=ON) the events execute but don't insert an entry, as they don't have permission.

I looked at using SET GLOBAL EVENT_SCHEDULER = { ON | OFF } as the one command, but if I set it to OFF as the default, then we need to remember to enable it on each server restart; alternatively, if we set it to ON as the default, we need to remember to disable it on every server restart. The use of READ_ONLY seemed a better option, as it can easily be included in a failover script.
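
For reference, this is the state I keep checking on each server while experimenting (a sketch using mysql-connector-python; host names and credentials are placeholders):

    # Show whether each server considers the event ENABLED, DISABLED,
    # or SLAVESIDE_DISABLED.
    import mysql.connector

    for host in ("master-a.example", "master-b.example"):
        conn = mysql.connector.connect(host=host, user="user",
                                       password="Password")
        cur = conn.cursor()
        cur.execute("SELECT EVENT_SCHEMA, EVENT_NAME, STATUS "
                    "FROM information_schema.EVENTS "
                    "WHERE EVENT_NAME = 'e_test';")
        print(host, cur.fetchall())
        cur.close()
        conn.close()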

Any ideas?

MongoDB sharded cluster fails when one shard is down


I am using a Mongo sharded cluster with two shards.
There was an issue on one of my shards and it was down for around 30 minutes.
This stopped my writes to the other shard as well.
Logically, if one shard is down, the other shard should still be able to serve its part of the writes, but all writes failed during that period.

Command failed with error 133: 'could not find host matching read preference { mode: "primary", tags: [ {} ] } for set firstset' on server xxxx . The full response is { "code" : 133, "ok" : 0.0, "errmsg" : "could not find host matching read preference { mode: \"primary\", tags: [ {} ] } for set firstset" }

Could you please help me understand why this happened?

Mongo version: 3.2.9. Shard key: contentID (an alphanumeric value).

https://docs.mongodb.com/manual/sharding/#high-availability

Thanks
Virendra Agarwal

MySQL Semi-synchronous replication with Multi-Master


Is it possible to use semi-synchronous replication with a Multi-Master setup?

I've tried to follow this guide to setup a semi-synchronous replication for a master-slave setup: https://avdeo.com/2015/02/02/semi-synchronous-replication-in-mysql/

But I'm not sure how to implement this on a Multi-Master setup.

There are two plugins: one for the master and one for the slave. Since a multi-master node acts as both a master and a slave, does that mean I have to install both plugins on all servers?

I'm using MySQL 5.7

pg_repack slows down PostgreSQL replication


I have a master PostgreSQL 9.5 server and a standby server. For replication I use repmgr (WAL streaming). Typically the delay between master and standby is <5s:

$ psql -t -c "SELECT extract(epoch from now() - pg_last_xact_replay_timestamp());"
  0.044554

Periodically, pg_repack is invoked on the master to optimize indexes and tables. Repacking tables generates a massive amount of WAL traffic and significantly slows down replication, so the standby can end up more than an hour behind the master.

Is there a way to reduce this delay? Is it possible to synchronize newly incoming data with higher priority than the repack changes?

Unable to replicate a database


I set up master-slave replication, where the master has the following configuration:

datadir                = /mnt/DATADIR/

bind-address            = 0.0.0.0

binlog-ignore-db        = information_schema,mysql,performance_schema,sys
slave-skip-errors      = 1062,1452,1146
server-id              = 3
binlog-checksum        = none
log_bin                = /mnt/DATADIR/log-bin.log
expire_logs_days       = 10
max_binlog_size        = 100M

After restarting the master I check the binary log

mysql> SHOW MASTER STATUS;
+----------------+----------+--------------+-------------------------------------------------+-------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB                                | Executed_Gtid_Set |
+----------------+----------+--------------+-------------------------------------------------+-------------------+
| log-bin.000003 |      150 |              | information_schema,mysql,performance_schema,sys |                   |
+----------------+----------+--------------+-------------------------------------------------+-------------------+
1 row in set (0,00 sec)

I am running the following command on the slave to change the log position

SLAVE STOP; CHANGE MASTER TO MASTER_LOG_POS=150, MASTER_LOG_FILE='log-bin.000003'; SLAVE START;

The output on the slave

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: ...
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: log-bin.000003
          Read_Master_Log_Pos: 150
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 266
        Relay_Master_Log_File: log-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 150
              Relay_Log_Space: 422
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error:

Now, to test the replication, I create a dummy schema on the master:

mysql> create schema test4;
Query OK, 1 row affected (0,02 sec)

However, when I check the slave's status, I get an error:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: ...
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: log-bin.000003
          Read_Master_Log_Pos: 302
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 266
        Relay_Master_Log_File: log-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 1594
                   Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 150
              Relay_Log_Space: 574
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 1594
               Last_SQL_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
1 row in set (0,00 sec)

However, when I check the binary log file, everything seems fine:

mysqlbinlog log-bin.000003
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#160714  8:17:56 server id 3  end_log_pos 123   Start: binlog v 4, server v 5.7.12-0ubuntu1.1-log created 160714  8:17:56
# Warning: this binlog is either in use or was not closed properly.
BINLOG '
lC6HVw8DAAAAdwAAAHsAAAABAAQANS43LjEyLTB1YnVudHUxLjEtbG9nAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA
AOAGcMo=
'/*!*/;
# at 123
#160714  8:17:56 server id 3  end_log_pos 150   Previous-GTIDs
# [empty]
# at 150
#160714  8:27:46 server id 3  end_log_pos 211   Anonymous_GTID  last_committed=0    sequence_number=1
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 211
#160714  8:27:46 server id 3  end_log_pos 302   Query   thread_id=3 exec_time=0 error_code=0
SET TIMESTAMP=1468477666/*!*/;
SET @@session.pseudo_thread_id=3/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1436549152/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
create schema test4
/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

What could be the problem? The master version is

mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+-----------------------+
| Variable_name           | Value                 |
+-------------------------+-----------------------+
| innodb_version          | 5.7.12                |
| protocol_version        | 10                    |
| slave_type_conversions  |                       |
| tls_version             | TLSv1,TLSv1.1         |
| version                 | 5.7.12-0ubuntu1.1-log |
| version_comment         | (Ubuntu)              |
| version_compile_machine | x86_64                |
| version_compile_os      | Linux                 |
+-------------------------+-----------------------+

And the slave version is

mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+---------------------+
| Variable_name           | Value               |
+-------------------------+---------------------+
| protocol_version        | 10                  |
| version                 | 5.1.73-log          |
| version_comment         | Source distribution |
| version_compile_machine | x86_64              |
| version_compile_os      | redhat-linux-gnu    |
+-------------------------+---------------------+
5 rows in set (0,00 sec)

Is there some new setting which needs to be set in the 5.7 version to be compatible with 5.1?

A timeout occured after 30000ms selecting a server using CompositeServerSelector


I tried to deploy my Mongo database on Mongolabs; everything worked fine, and I created a new database. Please see my connection string:

    public DbHelper()
    {

        MongoClientSettings settings = new MongoClientSettings()
        {
            Credentials = new MongoCredential[] { MongoCredential.CreateCredential("dbname", "username", "password") },
            Server = new MongoServerAddress("ds011111.mongolab.com", 11111),
            //ConnectTimeout = new TimeSpan(30000) // note: this constructor takes ticks, so 30000 is 3 ms, not 30 s; TimeSpan.FromSeconds(30) is probably what was intended
        };

        Server = new MongoClient(settings).GetServer();

        DataBase = Server.GetDatabase(DatabaseName);

    }

But when I try to connect to the database, it shows an error like this:

(screenshot of the error: "A timeout occured after 30000ms selecting a server using CompositeServerSelector")

mongo replica is down


I have a mongo replica set with 6 nodes. Not sure why, but 4 of them went down, and the other two entered a recovery state. Each time I try to start the service on one of the down nodes, it fails with the error "there is no member to sync from", and the 2 recovering nodes are stuck in recovery. Can you please suggest what can be done? Thanks.


How do I fix a PostgreSQL 9.3 Slave that Cannot Keep Up with the Master?


We have a master-slave replication configuration as follows.

On the master:

postgresql.conf has replication configured as follows (commented line taken out for brevity):

max_wal_senders = 1            
wal_keep_segments = 8          

On the slave:

Same postgresql.conf as on the master. recovery.conf looks like this:

standby_mode = 'on'
primary_conninfo = 'host=master1 port=5432 user=replication password=replication'
trigger_file = '/tmp/postgresql.trigger.5432'

When this was initially set up, we performed some simple tests and confirmed that replication was working. However, when we did the initial data load, only some of the data made it to the slave.

The slave's log is now filled with messages that look like this:

< 2015-01-23 23:59:47.241 EST >LOG:  started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:47.241 EST >FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:52.259 EST >LOG:  started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:52.260 EST >FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:57.270 EST >LOG:  started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:57.270 EST >FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 000000010000000F00000052 has already been removed

After some analysis and help on the #postgresql IRC channel, I've come to the conclusion that the slave cannot keep up with the master. My proposed solution is as follows.

On the master:

  1. Set max_wal_senders=5
  2. Set wal_keep_segments=4000. Yes, I know that is very high (at the default 16 MB per WAL segment, that is roughly 64 GB of retained WAL), but I'd like to monitor the situation and see what happens. I have room on the master.

On the slave:

  1. Save the configuration files that live in the data directory (i.e. pg_hba.conf, pg_ident.conf, postgresql.conf, recovery.conf).
  2. Clear out the data directory (rm -rf /var/lib/pgsql/9.3/data/*). This seems to be required by pg_basebackup.
  3. Run the following command: pg_basebackup -h master -D /var/lib/pgsql/9.3/data --username=replication --password

Am I missing anything? Is there a better way to bring the slave up to date without having to reload all the data?
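
For the monitoring piece, this is the kind of check I have in mind (a sketch with psycopg2; the "slave1" host and credentials are placeholders, and the function names are the pre-10 ones that apply to 9.3):

    # Lag monitor: bytes of WAL the slave still has to replay, computed
    # on the master with pg_xlog_location_diff().
    import psycopg2

    master = psycopg2.connect(host="master1", dbname="postgres",
                              user="replication", password="replication")
    slave = psycopg2.connect(host="slave1", dbname="postgres",  # placeholder host
                             user="replication", password="replication")

    with master.cursor() as cur:
        cur.execute("SELECT pg_current_xlog_location();")
        master_lsn = cur.fetchone()[0]

    with slave.cursor() as cur:
        cur.execute("SELECT pg_last_xlog_replay_location();")
        slave_lsn = cur.fetchone()[0]

    with master.cursor() as cur:
        cur.execute("SELECT pg_xlog_location_diff(%s, %s);",
                    (master_lsn, slave_lsn))
        print("replication lag (bytes):", cur.fetchone()[0])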

Any help is greatly appreciated.

Replicating from MySQL 5.1 master to 5.6 slave failing because 'INSERT ... VALUES (NOW())' results in 'Error_code: 1062'


I am migrating away from some old MySQL 5.1 servers to some new MySQL 5.6 servers. During this process, I'm creating a new MySQL 5.6 slave from an existing MySQL 5.1 slave, using the procedure in the mysqldump reference guide.

For example, if my MySQL 5.1 servers are named 'master1' and 'replica1' and I have a new MySQL 5.6 server named 'replica2', the following should make replica2 a second slave of 'master1':

replica2 % mysqldump --login-path=replica1 --all-databases --dump-slave --include-master-host-port --apply-slave-statements --lock-all-tables  --add-drop-database > all.sql
replica2 % mysql < all.sql

And this seems to work well, but replication fails with the following error complaining about duplicate entries for the primary key:

2015-06-12 10:00:00 1234 [ERROR] Slave SQL: Worker 0 failed executing transaction '' at master log mysql-bin.009332, end_log_pos 12341234; Error 'Duplicate entry '8072' for key 'PRIMARY'' on query. Default database: 'DATABASE'. Query: 'INSERT INTO "Member" ("Created") VALUES (NOW())', Error_code: 1062

Can I assume that 'INSERT INTO "Member" ("Created") VALUES (NOW())' is triggering the error here? Can I get replication to work without skipping rows with SET GLOBAL sql_slave_skip_counter = 1;?

Some additional details:

  • I'm using classic MySQL replication, and GTIDs are currently disabled.
  • The MySQL 5.1 servers are using STATEMENT-based replication, but the new MySQL 5.6 servers are using ROW-based replication.
  • I don't own the application code, and I cannot change the SQL.

mysql - can a replica have a different primary key than the source?


I have a table with PRIMARY KEY (`id`) and I want to change it to PRIMARY KEY (`username`, `id`). These columns are defined as:

  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(20) NOT NULL DEFAULT '',

This table is within a primary/secondary MySQL topology with binary row replication. Can I get away with taking the replica offline, changing the primary key, and reconnecting it to the source, without changing the source? For clarity, only the primary-key index would differ between the source and the replica; all other columns and their order would be the same.

How to replicate a table in MySQL Cluster?

$
0
0

I am new to MySQL Cluster. I have a question: is there a way to replicate a table between data nodes in the same MySQL cluster?
As follows: I have a cluster with 2 data nodes and 2 SQL nodes:
node 1: 192.168.1.2 (data_1 and sql_1)
node 2: 192.168.1.3 (data_2 and sql_2)

I've created one database on the cluster consisting of four tables: T1, T2, T3, T4. T1 and T2 are located on node 1; T3 and T4 are located on node 2. Now I need to create a replica of table T1 on node 2, and a replica of table T3 on node 1.
I do not know how to do it; please help me.
Best regards.

Create hot standby on top of Barman streaming backup

$
0
0

I set up a Barman server that uses pg_basebackup and pg_receivewal for streaming replication. It is pretty much the same setup as described in the Barman documentation (scenario 1).

Is there a straightforward way to build a hot standby based on this setup? I would like to query the standby for data analysis.

I am using Barman 2.11 and PostgreSQL 12.4.
