Channel: StackExchange Replication Questions

Mongodb users in replica


1) I have MongoDB instances A, B and C, each with a different set of users and roles. I have used the mongo shell to connect to A as an admin user. Now, to create a replica set and add B and C, do I also need to authenticate against B and C? Can you please help me with the commands?

2) With the same instances A, B and C, each having its own users and roles: after I create a replica set from A, B and C, what happens to the users of the individual nodes? Are all the users from each node available in the replica set?
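For reference, a minimal sketch of the usual sequence when internal authentication is enabled (assumptions: the members share a keyFile, the set is named "rs0", and the hostnames, ports and paths are placeholders, not values from the question):

# start each mongod with the same replica set name and shared key file
mongod --replSet rs0 --keyFile /etc/mongo-keyfile --auth --port 27017

# from a mongo shell connected to A, authenticated as a user with cluster admin rights
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "A:27017" } ] })
rs.add("B:27017")
rs.add("C:27017")

With the keyFile in place the members authenticate to each other directly, so rs.add() is run only on the primary. Regarding question 2: in a replica set, users live in the replicated admin database, so once the set is formed the members share the primary's users rather than keeping their separate local ones.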


MariaDB slave replication stopped after slave upgrade to 10.1.41


After the MariaDB 10.1.41 update on 01/Aug/2019, a few of our slaves stopped syncing relay logs from the master, which is on 10.1.40. The slaves that were automatically updated to 10.1.41 now show the status below: the slave I/O thread stays in the "Preparing" state and the relay logs are not being written. I did a RESET SLAVE and issued CHANGE MASTER with the positions again, but I still get the same result. The other slave servers, which are on MariaDB 10.1.40, 10.1.33, etc., are running as normal; only the ones upgraded to 10.1.41 have the issue.

Does anyone have any clue on this?
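For reference, the reset that was attempted was along these lines (a sketch; the replication user is a placeholder, and the file/position values are the ones visible in the status output below):

STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
  MASTER_HOST='a.b.c.d',
  MASTER_USER='repl_user',
  MASTER_LOG_FILE='xxxxxxxxxxxxxx-bin.001135',
  MASTER_LOG_POS=262316421;
START SLAVE;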

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: NULL
                  Master_Host: a.b.c.d
                  Master_User: xxxxxxxx
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: xxxxxxxxxxxxxx-bin.001135
          Read_Master_Log_Pos: 262316421
               Relay_Log_File: xxxxxxxxxxxxxxxxxxxxx-bin.003410
                Relay_Log_Pos: 4
        Relay_Master_Log_File: xxxxxxxxx-bin.001135
             Slave_IO_Running: Preparing
            Slave_SQL_Running: Yes
              Replicate_Do_DB: xxxxxxxxx
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 262316421
              Relay_Log_Space: 498
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 111
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
                   Using_Gtid: No
                  Gtid_IO_Pos: 
      Replicate_Do_Domain_Ids: 
  Replicate_Ignore_Domain_Ids: 
                Parallel_Mode: conservative

PostgreSQL replica takes up too much space


postgresql 10.14@ubuntu18.04

I have a primary database and a replicated (standby) database. The problem is that the directory containing the replica is far too large: about 7 TB, while the primary's directory is 1.5 TB. But when I run du inside these directories, the results look almost the same. Only the total size of the directory differs, and I cannot find the cause.

The result of du is below.

For the primary database:

du -sh database
1.5T    database

Inside the primary's data directory:

du -sh *
4.0K    PG_VERSION
1.5T    base
456K    global
4.0K    pg_commit_ts
4.0K    pg_dynshmem
16K pg_logical
28K pg_multixact
12K pg_notify
12K pg_replslot
4.0K    pg_serial
4.0K    pg_snapshots
4.0K    pg_stat
4.0K    pg_stat_tmp
12K pg_subtrans
4.0K    pg_tblspc
4.0K    pg_twophase
273M    pg_wal
80K pg_xact
4.0K    postgresql.auto.conf
4.0K    postmaster.opts
4.0K    postmaster.pid

For the replica database:

du -sh database
7.2T    database

Inside the replica's data directory:

du -sh *
4.0K    PG_VERSION
1.5T    base
456K    global
4.0K    pg_commit_ts
4.0K    pg_dynshmem
16K pg_logical
28K pg_multixact
12K pg_notify
12K pg_replslot
4.0K    pg_serial
4.0K    pg_snapshots
4.0K    pg_stat
4.0K    pg_stat_tmp
12K pg_subtrans
4.0K    pg_tblspc
4.0K    pg_twophase
5.8T    pg_wal
80K pg_xact
4.0K    postgresql.auto.conf
4.0K    postmaster.opts
4.0K    postmaster.pid
4.0K    recovery.conf

Is there any way to find the reason? How can I reduce the amount of data the replica holds? Thanks!
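Judging from the du output above, the difference is almost entirely pg_wal (5.8T on the replica versus 273M on the primary), i.e. the standby is retaining WAL. A hedged sketch of queries, valid on PostgreSQL 10, that can help find what is holding the WAL back — for example a replication slot used by a cascading standby, a very large wal_keep_segments, or a failing archiver:

-- run these on the replica
SELECT slot_name, slot_type, active, restart_lsn FROM pg_replication_slots;
SELECT archived_count, failed_count, last_failed_wal FROM pg_stat_archiver;
SHOW wal_keep_segments;
SHOW archive_mode;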

High-Availability (HA) Software/Solution for Emails/Data/Database from Mail/Web/DNS Servers in Different Geo Locations


What SECURE, open-source and free software/solution (cluster/HA, sync/replication, load-balancing, etc.) would be best to make mail/web/DNS server data, emails, databases and services reliably available from 3 nodes located in 3 different geographic locations?


SERVER info:
I'm using 3 servers "s1", "s2", "s3" located in 3 different geo-locations "zL1", "zL2", "zL3" to provide mail services, web services, etc.
(Hardware config is not identical: s1 and s2 each have 1 GB RAM, s3 has 2 GB; storage: s1 - 15 GB, s2 - 10 GB, s3 - 20 GB, etc.
Each uses the same software: Debian 10/Buster, etc.
Network resources: each has 1 IPv4 address, 1 IPv6 /64 subnet and 1 IPv6 /48 subnet, so lots of IPv6 addresses but only one IPv4 address.
I'm using the domain name "example.com".)

OBJECTIVES: I must keep the 3 geo-location zones' groups of users' emails/databases/private data, etc., SEPARATED on their own local-zone server, AND create a 4th group of users whose data/emails/database, etc., is accessible to them over a high-availability (HA) connection from any one of those 3 servers/nodes.


CURRENT CONFIG:
I'm currently running manually configured mail servers: "s1.example.com" (IPv4 address) on s1, "s2.example.com" (IPv4 address) on s2, "s3.example.com" (IPv4 address) on s3, and also "m1.example.com" (IPv6 address) on s1, "m2.example.com" (IPv6 address) on s2, and "m3.example.com" (IPv6 address) on s3.
s1+m1 users' emails/db/private data, etc., must not be replicated to any of the other servers (s2/m2, s3/m3); s2+m2 users' data on s2 must not be replicated to s1+m1 or s3+m3; and so on.
I also need to create and run a third mail server, "mx.example.com" (IPv6 address), on each of s1, s2 and s3, where mx (mixed/multi-zone) mail users must be able to access and send email via any zone's primary mail server: s1/m1, s2/m2 or s3/m3.
That means the "mx" mail server must be available and running on every node (s1, s2, s3), and s1/m1, s2/m2 and s3/m3 must be able to RELAY email to "mx.example.com".
"mx" users' emails/db/private data, etc., must be replicated to every server (s1, s2, s3), because these users have provided more than one address, in multiple local zones.

I'm also running web-server services, etc., which are configured to detect the user's client IP address and redirect them to their nearest local-zone server/service (s1, s2 or s3).

WHICH (HA/REPLICATION,MANUAL-CONFIG,etc) SOLUTION FOR DNS:
To achieve the above, I'm currently doing the following, but I will (and want to) change/adapt it based on suitable suggestions:
s1, s2 and s3 need to be the nameservers for my primary domain "example.com", so the forward/reverse DNS entries are currently the same on each server (DNS data is therefore always available, even when 2 servers are down, because of round-robin). Can and should I move the DNS functionality inside cluster/HA SSI (Single System Image) software, or is there a better solution?


WHICH (HA/REPLICATION,MANUAL-CONFIG,etc) SOLUTION FOR MAIL:
Currently the multi-zone / multi-locality ("mx") mail-server functionality is not implemented and not working, as those users all have to use the same-location server s3, because I have not loaded any cluster/HA software yet. So please read my next few sections (requirements and requests) and suggest suitable software and configuration. Can I load Webmin+Virtualmin inside a cluster/HA SSI and create "mx.example.com"? Would emails/databases then be auto-replicated (multi-master, master-to-master) by the cluster/HA SSI software, or what other software is suitable/usable?
To me this (Galera Cluster + MariaDB) looks like a good option for "mx" users, but I'm not sure about the other factor: is a second mail-server instance needed for the "mx" users, or can the existing mail server on each node be configured further?

As each host node (s1, s2, s3) is already running various services (for example, mail-server services), can a second set of the same daemons/services (with a different config) be run inside the logical node created by the cluster/HA/SSI software, without conflicting with the existing services running on the host node?


SECURE (HA/REPLICATION) SOFTWARE/SOLUTION:
Which tool/software or configuration can force all node-to-node (N2N) communication of the cluster/HA/sync/replication software to use strong encryption (AES-256, RSA-16384 or similar ciphers/algorithms)?
If OpenSSH suffices, then please show the configuration that allows the HA/replication software to use an SSH tunnel.
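For illustration only, a minimal sketch of one common pattern for the SSH-tunnel case — forwarding database replication traffic through an OpenSSH local port forward; the hostnames, ports and the use of MariaDB replication are assumptions, not requirements taken from the question:

# on s1: keep a tunnel open that exposes s2's MariaDB port (3306) locally as 3307
ssh -N -L 3307:127.0.0.1:3306 tunneluser@s2.example.com

# on s1's MariaDB: point replication at the local end of the tunnel
CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307,
  MASTER_USER='repl', MASTER_PASSWORD='********';
START SLAVE;

All replication traffic then travels inside the SSH session, so the cipher policy is whatever sshd is configured to accept.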

Whatever software/solution you recommend, can you please link its HTTPS (docs/man/wiki) website and/or source-code site?

To me, ALL USERS' and my own SECURITY and PRIVACY are the MOST IMPORTANT and HIGHEST-PRIORITY objective; I'm completely OK with something being slow because of that.
If a piece of software cannot use encryption that will remain safe for the next 10 to 20 years for all N2N communication (and for all node-to-end-user-client communication),
or cannot be configured to use existing encrypted tunnels/connections,
then PLEASE avoid suggesting it, as it will be unusable for my case. Thanks in advance for your kind consideration. (Such suggestions may still be helpful, or a last choice, for other users who do not want, or cannot apply, this level of security and privacy.)


AT WHICH LEVEL WILL THE (HA/REPLICATION) SOLUTION WORK?
Please specify, point out or link to documentation on the level at which your suggested software works, and what changes are needed to install, configure and use it.


SOFTWARE BUILD/DEV LANGUAGE: I'm OK with software that is based on C, C++, etc.

On these small/tiny servers I want to avoid software that itself requires a large amount of memory, so please no Java/JVM-based or similar solutions.

SOFTWARE-INFO:
List of cluster software, Comparison of cluster software, Comparison of SSI, Virtual synchrony.

Maximum number of subscriber nodes in PostgreSQL logical replication?


Does anyone know the maximum number of subscriber nodes with PostgreSQL 11 logical replication?

Derby DB : Replication Error


I am trying to set up replication between two Derby databases. Derby is embedded in an application. Replication is configured as per the documentation here.

When the application tries to start the master instance, I see the following message. There is some exchange happening on the replication port between master and slave, but the master quits with the message. Per the documentation, before starting the master, the database is copied to the slave location, and no transactions happen on the master after that. What does the message (mismatch in log instants) mean? What could I be missing?

 Lost DB connection URL= jdbc:derby:db/NMS;startMaster=true;slaveHost=10.200.10.66 + REASON= The log files on the master and slave are not in synch for replicated database 'db/NMS'. The master log instant is 1:200335, whereas the slave log instant is 1:196097. This is fatal for replication - replication will be stopped.:
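For reference, the documented order of operations with Derby's built-in replication is to start the slave first (against the copied database) and then the master; a minimal sketch using only connection URLs (the slave host is the one from the error above, and the port is Derby's default replication port, assumed rather than taken from the question):

on the slave machine, against the copied database:
jdbc:derby:db/NMS;startSlave=true;slaveHost=10.200.10.66;slavePort=4851

then on the master machine:
jdbc:derby:db/NMS;startMaster=true;slaveHost=10.200.10.66;slavePort=4851

The "log instant" values in the message are positions in the transaction log; the error typically indicates that the master's log has moved past the point at which the slave copy was taken.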

FATAL: could not connect to the primary server: expected authentication request from server, but received S


I am trying to create a master-slave replication scenario in PostgreSQL. I have created a master on port 5300 and a slave on port 5500. Here is my postgresql.conf:

listen_addresses = '*'
port              = 5300
wal_level         = hot_standby
max_wal_senders   = 3
wal_keep_segments = 8
synchronous_standby_names = 'slave1'

I have created pg_hba.conf with the following configuration:

# "local" is for Unix domain socket connections only
local   all             all                                        trust
# IPv4 local connections:
host    all             all                127.0.0.1/32            trust
# IPv6 local connections:
host    all             all                ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                        trust
host    replication     all                127.0.0.1/32            trust
host    replication     all                ::1/128                 trust
host    replication     <database-user>    127.0.0.1/32            md5
host    replication     <database-user>    172.16.217.185/32       md5
host    replication     <database-user>    172.16.217.187/32       md5

Then I created the cluster with the following command:

initdb

and then I started my master cluster with the following command:

bin/pg_ctl -w -l master/logs -o "-p 5300" start

That command runs fine. Then I created the slave folder, copied all the master data into it, and moved the folder to the slave side. After configuring it, I created recovery.conf

and put the following in it.

standby_mode = 'on'
primary_conninfo ='host=172.16.217.185 port=5300 user=<db_user> password=<Password> application_name=slave1'

Then I started my slave:

pg_ctl -w -l slave/logs -o "-p 5500" start

That gives me an error. When I look at slave/logs, I see the error I have shared above:

FATAL: could not connect to the primary server: expected authentication request from server, but received S

Note that I am running this whole scenario in VMware Fusion.

One more thing: I have tried this with the firewall enabled, disabled, and allowing everything; all of them give the same result, so it is not the firewall causing this error.

Thanks in advance.
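Not the procedure used above, but for comparison: a common way to build the standby is pg_basebackup, which copies the data directory over the replication protocol and, with -R, writes recovery.conf for you (host, port and user below are the ones from the question's primary_conninfo):

pg_basebackup -h 172.16.217.185 -p 5300 -U <db_user> -D slave -R -X stream -P

and then start the standby on port 5500 as before.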

Error while setting up a replica set on Windows (MongoDB)


I tried setting up a replica set on Windows, but I'm getting the error below. I checked for running mongo processes and killed the existing one, but the error still occurs.

Please help me with this. Thanks.

commands executed:

C:\MongoDB\bin>mongo
MongoDB shell version: 2.6.6
connecting to: test
> cfg = {
... _id : "cri",
... members : [
... { _id:0, host:"INline1.corp:27001"},
... { _id:1, host:"INline2.corp:27002"},
... { _id:2, host:"INLN3.corp:27003"}
... ]
... }
{
      "_id" : "cri",
      "members" : [
              {
                      "_id" : 0,
                      "host" : "INline1.corp:27001"
              },
              {
                      "_id" : 1,
                      "host" : "INline2.corp:27002"
              },
              {
                      "_id" : 2,
                      "host" : "INLN3.corp:27003"
              }
      ]
}
> rs.help()
> cfg
{
      "_id" : "cri",
      "members" : [
              {
                      "_id" : 0,
                      "host" : "INline1.corp:27001"
              },
              {
                      "_id" : 1,
                      "host" : "INline2.corp:27002"
              },
              {
                      "_id" : 2,
                      "host" : "INLN3.corp:27003"
              }
      ]
}
> // rs.initiate(cfg)
> rs.initiate
function (c) { return db._adminCommand({ replSetInitiate: c }); }
> rs.initiate( cfg )
{ "ok" : 0, "errmsg" : "server is not running with --replSet" }
> rs.initiate()
C:\MongoDB\bin>

Error:

2015-02-22T00:48:57.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:48:58.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:48:59.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:00.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:01.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:02.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:03.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:04.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:05.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:06.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:07.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:08.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:09.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:10.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:11.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:12.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:13.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:14.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:15.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:16.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:17.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:18.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:19.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:20.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:21.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:22.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:23.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:24.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:25.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:25.916+0530 Ctrl-C signal
2015-02-22T00:49:25.916+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T00:49:25.916+0530 [consoleTerminate] now exiting
2015-02-22T00:49:25.917+0530 [consoleTerminate] dbexit:
2015-02-22T00:49:25.917+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T00:49:25.917+0530 [consoleTerminate] closing listening socket: 452
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T00:49:25.919+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T00:49:25.919+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T00:49:25.925+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T00:49:25.926+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T00:49:25.926+0530 [consoleTerminate] journalCleanup...
2015-02-22T00:49:25.927+0530 [consoleTerminate] removeJournalFiles
2015-02-22T00:49:25.929+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T00:49:25.929+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>mongo
MongoDB shell version: 2.6.6
connecting to: test
2015-02-22T00:52:06.973+0530 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively refused it.
2015-02-22T00:52:06.977+0530 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146 exception: connect failed

C:\MongoDB\bin>mongod
mongod --help for help and startup options
2015-02-22T00:52:14.995+0530 [initandlisten] MongoDB starting : pid=2668 port=27017 dbpath=\data\db\ 64-bit host=INLN50838607A
2015-02-22T00:52:14.996+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T00:52:14.996+0530 [initandlisten] db version v2.6.6
2015-02-22T00:52:14.997+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T00:52:14.997+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1')BOOST_LIB_VERSION=1_49
2015-02-22T00:52:14.997+0530 [initandlisten] allocator: system
2015-02-22T00:52:14.997+0530 [initandlisten] options: {}
2015-02-22T00:52:15.004+0530 [initandlisten] journal dir=\data\db\journal
2015-02-22T00:52:15.004+0530 [initandlisten] recover : no journal files present, no recovery needed
2015-02-22T00:52:15.210+0530 [initandlisten] waiting for connections on port 27017
2015-02-22T00:53:00.605+0530 [initandlisten] connection accepted from 127.0.0.1:65413 #1 (1 connection now open)
2015-02-22T00:53:15.209+0530 [clientcursormon] mem (MB) res:135 virt:1270
2015-02-22T00:53:15.209+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T00:53:15.209+0530 [clientcursormon]  connections:1
2015-02-22T00:53:52.268+0530 [conn1] replSet replSetInitiate admin command received from client
2015-02-22T00:54:23.497+0530 [conn1] end connection 127.0.0.1:65413 (0 connections now open)
2015-02-22T00:57:02.580+0530 [initandlisten] connection accepted from 127.0.0.1:65504 #2 (1 connection now open)
2015-02-22T00:57:10.148+0530 [conn2] end connection 127.0.0.1:65504 (0 connections now open)
2015-02-22T00:58:15.227+0530 [clientcursormon] mem (MB) res:135 virt:1266
2015-02-22T00:58:15.227+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T00:58:15.227+0530 [clientcursormon]  connections:0
2015-02-22T00:58:51.650+0530 [initandlisten] connection accepted from 127.0.0.1:49203 #3 (1 connection now open)
2015-02-22T00:58:59.370+0530 [conn3] replSet replSetInitiate admin command received from client
2015-02-22T01:03:15.246+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:03:15.246+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:03:15.246+0530 [clientcursormon]  connections:1
2015-02-22T01:04:56.891+0530 [conn3] end connection 127.0.0.1:49203 (0 connections now open)
2015-02-22T01:05:26.677+0530 [initandlisten] connection accepted from 127.0.0.1:49531 #4 (1 connection now open)
2015-02-22T01:08:15.266+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:08:15.266+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:08:15.266+0530 [clientcursormon]  connections:1
2015-02-22T01:13:15.321+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:13:15.321+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:13:15.321+0530 [clientcursormon]  connections:1
2015-02-22T01:13:28.501+0530 [conn4] replSet replSetInitiate admin command received from client
2015-02-22T01:17:09.207+0530 Ctrl-C signal
2015-02-22T01:17:09.207+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:17:09.208+0530 [consoleTerminate] now exiting
2015-02-22T01:17:09.208+0530 [consoleTerminate] dbexit:
2015-02-22T01:17:09.208+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:17:09.208+0530 [consoleTerminate] closing listening socket: 476
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:17:09.210+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:17:09.210+0530 [conn4] end connection 127.0.0.1:49531 (0 connections now open)
2015-02-22T01:17:09.218+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:17:09.234+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:17:09.234+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:17:09.234+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:17:09.251+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:17:09.251+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>
C:\MongoDB\bin>
C:\MongoDB\bin>mongod --port 27001 --dbpath c:\data\aneesh1 --replSet cri
2015-02-22T01:18:11.597+0530 [initandlisten] MongoDB starting : pid=9936 port=27001 dbpath=c:\data\aneesh1 64-bit host=INLN50838607A
2015-02-22T01:18:11.598+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T01:18:11.598+0530 [initandlisten] db version v2.6.6
2015-02-22T01:18:11.599+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T01:18:11.599+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2015-02-22T01:18:11.599+0530 [initandlisten] allocator: system
2015-02-22T01:18:11.599+0530 [initandlisten] options: { net: { port: 27001 }, replication: { replSet: "cri" }, storage: { dbPath: "c:\data\aneesh1" } }
2015-02-22T01:18:11.603+0530 [initandlisten] journal dir=c:\data\aneesh1\journal
2015-02-22T01:18:11.603+0530 [initandlisten] recover : no journal files present,no recovery needed
2015-02-22T01:18:11.629+0530 [initandlisten] waiting for connections on port 27001
2015-02-22T01:18:11.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:11.634+0530 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
2015-02-22T01:18:12.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:13.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:14.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:15.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:16.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:17.216+0530 Ctrl-C signal
2015-02-22T01:18:17.217+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:18:17.217+0530 [consoleTerminate] now exiting
2015-02-22T01:18:17.218+0530 [consoleTerminate] dbexit:
2015-02-22T01:18:17.218+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:18:17.218+0530 [consoleTerminate] closing listening socket: 472
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:18:17.220+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:18:17.220+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:18:17.229+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:18:17.230+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:18:17.230+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:18:17.230+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:18:17.232+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:18:17.232+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>
C:\MongoDB\bin>mongod --port 27001 --dbpath c:\data\aneesh1 --replSet cri
2015-02-22T01:23:27.978+0530 [initandlisten] MongoDB starting : pid=10800 port=27001 dbpath=c:\data\aneesh1 64-bit host=INLN50838607A
2015-02-22T01:23:27.979+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T01:23:27.979+0530 [initandlisten] db version v2.6.6
2015-02-22T01:23:27.979+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T01:23:27.979+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1')BOOST_LIB_VERSION=1_49
2015-02-22T01:23:27.980+0530 [initandlisten] allocator: system
2015-02-22T01:23:27.980+0530 [initandlisten] options: { net: { port: 27001 }, replication: { replSet: "cri" }, storage: { dbPath: "c:\data\aneesh1" } }
2015-02-22T01:23:27.982+0530 [initandlisten] journal dir=c:\data\aneesh1\journal
2015-02-22T01:23:27.982+0530 [initandlisten] recover : no journal files present, no recovery needed
2015-02-22T01:23:28.009+0530 [initandlisten] waiting for connections on port 27001
2015-02-22T01:23:28.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:28.011+0530 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
2015-02-22T01:23:29.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:30.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:31.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:32.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:33.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:34.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:35.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:36.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:37.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:37.677+0530 Ctrl-C signal
2015-02-22T01:23:37.677+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:23:37.678+0530 [consoleTerminate] now exiting
2015-02-22T01:23:37.678+0530 [consoleTerminate] dbexit:
2015-02-22T01:23:37.678+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:23:37.679+0530 [consoleTerminate] closing listening socket: 476
2015-02-22T01:23:37.679+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:23:37.679+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:23:37.689+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:23:37.690+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:23:37.690+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:23:37.691+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:23:37.693+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:23:37.693+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>
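The log above shows two separate problems: rs.initiate() was first run against a mongod started without --replSet (hence "server is not running with --replSet"), and the mongod instances that were started with --replSet cri were interrupted with Ctrl-C before the set was initiated. A minimal sketch of the usual sequence, using the set name, port and dbpath from the commands above (one member shown):

(window 1 - leave this mongod running; do not press Ctrl-C)
C:\MongoDB\bin>mongod --port 27001 --dbpath c:\data\aneesh1 --replSet cri

(window 2 - connect to that member rather than the default port 27017)
C:\MongoDB\bin>mongo --port 27001
> rs.initiate(cfg)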

Costs of backing up S3 bucket


I have a very large bucket: about 10 million files of 1 MB each, for a total of 10 TB.

Files are continuously added to it (never modified), let's say 1 TB per month.

I back up this bucket to a different one in the same region using a replication configuration.

I don't use Glacier, for various availability and cost considerations.

I'm wondering whether I should use Standard or Infrequent Access storage, as there is a very large number of files and I'm not sure how the COPY request cost will add up.

What is the cost difference between the options? The cost of storage is quite clear, but the cost of copies and other operations is not.

How do I get article properties via a query in merge replication?


I would like to know whether there are any queries to get article properties in a SQL Server merge replication. I can only get the properties through the GUI: I open a publication's properties page, select an article (a table), and then choose "Set Properties of Highlighted Table Article" to see the article properties. That shows four groups of properties: Copy Objects and Settings to Subscriber, Destination Object, Identification, and Merging Changes. Does anybody know of queries that return these properties, or a way to generate a script that creates the article with all of them?
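Not a complete answer, but a hedged sketch of the merge article metadata that is reachable from T-SQL in the publication database (the publication and article names are placeholders):

-- documented helper procedure: one row per article with its merge settings
EXEC sp_helpmergearticle @publication = N'MyPublication', @article = N'MyTable';

-- or query the merge metadata table directly
SELECT name, destination_object, pre_creation_command, upload_options, schema_option
FROM dbo.sysmergearticles;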

Database synchronization


I am working on a student project and I need some guidance on what solution I can use. The goal of the project is to create a dummy web page that sends dummy data to 2 databases (and reads from one of them). If at any point one of them is unavailable, it should keep sending data to the one that still works. When the unavailable database comes back, it should check with the one that kept working, synchronize all changes made to the data, and continue on. It should work both ways: both DB1 and DB2 can shut down, but when they return they should synchronize with the one that was working. The professor recommended MS SQL Server 2019 Developer. So far I have managed to set up database replication so that any change made to DB1 is replicated to DB2, but I can only send data to DB1 because it does not work the other way. I followed this tutorial: https://www.nakivo.com/blog/how-to-configure-ms-sql-server-replication-walkthrough/ . Basically it is transactional replication between 2 SQL Server databases running on 2 VMs.

I can't figure out how to do this, so I would appreciate any suggestions or tutorials that could help me. One thing I found that I think would work is SQL Server peer-to-peer replication, but it is only available in Enterprise edition.

MariaDB/MySQL SSL Replication Failure


After searching for a solution for the last 6 hours, I have come up dry in my attempt to add SSL to the replication. I can connect with SSL via the mysql command-line tool without issues; however, I cannot seem to solve this replication issue. Based on the research I did find, this is an extremely generic catch-all SSL error.

System 1:

OS:             Fedora 30 Modular
Kernel:         5.0.16-300
Arch:           x86_64
MariaDB Server: 10.3.16
OpenSSL:        1.1.1c FIPS
MariaDB [(none)]> STATUS;
--------------
mysql  Ver 15.1 Distrib 10.3.16-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:      42
Current database:   
Current user:       root@localhost
SSL:            Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:      stdout
Using outfile:      ''
Using delimiter:    ;
Server:         MariaDB
Server version:     10.3.16-MariaDB-log MariaDB Server
Protocol version:   10
Connection:     Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:        /var/lib/mysql/mysql.sock
Uptime:         18 min 0 sec

Threads: 11  Questions: 32  Slow queries: 0  Opens: 17  Flush tables: 1  Open tables: 11  Queries per second avg: 0.029
--------------
MariaDB [(none)]> SHOW SLAVE STATUS \G;
*************************** 1. row ***************************
                Slave_IO_State: Connecting to master
                   Master_Host: REDACTED
                   Master_User: REDACTED
                   Master_Port: REDACTED
                 Connect_Retry: 60
               Master_Log_File: master1-bin.000012
           Read_Master_Log_Pos: 364174
                Relay_Log_File: master1-relay-bin.000001
                 Relay_Log_Pos: 4
         Relay_Master_Log_File: master1-bin.000012
              Slave_IO_Running: Connecting
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 364174
               Relay_Log_Space: 256
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: Yes
            Master_SSL_CA_File: /etc/pki/tls/certs/mariadb-chain.pem
            Master_SSL_CA_Path: /etc/pki/tls/certs/
               Master_SSL_Cert: /etc/pki/tls/certs/mariadb.pem
             Master_SSL_Cipher: TLS_AES_256_GCM_SHA384
                Master_SSL_Key: /etc/pki/tls/private/mariadb.pem
         Seconds_Behind_Master: NULL
 Master_SSL_Verify_Server_Cert: Yes
                 Last_IO_Errno: 2026
                 Last_IO_Error: error connecting to master 'REDACTED@REDACTED:REDACTED' - retry-time: 60  maximum-retries: 86400  message: SSL connection error: error:00000000:lib(0):func(0):reason(0)
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 0
                Master_SSL_Crl: /etc/pki/tls/certs/mariadb-chain.pem
            Master_SSL_Crlpath: /etc/pki/tls/certs/
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: conservative
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
              Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
1 row in set (0.000 sec)

ERROR: No query specified

MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE '%ssl%';
+---------------------+-------------------------------------------+
| Variable_name       | Value                                     |
+---------------------+-------------------------------------------+
| have_openssl        | YES                                       |
| have_ssl            | YES                                       |
| ssl_ca              | /etc/pki/tls/certs/mariadb-chain-x509.pem |
| ssl_capath          |                                           |
| ssl_cert            | /etc/pki/tls/certs/mariadb-x509.pem       |
| ssl_cipher          | TLS_AES_256_GCM_SHA384                    |
| ssl_crl             |                                           |
| ssl_crlpath         |                                           |
| ssl_key             | /etc/pki/tls/private/mariadb.pem          |
| version_ssl_library | OpenSSL 1.1.1c FIPS  28 May 2019          |
+---------------------+-------------------------------------------+
10 rows in set (0.002 sec)

System 2:

OS:             Fedora 30 Modular
Kernel:         5.0.16-300
Arch:           x86_64
MariaDB Server: 10.3.16
OpenSSL:        1.1.1c FIPS
MariaDB [(none)]> STATUS;
--------------
mysql  Ver 15.1 Distrib 10.3.16-MariaDB, for Linux (x86_64) using readline 5.1

Connection id:      60
Current database:   
Current user:       root@localhost
SSL:            Cipher in use is TLS_AES_256_GCM_SHA384
Current pager:      stdout
Using outfile:      ''
Using delimiter:    ;
Server:         MariaDB
Server version:     10.3.16-MariaDB-log MariaDB Server
Protocol version:   10
Connection:     Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:        /var/lib/mysql/mysql.sock
Uptime:         40 min 44 sec

Threads: 12  Questions: 623  Slow queries: 0  Opens: 48  Flush tables: 1  Open tables: 42  Queries per second avg: 0.254
--------------

MariaDB [(none)]> SHOW SLAVE STATUS \G;
*************************** 1. row ***************************
                Slave_IO_State: Connecting to master
                   Master_Host: REDACTED
                   Master_User: REDACTED
                   Master_Port: REDACTED
                 Connect_Retry: 60
               Master_Log_File: master1-bin.000007
           Read_Master_Log_Pos: 344
                Relay_Log_File: master1-relay-bin.000006
                 Relay_Log_Pos: 4
         Relay_Master_Log_File: master1-bin.000007
              Slave_IO_Running: Connecting
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 344
               Relay_Log_Space: 256
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: Yes
            Master_SSL_CA_File: /etc/pki/tls/certs/mariadb-chain.pem
            Master_SSL_CA_Path: 
               Master_SSL_Cert: /etc/pki/tls/certs/mariadb.pem
             Master_SSL_Cipher: 
                Master_SSL_Key: /etc/pki/tls/private/mariadb.pem
         Seconds_Behind_Master: NULL
 Master_SSL_Verify_Server_Cert: Yes
                 Last_IO_Errno: 2026
                 Last_IO_Error: error connecting to master 'REDACTED@REDACTED:REDACTED' - retry-time: 60  maximum-retries: 86400  message: SSL connection error: error:00000000:lib(0):func(0):reason(0)
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 0
                Master_SSL_Crl: /etc/pki/tls/certs/mariadb-chain.pem
            Master_SSL_Crlpath: 
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: conservative
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
              Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
1 row in set (0.000 sec)

ERROR: No query specified

MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE '%ssl%';
+---------------------+--------------------------------------+
| Variable_name       | Value                                |
+---------------------+--------------------------------------+
| have_openssl        | YES                                  |
| have_ssl            | YES                                  |
| ssl_ca              | /etc/pki/tls/certs/mariadb-chain.pem |
| ssl_capath          |                                      |
| ssl_cert            | /etc/pki/tls/certs/mariadb.pem       |
| ssl_cipher          |                                      |
| ssl_crl             |                                      |
| ssl_crlpath         |                                      |
| ssl_key             | /etc/pki/tls/private/mariadb.pem     |
| version_ssl_library | OpenSSL 1.1.1c FIPS  28 May 2019     |
+---------------------+--------------------------------------+
10 rows in set (0.005 sec)

I'm trying to set up both servers as master and slave of each other for full replication. It was working until I went to implement SSL. I'm using Let's Encrypt certificates; I have already converted the private key to RSA and made a full copy of the certificate and chain, so it's not just a symlink. Both servers run on the same non-standard port and have the same users and passwords. I have completely disabled SELinux, to no avail.

The permissions should be fine:

ls -l /etc/pki/tls/*/mariadb*.pem
-rw-r--r--+ 1 mysql mysql 3566 Aug 11 02:17 /etc/pki/tls/certs/mariadb-chain.pem
-rw-r--r--+ 1 mysql mysql 1919 Aug 11 02:17 /etc/pki/tls/certs/mariadb.pem
-rw-r--r--+ 1 mysql mysql 1679 Aug 11 02:17 /etc/pki/tls/private/mariadb.pem

Thanks for your time.

UPDATE: I tried changing the permissions on the PEM files to 600, but it did not fix it. I managed to get it logging at maximum verbosity and this is the section pertinent to the error:

2019-08-14 16:42:53 10 [ERROR] Slave I/O: error connecting to master 'REDACTED@REDACTED:REDACTED' - retry-time: 60  maximum-retries: 86400  message: SSL connection error: error:00000000:lib(0):func(0):reason(0), Internal MariaDB error code: 2026
2019-08-14 16:43:54 12 [Warning] IP address 'REDACTED' could not be resolved: Name or service not known
2019-08-14 16:43:54 12 [Warning] Aborted connection 12 to db: 'unconnected' user: 'unauthenticated' host: 'REDACTED' (CLOSE_CONNECTION)

I also removed the ssl_cipher option from the server where I had forgotten to remove it, so the cipher configs now match.
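For reference, the slave-side SSL parameters shown in SHOW SLAVE STATUS are set through CHANGE MASTER TO; a minimal sketch using the certificate paths from above (host, port and user are left out, and server-cert verification is kept enabled as in the output):

STOP SLAVE;
CHANGE MASTER TO
  MASTER_SSL = 1,
  MASTER_SSL_CA = '/etc/pki/tls/certs/mariadb-chain.pem',
  MASTER_SSL_CERT = '/etc/pki/tls/certs/mariadb.pem',
  MASTER_SSL_KEY = '/etc/pki/tls/private/mariadb.pem',
  MASTER_SSL_VERIFY_SERVER_CERT = 1;
START SLAVE;

One detail visible in the status output that may be worth double-checking: Master_SSL_Crl points at the certificate chain file rather than an actual CRL.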

db2 Q Replication 'LOGARCHMETH1' error


I'm trying to create Q replication between 2 DB2 databases using this tutorial. At the step "Lesson 2.2: Enabling the source database for replication" I got an error:

  A DB2 Administration Server communication error has been 
  detected.  Client system: "my mac address%9". Server 
  system "my ip address".

I spent hours trying to solve this error with no success, so I did what the tutorial described manually and carried on with the other steps. At the last step, "Lesson 3.1: Starting replication between the source and target", I get this error:

  Database test1 needs to be configured with LOGARCHMETH1=LOGRETAIN. Use the Enable Database for 
  Replication window to set the LOGARCHMETH1 value.  

The database parameter is correct; I created the database with LOGARCHMETH1=LOGRETAIN.

How can I solve these errors?
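A hedged sketch of how the parameter can be checked and changed from the DB2 command line (the database name test1 is taken from the error message; the backup path is a placeholder):

db2 get db cfg for test1 | grep -i LOGARCHMETH1
db2 update db cfg for test1 using LOGARCHMETH1 LOGRETAIN
# changing LOGARCHMETH1 normally puts the database into backup-pending state,
# so take an offline backup before the database can be activated again:
db2 backup db test1 to /backups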

Replication stops with GTID_NEXT error after creation/drop of memory table in mysql5.6


We have recently upgraded our MySQL cluster from MySQL 5.5.x/5.1.x to MySQL 5.6.25. A brief diagram of our architecture was attached here (image not reproduced).

Since upgrading and enabling gtid-mode, we have intermittently been getting slave errors similar to:

Last_SQL_Error: Error 'When @@SESSION.GTID_NEXT is set to a GTID, you must explicitly set it to a different value after a COMMIT or ROLLBACK. Please check GTID_NEXT variable manual page for detailed explanation. Current @@SESSION.GTID_NEXT is 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466'.' on query. Default database: 'adplatform'. Query: 'create table X_new like X'

Our observations are as follows:

  • These slave errors are resolved simply by restarting the slave.
  • Such errors always involve CREATE/DROP of tables that use the MEMORY storage engine.
  • Errors on the complete slave (B) show up at a fixed minute (the 39th) of every hour and have been repeating since we upgraded, almost a week ago.
  • Errors on the complete slave as well as the partial slave are observed whenever their master is restarted.
  • Cluster-1 and Cluster-2 are CentOS machines and Cluster-3 is Ubuntu machines. Slaves on the CentOS machines also fail with the same error whenever their master (C/D) is restarted, but the slaves on the Ubuntu machines do not fail!

We have temporarily been able to live with this issue by setting up an action script on our monitoring system that fires on a slave-error alert on any machine.

The gtid_next section of the MySQL replication-options documentation says the following:

Prior to MySQL 5.6.20, when GTIDs were enabled but gtid_next was not AUTOMATIC, DROP TABLE did not work correctly when used on a combination of nontemporary tables with temporary tables, or of temporary tables using transactional storage engines with temporary tables using nontransactional storage engines. In MySQL 5.6.20 and later, DROP TABLE or DROP TEMPORARY TABLE fails with an explicit error when used with either of these combinations of tables. (Bug #17620053)

This seems related to my issue but still doesn't explain my scenario. Any hints or direction to solve the issue would be greatly appreciated.

EDIT: I managed to find a similar recently reported MySQL bug (#77729), whose description is as follows:

https://bugs.mysql.com/bug.php?id=77729

When you have table with Engine MEMORY working on replication master, mysqld injects "DELETE" statement in binary logs on first access query to this table. This insures consistency of data on replicating slaves.

If replication is GTID ROW based, this inserted "DELETE" breaks replication. Logged event is in STATEMENT format and do not generate correct SET GTID_NEXT statements in binary log.

Unfortunately, the status of this bug is marked as Can't Repeat...
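For reference, the standard way to skip a single problematic transaction under GTID-based replication is to inject an empty transaction for that GTID and resume (a sketch using the GTID from the error above; note this skips the offending statement entirely, so it is only appropriate when the statement is safe to lose):

STOP SLAVE;
SET GTID_NEXT = 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466';
BEGIN; COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;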

Patroni: How to handle a slave that has been disconnected from the master for a long time?


Say I am using asynchronous streaming replication with the configuration below in a 3-node cluster with Postgres 10.4 and Patroni 1.4.4:

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        archive_mode: "off"
        wal_level: hot_standby
        max_wal_senders: 10
        wal_keep_segments: 100
        max_replication_slots: 10
        hot_standby: "on"
        wal_log_hints: "on"
        unix_socket_directories: '/tmp'
        max_connections: 400
        shared_buffers: 250MB
        autovacuum_analyze_scale_factor: 0.05
        autovacuum_vacuum_scale_factor: 0.10
        log_autovacuum_min_duration: 0
        autovacuum_naptime: 15s
        autovacuum_max_workers: 6

Now suppose one of the slave nodes suddenly loses its connection to the master for a long time.

  1. In this case I think the XLOG on the master will keep building up, as the WAL is not being consumed through the disconnected slave's replication slot. Is there any setting in the Patroni configuration that will remove the slave and drop its replication slot if it has been disconnected from the master for some duration x?
  2. What is the recommended way to handle this case?
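Not a Patroni setting, but for reference, a hedged sketch of how the retained WAL can be watched on the master and the slot dropped manually if the standby is not expected to return (valid on PostgreSQL 10; the slot name is a placeholder):

-- how much WAL each slot is forcing the master to keep
SELECT slot_name, active,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_bytes
FROM pg_replication_slots;

-- drop the slot of the long-disconnected standby so WAL can be recycled
SELECT pg_drop_replication_slot('patroni_standby_slot');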

Why does the namenode allocate only two block replicas instead of 3, although the replication setting is set to 3?


I have noticed that even though my block replication setting is set to 3, during an upload from the client the namenode sometimes allocates 2 block replicas and sometimes 3. Is there a way to enforce 3 replicas all the time? I found that the dfs.replication.min property is deprecated in Hadoop 2.7.3.

Also, can I just set it on my HDFS client, or do I need to set it on the client, namenode and secondary namenode and restart the NN and SNN?

In hdfs-site.xml I have set it to 3 on the namenode, the secondary namenode and the HDFS client machine (my local machine):

<property><name>dfs.replication</name><value>3</value></property>

❯ hadoop version
20/08/26 10:57:36 DEBUG util.VersionInfo: version: 2.7.3
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0

I have seen the same behaviour when I set dfs.replication=2: sometimes only 1 replica is allocated for the write and sometimes 2.

By the way, I am checking the blocks and locations using the fsck command: hdfs fsck /tmp/file1.txt -files -locations -blocks
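For reference, a minimal sketch of forcing the replication factor on an already-written file and re-checking it (the path is the one used above):

# raise the file's replication factor to 3 and wait (-w) until it is satisfied
hdfs dfs -setrep -w 3 /tmp/file1.txt

# confirm the replica count and locations
hdfs fsck /tmp/file1.txt -files -blocks -locations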

MongoDB unrecoverable replication error


One of the members of my MongoDB replica set decided it would not restart, with the following error (reformatted for readability):

Starting rollback due to OplogStartMissing: 
our last op time fetched: (term: 30, timestamp: Jul 28 07:45:11:6) 
source's GTE:             (term: 31, timestamp: Jul 28 07:45:11:7)

Fatal assertion 18750 UnrecoverableRollbackError
                          (term: 31, timestamp: Jul 28 07:45:12:2) > our last optime: 
                          (term: 30, timestamp: Jul 28 07:45:11:6)

Let's call the instance where this happens M1, and the source it's trying to sync from M2. M1 used to be primary, then the primary switched to M2, and M1 restarted.

The naive interpretation of these log messages is that the first operation in M2's oplog is exactly the next operation after what we have applied on M1. So we should just happily apply operations from M2, but instead MongoDB tries to roll back some operations, finds an operation in the future relative to both what we've applied and what's next on M2, and dies.

I have two questions: first, why is MongoDB attempting a rollback in the first place, and second, where is the operation with timestamp Jul 28 07:45:12:2 coming from?

Blue Green deployment in SQL server database


We have a SQL Server 2014 database and we are planning a blue-green deployment for it so that downtime can be reduced during the deployment window. Are there any options that can be leveraged to implement this?

Where is the SQL Server merge replication snapshot synchronization log located?


I have an issue with a SQL Server 15 -> 13 merge replication process: it got stuck holding locks on the MSsnapshotdeliveryprogress table. I need to look at the logs to determine what is causing this, but where are the logs located?
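Not necessarily the only place, but two spots that usually hold merge agent history are the distribution database tables below (a hedged sketch, run on the distributor; the database name assumes the default "distribution"):

-- merge agent sessions and their per-session messages
SELECT * FROM distribution.dbo.MSmerge_sessions ORDER BY start_time DESC;
SELECT * FROM distribution.dbo.MSmerge_history ORDER BY time DESC;

The merge agent job step can also be given the -Output <path> -OutputVerboseLevel 3 parameters to write a detailed log file.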

