Channel: StackExchange Replication Questions

Is it possible to reshard the data in an ArangoDB collection when adding a new DB-Server node?


Since I am not sure how big my collection might grow in the future, I would like to know whether I can add a new DB-Server node when required. There are two questions: 1. Will the collection's data automatically reshard from the old DB-Servers to the new one? 2. If so, will the resharding operations appear in _api/replication/logger-follow?


Transactional replication between two SQL Server 2012 Standard Edition servers (both ways)


Can you please help me set up transactional replication between two servers (production site and DR site)?

Non-replicating commands from (master) MySQL 5.5.40 to (slave) MariaDB 10.0.19


CRUD replicates just fine. But we noticed that DB structure-altering commands such as ALTER TABLE, adding or updating columns, and even creating new tables do not replicate to the slave at all.


Setup

This is the content of our my.cnf files:

Replication Setup A

  • Where: (master) MySQL 5.5.40 -> (slave) MariaDB 10.0.19

Master:

server-id = 1002
binlog-do-db = db_1
binlog-do-db = db_2
binlog-do-db = db_3
sync_binlog = 1
expire_logs_days = 30
innodb_flush_log_at_trx_commit = 1

Slave:

server-id = 1005
replicate-do-db = db_1
replicate-do-db = db_2
replicate-do-db = db_3
slave-parallel-threads=4
slave_compressed_protocol = ON
slave-skip-errors = all
log-slave-updates = ON

Strangely, DB structure-altering commands work just fine on a different server pair with an almost similar replication setup:

Replication Setup B

  • Where: (master) MariaDB 10.0.17 -> (slave) MariaDB 5.5.41

Master:

server-id = 1004
bind-address = "0.0.0.0"
binlog-do-db = db_a
binlog-do-db = db_b
binlog-do-db = db_c
binlog-do-db = db_d
binlog-do-db = db_e
binlog-do-db = db_f
binlog-do-db = db_g
sync_binlog = 1
expire_logs_days = 30
innodb_flush_log_at_trx_commit = 1

Slave:

server-id = 1003
bind-address = "0.0.0.0"
replicate-do-db = db_a
replicate-do-db = db_b
replicate-do-db = db_c
replicate-do-db = db_d
replicate-do-db = db_e
replicate-do-db = db_f
replicate-do-db = db_g
slave_compressed_protocol = ON
slave-skip-errors = all

Notes for both replication pairs:

  • Names of the databases were obscured.
  • Only the parts of the my.cnf files where I think replication is configured are posted.

Questions

  1. Is this a bug or an undocumented feature?
  2. Did I do something wrong in my setup?
  3. What other factors might affect this behaviour?
  4. If there really are (query) commands that aren't supported for replication, what are they?
    • 4.a) Which MySQL/MariaDB version combinations have the same effect?

I tried looking at the error logs of the slave for any hints, using the structure-altering query commands as search keywords, but to no avail. Maybe I'm not looking in the right direction?
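One likely explanation worth checking: with statement-based binlogging, binlog-do-db and replicate-do-db filter on the default (USE'd) database, not on the database named in the statement, so DDL issued with a fully qualified name while a different default database is selected can be silently skipped. A minimal sketch of the effect (table and database names are hypothetical, assuming binlog_format=STATEMENT and binlog-do-db = db_1 as above):

-- On the master:
USE db_1;
ALTER TABLE t1 ADD COLUMN c1 INT;      -- default DB matches the filter:
                                       -- logged and replicated

USE db_other;
ALTER TABLE db_1.t1 ADD COLUMN c2 INT; -- default DB does not match the filter:
                                       -- not written to the binlog, so it
                                       -- never reaches the slave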

Update (2015-06-04)

Further testing ensued. As suggested by this SO post, I might have to remove or comment out replicate-do-db from my.cnf; so I did, and I found out that:

  • Creating a new table is replicated.
  • Altering a table is not replicated.
  • CRUD transactions were not replicated.

Also, things just got stranger for Replication Setup B where everything was just supposed to work.

  • One DB failed to replicate an ALTER TABLE command just yesterday and we only found out about it now.
  • After the incident, I tried executing ALTER TABLE commands against all other DBs on the same server, and they replicated just fine. In this test I was unable to remove replicate-do-db from my.cnf, as restarting the server is currently not an option.

How did you set up MySQL replication with auto-failover that is app-transparent?


I was recently told that MySQL is shelving MySQL Fabric. I am interested in how others have implemented a MySQL replicated environment that is transparent to the application.

I am considering using HAProxy to host a virtual IP address for the master and one for the slave pool, then using MySQL Failover to monitor the replication cluster and auto-promote a slave to master. Linux-HA or HAProxy would change which server the virtual IP address points to. This should work for both the master and the slaves.

We're primarily a PHP shop running MySQL databases on CentOS 7 Linux.
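One building block for making such a setup app-transparent is a health check the proxy can poll to tell the current master apart from the slaves. A minimal sketch, assuming the slaves are kept read_only (generic MySQL SQL, not tied to any particular failover tool):

-- Returns 0 on a writable master and 1 on a read-only slave; a proxy
-- health check can use this to send writes only to the node answering 0.
SELECT @@global.read_only;

-- On a slave, replication health and lag can be inspected with:
SHOW SLAVE STATUS\G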

Transactional Replication in SQL Server


Here is the requirement:

I need to replicate a non-partitioned table on the publisher to a partitioned table on the subscriber, partitioned on clientid. Please note that the articles at the publisher don't have clientid, but they can be joined to a mapping table to get the clientid.

Any help is much appreciated.
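For the subscriber side of this, the destination table would typically be pre-created on a partition scheme keyed on clientid (with the article configured to keep the existing object). A minimal T-SQL sketch of just the partitioning piece, with hypothetical names and boundary values; it does not address deriving clientid from the mapping table:

-- Hypothetical partition function/scheme on clientid at the subscriber
CREATE PARTITION FUNCTION pf_clientid (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);

CREATE PARTITION SCHEME ps_clientid
    AS PARTITION pf_clientid ALL TO ([PRIMARY]);

-- The subscriber-side table (and its clustered key) must include the
-- partitioning column so the index can be aligned with the scheme.
CREATE TABLE dbo.MyArticle
(
    id       int           NOT NULL,
    clientid int           NOT NULL,
    payload  nvarchar(200) NULL,
    CONSTRAINT PK_MyArticle PRIMARY KEY CLUSTERED (id, clientid)
) ON ps_clientid (clientid);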

Postgres 9.1 replication log streaming


I have set up streaming log replication and I am stuck on the last step. The master setup is done, and the backup and rsync have been successful. I can ssh from slave to master and from master to slave successfully. I followed the tutorial here: http://blog.3dtin.com/2012/07/26/postgresql-replication-and-hot-standby-in-practic/

However, when I restart the database on the slave side after running rsync and creating the recovery.conf file as described in the tutorial, the slave is unable to connect to the master and, as you can see in the log below, keeps throwing a "could not connect to the primary server" error.

It's really strange for a noob like me that ssh works and the machines can talk to each other over ssh, yet the TCP connection is erroring out. I have spent quite a lot of time on this and could not find the answer anywhere, which is why I'm posting here.

I am using noip2 as a workaround for a public IP; both master and slave are behind a router and the ssh port has been forwarded. I also tried with the firewall off on the master side, but it still didn't work. Please help me resolve this!

postgres@saro:~$ cat /var/log/postgresql/postgresql-9.1-main.log
2014-08-15 21:22:06 IST LOG: database system was shut down at 2014-08-15 21:19:48 IST
2014-08-15 21:22:06 IST LOG: entering standby mode
2014-08-15 21:22:06 IST LOG: incomplete startup packet
2014-08-15 21:22:06 IST LOG: consistent recovery state reached at 0/35000020
2014-08-15 21:22:06 IST LOG: record with zero length at 0/35000070
2014-08-15 21:22:06 IST LOG: database system is ready to accept read only connections
2014-08-15 21:24:13 IST FATAL: could not connect to the primary server: could not connect to server: Connection timed out
        Is the server running on host "mis-master" () and accepting TCP/IP connections on port 5432?

2014-08-15 21:26:23 IST FATAL: could not connect to the primary server: could not connect to server: Connection timed out
        Is the server running on host "mis" () and accepting TCP/IP connections on port 5432?
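Since ssh works but port 5432 times out, the usual suspects are the master not listening on external interfaces, a missing replication entry in pg_hba.conf, or the router not forwarding port 5432. Two quick checks on the master (plain SQL, nothing specific to this tutorial):

-- If this returns 'localhost', the master only listens locally and the
-- standby cannot reach it over TCP; it needs '*' (or a concrete address)
-- in postgresql.conf, followed by a restart.
SHOW listen_addresses;

-- Once the standby does connect, it should appear here (9.1+):
SELECT client_addr, state FROM pg_stat_replication;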

Disable MySQL replication on RDS


I want to set up replication between an Amazon RDS MySQL DB instance and a MySQL instance that is external to Amazon RDS.

But Slave_IO_Running shows "Connecting". How do I solve this problem?

I have already run the following commands:

mysql>CALL mysql.rds_set_external_master ('192.168.xx.xxx', 3306,
    'repl2', '111111', 'mysql-bin-changelog.000003', 598, 0); 

mysql> CALL mysql.rds_start_replication; 

+-------------------------+
| Message                 |
+-------------------------+
| Slave running normally. |
+-------------------------+
1 row in set (1.01 sec)
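"Slave_IO_Running: Connecting" generally means the replica's IO thread cannot reach or authenticate against the master; the concrete reason shows up in Last_IO_Error. A quick check on the RDS instance, which is the replica here (standard MySQL; nothing beyond the setup above is assumed):

-- Look at Slave_IO_State, Last_IO_Errno and Last_IO_Error. Typical causes
-- are wrong credentials, port 3306 blocked by a firewall/security group, or
-- the master address not being reachable from RDS (a private 192.168.x.x
-- address usually requires a VPN or similar connectivity).
SHOW SLAVE STATUS\G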


MongoDB can't add new replica set member [rsSync] SEVERE: Got signal: 6


I have a replica set and I added a new member to the set. The initial sync begins for the new member, and rs.status() (on the primary) shows the status as STARTUP2. However, after a long enough time, a cryptic fassert error appears on the new instance.

The log dump is as follows:

2014-11-02T22:53:23.995+0000 [clientcursormon] mem (MB) res:330 virt:45842
2014-11-02T22:53:23.995+0000 [clientcursormon]  mapped (incl journal view):45038
2014-11-02T22:53:23.995+0000 [clientcursormon]  connections:27
2014-11-02T22:53:23.995+0000 [clientcursormon]  replication threads:32
2014-11-02T22:53:25.427+0000 [conn2012] end connection xx.xx.xx.xx:1201 (26 connections now open)
2014-11-02T22:53:25.433+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1200 #2014 (27 connections now open)
2014-11-02T22:53:25.436+0000 [conn2014]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:26.775+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1058 #2015 (28 connections now open)
2014-11-02T22:53:26.864+0000 [conn1993] end connection xx.xx.xx.xx:1059 (27 connections now open)
2014-11-02T22:53:29.090+0000 [rsSync] Socket recv() errno:110 Connection timed out xx.xx.xx.xx:27017
2014-11-02T22:53:29.096+0000 [rsSync] SocketException: remote: xx.xx.xx.xx:27017 error: 9001 socket exception [RECV_ERROR] server [168.63.252.61:27017] 
2014-11-02T22:53:29.099+0000 [rsSync] DBClientCursor::init call() failed
2014-11-02T22:53:29.307+0000 [rsSync] replSet initial sync exception: 13386 socket error for mapping query 0 attempts remaining
2014-11-02T22:53:36.113+0000 [conn2013] end connection xx.xx.xx.xx:1056 (26 connections now open)
2014-11-02T22:53:36.153+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1137 #2016 (27 connections now open)
2014-11-02T22:53:36.154+0000 [conn2016]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:55.541+0000 [conn2014] end connection xx.xx.xx.xx:1200 (26 connections now open)
2014-11-02T22:53:55.578+0000 [initandlisten] connection accepted from xx.xx.xx.xx:1201 #2017 (27 connections now open)
2014-11-02T22:53:55.580+0000 [conn2017]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2014-11-02T22:53:56.861+0000 [conn2015]  authenticate db: admin { authenticate: 1, user: "root", nonce: "xxx", key: "xxx" }
2014-11-02T22:53:59.310+0000 [rsSync] Fatal Assertion 16233
2014-11-02T22:53:59.723+0000 [rsSync] 0x11c0e91 0x1163109 0x114576d 0xe84c1f 0xea770e 0xea7800 0xea7af8 0x1205829 0x7ff728cf8e9a 0x7ff72800b3fd 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11c0e91]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x1163109]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x114576d]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xe84c1f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x18e) [0xea770e]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x30) [0xea7800]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0xa8) [0xea7af8]
 /usr/bin/mongod() [0x1205829]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7ff728cf8e9a]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ff72800b3fd]
2014-11-02T22:53:59.723+0000 [rsSync] 

***aborting after fassert() failure


2014-11-02T22:53:59.728+0000 [rsSync] SEVERE: Got signal: 6 (Aborted)


The worst part is that when I try to restart the mongod service, the replication begins afresh, trying to re-sync all the files that are already there. This seems bizarre and useless.

Can someone please help me understand what is going on?


Why does replication with SSL fail on my MySQL database in AWS RDS?


I'm trying to replicate from AWS RDS to my own server. It works without SSL. Whenever I add the SSL properties on the slave, it breaks with this error:

error connecting to master 'user@xxxxx.us-west-2.rds.amazonaws.com:3306' - retry-time: 60 retries: 86400

I can log in to RDS with SSL using the mysql client without problems:

mysql -h xx.rds.amazon -u user -p --ssl-ca=rds-ca-2015-root.pem --ssl-verify-server-cert

This is the slave status:

     Slave_IO_State: Connecting to master
                  Master_Host: xxxxx.us-west-2.rds.amazonaws.com
                  Master_User: user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin-changelog.001453
          Read_Master_Log_Pos: 120
               Relay_Log_File: mysqld-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: mysql-bin-changelog.001453
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
              Replicate_Do_DB: DB
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 120
              Relay_Log_Space: 107
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: Yes
           Master_SSL_CA_File: /etc/mysql/ssl/rds-ca-2015-root.pem
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 2026
                Last_IO_Error: error connecting to master 'user@xxxx.us-west-2.rds.amazonaws.com:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error: 

Again, I want to highlight that replication works well without SSL, and SSL works well without replication.
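Error 2026 is an SSL connection error from the replication IO thread, so it is worth making sure the slave's CHANGE MASTER TO settings mirror the options that already work for the mysql client, including server-certificate verification (the status above shows Master_SSL_Verify_Server_Cert: No while the client uses --ssl-verify-server-cert). A minimal sketch, assuming the same CA file; host, user and password are placeholders:

-- Run on the slave; values other than the CA path are placeholders
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'xxxxx.us-west-2.rds.amazonaws.com',
    MASTER_USER = 'user',
    MASTER_PASSWORD = '...',
    MASTER_SSL = 1,
    MASTER_SSL_CA = '/etc/mysql/ssl/rds-ca-2015-root.pem',
    MASTER_SSL_VERIFY_SERVER_CERT = 1;
START SLAVE;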

Database design question with partitioning and replication


I have a main database that runs on a server with good resources and, let's say, a dozen satellites that replicate data. The satellites have fewer resources than the main server.

In that database there is a BIG table that I want partitioned into ranges, because in fact each satellite cares about its own specific range.

PARTITION BY RANGE( ID ) (
    PARTITION p0 VALUES LESS THAN (1000000),
    PARTITION p1 VALUES LESS THAN (2000000),
    PARTITION p2 VALUES LESS THAN (3000000),
    PARTITION p3 VALUES LESS THAN (4000000),
    PARTITION p4 VALUES LESS THAN ...
);

Let's say satellite 1 cares about table IDs that start with 1, satellite 2 cares about IDs that start with 2, and so on (all IDs are numbered in the millions).

Is it possible to replicate to each satellite only the data that is relevant to that satellite? I mean, I don't want p2 and p3 on satellite 1; only p1 is relevant for satellite 1. But I don't know how to tell the replication engine that I want only that part of the data.

Is it possible to replicate only one partition of the partition scheme?

CouchDB parallel replications cause high CPU usage


I have a per-user DB architecture. There are around 200 user DBs, and each has a continuous replication link to the master CouchDB (all within the same CouchDB instance). The problem is that CPU usage is always close to 100% at any given time.

The DBs are idle, so no data is being written to or read from them. There is only a few KB of data per DB, so I don't think the load is an issue at this point. The master DB is less than 10 MB in size.

How can I go about debugging this performance issue?

Easiest way to do a one-way push to a (read-only) secondary on EC2 from an on-site DB


I'm doing research to figure out the best way to accomplish this. We have an on-site, non-public-facing SQL Server (2014 Enterprise) running a production app (DB < 10 GB). We will be building a read-only reporting application (on an EC2 IIS instance) that will need to access the data (it does not need to be real-time; a 15-30 minute delay from production is fine).

Rather than opening up a hole in the LAN to the database and internal network, we want to sync the local DB up to the EC2 DB. I know this can be done with log shipping to S3; however, there appear to be a lot of single points of failure with this method, and the date issue on S3 storage adds more problems.

Is this a problem that I can leverage AWS Database Migration Service for, or is that only good for EC2-to-EC2 migrations? Is log shipping viable/reliable for this type of use case? Is there a simpler means of accomplishing this?

As to the closure of the question as a duplicate: the answers to that question are from 2014, many of which are no longer accurate. Also, I am asking about the viability of using log shipping, and I would like to know other possible solutions outside of RDS and log shipping. This is not a duplicate!

JFrog replication option disabled


When I try to set up replication I see a blue star, and I'm not able to click on the replication link.

Any ideas?


SQL Server 2012 Transactional Replication


We have a requirement to set up a multiple-publisher (in our case, two publishers), single-subscriber model (all the publishers have articles with the same structure/definition, with identities reseeded for the articles so that there is no conflict at the subscriber).

The question here is: how do we propagate deployment (schema) changes to the subscriber? Once a schema change (ALTER COLUMN) is applied on the publishers, replication breaks, and that is how it works, since commands from multiple publishers are trying to sync the single subscriber with the same change.

How do we get around this issue? Please let me know the best approach to handling any deployment/schema changes for such a setup.
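One possible escape hatch, offered as a hedged suggestion rather than an established best practice for this topology: turn off automatic DDL replication on each publication, so the same ALTER is not pushed to the subscriber from both publishers, and then apply schema changes manually at the publishers and the subscriber in a controlled window:

-- Run at each publisher (the publication name is a placeholder)
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property    = N'replicate_ddl',
    @value       = 0;  -- 0 = do not replicate schema changes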

Transactional Replication breaks when schema is changed in Publisher


We have set up transactional replication where two publishers feed into one subscriber. Now, if we try to change the schema of either one publisher or both, replication breaks. We have tried pausing the replication and changing the schema at the publisher as well as the subscriber, but as soon as we restart the replication it breaks.

The only option we have found is to drop the replication, sync the schema, and then start the snapshot replication again.

Is there a better way of doing this?


PostgreSQL 9.5 cascading streaming replication


I have one master and two slaves in cascading replication: Master -> Slave1 -> Slave2, on PostgreSQL 9.5.

Slave2 crashed two days ago. The problem is on Slave1, where WAL files are accumulating. Archive mode is turned off. There are about 8,500 WAL files, but the archive_status folder contains files with the .done suffix.

How can I remove the WAL files which are "waiting" for Slave2? I changed the configuration on Slave1 (I commented out the options "wal_level = hot_standby" and "max_wal_senders = 1"), but without effect.

I have set wal_keep_segments = 100 on Slave1, but no effect. I tried commenting out wal_keep_segments, no effect either. I tried dropping the replication slot on Slave1, no effect.

What did I do wrong? I need to stop the accumulation of WAL files and remove the old WAL files which have been processed and are only "waiting". Is it possible to manually remove WAL files?

Thank you for your tips. Regards,

Jakub
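Even though dropping a slot was already attempted, the classic cause of WAL piling up on a cascading standby like Slave1 is a leftover replication slot for Slave2 that pins the old segments, so it is worth confirming that none remains. Standard PostgreSQL 9.5 SQL, with a placeholder slot name:

-- On Slave1: any inactive slot with an old restart_lsn is holding WAL back
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;

-- Dropping the stale slot allows the retained WAL segments to be recycled
-- at the next checkpoint.
SELECT pg_drop_replication_slot('slave2_slot');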

RDS Replication Error (Apply Error 1406 / Truncation)


I have a MySQL RDS instance as a master, created a read replica from it, and ran some schema change operations on it. To be specific, I changed the charset and collation of all the tables and columns from utf8 to utf8mb4. Things were replicating fine, but an error just occurred:

Apply Error 1406: Error; Data too long for column... etc

This is due to lowering the varchar length on some columns from 255 to 191.

I read that you can run some commands to skip replication errors, as described here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_skip_repl_error.html

However, would this "skip" the insert, or, just truncate the data and proceed with the insert?

I'd like the data to be truncated and still added to the table rather than aborting the entire operation, but I'm not sure if that is going to happen or not. Any suggestions would be welcome!
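For what it's worth, my understanding of the linked procedure is that it skips the failing replicated event entirely and resumes replication, rather than applying a truncated version of the row, so the affected data would simply be missing on the replica:

-- Run on the read replica: skips the current failing replication event and
-- restarts replication; the skipped statement is not applied at all.
CALL mysql.rds_skip_repl_error;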

Database replication or other concepts?


I'm new to database replication and distributed database topics. I have a desktop system that uses a PostgreSQL database.

This system is focused on managing some processes in one entity or enterprise. Now another business model has appeared: there are many entities, and one of them controls the others. So it's mandatory that the controller entity manages the information of all child entities. Each entity has its own database server and must contribute information to the controller entity's server.

The flow of information must be bidirectional, so each action on the controller database must be reflected in one or more entity databases, and each action on an entity database must be reflected in the controller database.

Galera not working with Replication


I am using MariaDB 10.2

I have node2 (2.2.2.2) running as a slave to some external database. I bootstrapped node2 as the first member of a Galera cluster "my_cluster".

Node1 (1.1.1.1) joined with rsync SST but after an hour I see that it's behind node2.

I tried running some inserts manually and they work both ways. However, anything added to node2 by the replication is completely ignored by node1.

node1:

MariaDB [db1]> SHOW STATUS LIKE 'wsrep%';
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe             | 0.000000                             |
| wsrep_apply_oool             | 0.000000                             |
| wsrep_apply_window           | 0.000000                             |
| wsrep_causal_reads           | 0                                    |
| wsrep_cert_deps_distance     | 0.000000                             |
| wsrep_cert_index_size        | 0                                    |
| wsrep_cert_interval          | 0.000000                             |
| wsrep_cluster_conf_id        | 6                                    |
| wsrep_cluster_size           | 2                                    |
| wsrep_cluster_state_uuid     | 09e3b6c8-343c-11e8-87cf-07a9813fdf95 |
| wsrep_cluster_status         | Primary                              |
| wsrep_commit_oooe            | 0.000000                             |
| wsrep_commit_oool            | 0.000000                             |
| wsrep_commit_window          | 0.000000                             |
| wsrep_connected              | ON                                   |
| wsrep_desync_count           | 0                                    |
| wsrep_evs_delayed            |                                      |
| wsrep_evs_evict_list         |                                      |
| wsrep_evs_repl_latency       | 0/0/0/0/0                            |
| wsrep_evs_state              | OPERATIONAL                          |
| wsrep_flow_control_paused    | 0.000000                             |
| wsrep_flow_control_paused_ns | 0                                    |
| wsrep_flow_control_recv      | 0                                    |
| wsrep_flow_control_sent      | 0                                    |
| wsrep_gcomm_uuid             | 8854b393-3713-11e8-8cfd-f7a101a4c6bf |
| wsrep_incoming_addresses     | 1.1.1.1:3306,2.2.2.2:3306            |
| wsrep_last_committed         | 0                                    |
| wsrep_local_bf_aborts        | 0                                    |
| wsrep_local_cached_downto    | 18446744073709551615                 |
| wsrep_local_cert_failures    | 0                                    |
| wsrep_local_commits          | 0                                    |
| wsrep_local_index            | 0                                    |
| wsrep_local_recv_queue       | 0                                    |
| wsrep_local_recv_queue_avg   | 0.000000                             |
| wsrep_local_recv_queue_max   | 1                                    |
| wsrep_local_recv_queue_min   | 0                                    |
| wsrep_local_replays          | 0                                    |
| wsrep_local_send_queue       | 0                                    |
| wsrep_local_send_queue_avg   | 0.000000                             |
| wsrep_local_send_queue_max   | 1                                    |
| wsrep_local_send_queue_min   | 0                                    |
| wsrep_local_state            | 4                                    |
| wsrep_local_state_comment    | Synced                               |
| wsrep_local_state_uuid       | 09e3b6c8-343c-11e8-87cf-07a9813fdf95 |
| wsrep_protocol_version       | 8                                    |
| wsrep_provider_name          | Galera                               |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>    |
| wsrep_provider_version       | 25.3.23(r3789)                       |
| wsrep_ready                  | ON                                   |
| wsrep_received               | 3                                    |
| wsrep_received_bytes         | 219                                  |
| wsrep_repl_data_bytes        | 0                                    |
| wsrep_repl_keys              | 0                                    |
| wsrep_repl_keys_bytes        | 0                                    |
| wsrep_repl_other_bytes       | 0                                    |
| wsrep_replicated             | 0                                    |
| wsrep_replicated_bytes       | 0                                    |
| wsrep_thread_count           | 2                                    |
+------------------------------+--------------------------------------+

node2:

MariaDB [db1]> SHOW STATUS LIKE 'wsrep%';
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe             | 0.000000                             |
| wsrep_apply_oool             | 0.000000                             |
| wsrep_apply_window           | 0.000000                             |
| wsrep_causal_reads           | 0                                    |
| wsrep_cert_deps_distance     | 0.000000                             |
| wsrep_cert_index_size        | 0                                    |
| wsrep_cert_interval          | 0.000000                             |
| wsrep_cluster_conf_id        | 6                                    |
| wsrep_cluster_size           | 2                                    |
| wsrep_cluster_state_uuid     | 09e3b6c8-343c-11e8-87cf-07a9813fdf95 |
| wsrep_cluster_status         | Primary                              |
| wsrep_commit_oooe            | 0.000000                             |
| wsrep_commit_oool            | 0.000000                             |
| wsrep_commit_window          | 0.000000                             |
| wsrep_connected              | ON                                   |
| wsrep_desync_count           | 0                                    |
| wsrep_evs_delayed            |                                      |
| wsrep_evs_evict_list         |                                      |
| wsrep_evs_repl_latency       | 0/0/0/0/0                            |
| wsrep_evs_state              | OPERATIONAL                          |
| wsrep_flow_control_paused    | 0.000000                             |
| wsrep_flow_control_paused_ns | 0                                    |
| wsrep_flow_control_recv      | 0                                    |
| wsrep_flow_control_sent      | 0                                    |
| wsrep_gcomm_uuid             | d1198d28-367a-11e8-a0ac-2382228e259f |
| wsrep_incoming_addresses     | 1.1.1.1:3306,2.2.2.2:3306            |
| wsrep_last_committed         | 0                                    |
| wsrep_local_bf_aborts        | 0                                    |
| wsrep_local_cached_downto    | 18446744073709551615                 |
| wsrep_local_cert_failures    | 0                                    |
| wsrep_local_commits          | 0                                    |
| wsrep_local_index            | 1                                    |
| wsrep_local_recv_queue       | 0                                    |
| wsrep_local_recv_queue_avg   | 0.100000                             |
| wsrep_local_recv_queue_max   | 2                                    |
| wsrep_local_recv_queue_min   | 0                                    |
| wsrep_local_replays          | 0                                    |
| wsrep_local_send_queue       | 0                                    |
| wsrep_local_send_queue_avg   | 0.000000                             |
| wsrep_local_send_queue_max   | 1                                    |
| wsrep_local_send_queue_min   | 0                                    |
| wsrep_local_state            | 4                                    |
| wsrep_local_state_comment    | Synced                               |
| wsrep_local_state_uuid       | 09e3b6c8-343c-11e8-87cf-07a9813fdf95 |
| wsrep_protocol_version       | 8                                    |
| wsrep_provider_name          | Galera                               |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>    |
| wsrep_provider_version       | 25.3.23(r3789)                       |
| wsrep_ready                  | ON                                   |
| wsrep_received               | 10                                   |
| wsrep_received_bytes         | 1081                                 |
| wsrep_repl_data_bytes        | 0                                    |
| wsrep_repl_keys              | 0                                    |
| wsrep_repl_keys_bytes        | 0                                    |
| wsrep_repl_other_bytes       | 0                                    |
| wsrep_replicated             | 0                                    |
| wsrep_replicated_bytes       | 0                                    |
| wsrep_thread_count           | 2                                    |
+------------------------------+--------------------------------------+

node1 my.cnf:

log_bin = /var/mysql/log/mysql-bin.log
max_binlog_size = 100M
expire_logs_days=3
max_binlog_cache_size = 2G
binlog_cache_size = 32K
max_binlog_stmt_cache_size = 2G
binlog_stmt_cache_size = 32K
binlog_format=row
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
innodb_flush_log_at_trx_commit=0
wsrep_on=ON
wsrep_slave_threads=1
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_cluster"
wsrep_cluster_address="gcomm://2.2.2.2"
# also tried below
#wsrep_cluster_address="gcomm://2.2.2.2,1.1.1.1"
wsrep_sst_receive_address=1.1.1.1:4444
wsrep_provider_options='ist.recv_addr=1.1.1.1:4568;'
wsrep_sst_method=rsync
wsrep_sst_donor="node2,"
wsrep_node_address="1.1.1.1"
wsrep_node_name="node1"

node2 my.cnf:

log_bin = /var/mysql/log/mysql-bin.log
max_binlog_size = 100M
expire_logs_days=3
max_binlog_cache_size = 2G
binlog_cache_size = 32K
max_binlog_stmt_cache_size = 2G
binlog_stmt_cache_size = 32K
server-id = 10
relay-log = /var/mysql/log/mysql-relay-bin.log
replicate-ignore-db = mysql
binlog_format=row
default-storage-engine=InnoDB
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
innodb_flush_log_at_trx_commit=0
wsrep_on=ON
wsrep_slave_threads=1
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="my_cluster"
# for bootstrapping
wsrep_cluster_address="gcomm://"
#wsrep_cluster_address="gcomm://2.2.2.2,1.1.1.1"
wsrep_sst_receive_address=2.2.2.2:4444
wsrep_provider_options='ist.recv_addr=2.2.2.2:4568;'
wsrep_sst_method=rsync
wsrep_node_address="2.2.2.2"
wsrep_node_name="node2"

Error while setting up a replica set on Windows (MongoDB)


I tried setting up the replica set on Windows, but I'm getting the error below. I checked the mongo processes and killed the existing process, but the error still occurs.

Please help me with this.

Thanks.

Commands executed:

C:\MongoDB\bin>mongo
MongoDB shell version: 2.6.6
connecting to: test
> cfg = {
... _id : "cri",
... members : [
... { _id:0, host:"INline1.corp:27001"},
... { _id:1, host:"INline2.corp:27002"},
... { _id:2, host:"INLN3.corp:27003"}
... ]
... }
{
      "_id" : "cri",
      "members" : [
              {
                      "_id" : 0,
                      "host" : "INline1.corp:27001"
              },
              {
                      "_id" : 1,
                      "host" : "INline2.corp:27002"
              },
              {
                      "_id" : 2,
                      "host" : "INLN3.corp:27003"
              }
      ]
}
> rs.help()
> cfg
{
      "_id" : "cri",
      "members" : [
              {
                      "_id" : 0,
                      "host" : "INline1.corp:27001"
              },
              {
                      "_id" : 1,
                      "host" : "INline2.corp:27002"
              },
              {
                      "_id" : 2,
                      "host" : "INLN3.corp:27003"
              }
      ]
}
> // rs.initiate(cfg)
> rs.initiate
function (c) { return db._adminCommand({ replSetInitiate: c }); }
> rs.initiate( cfg )
{ "ok" : 0, "errmsg" : "server is not running with --replSet" }
> rs.initiate()
C:\MongoDB\bin>

Error:

2015-02-22T00:48:57.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:48:58.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:48:59.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:00.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:01.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:02.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:03.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:04.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:05.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:06.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:07.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:08.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:09.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:10.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:11.331+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:12.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:13.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:14.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:15.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:16.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:17.332+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:18.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:19.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:20.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:21.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:22.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:23.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:24.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:25.333+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T00:49:25.916+0530 Ctrl-C signal
2015-02-22T00:49:25.916+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T00:49:25.916+0530 [consoleTerminate] now exiting
2015-02-22T00:49:25.917+0530 [consoleTerminate] dbexit:
2015-02-22T00:49:25.917+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T00:49:25.917+0530 [consoleTerminate] closing listening socket: 452
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T00:49:25.918+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T00:49:25.919+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T00:49:25.919+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T00:49:25.925+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T00:49:25.926+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T00:49:25.926+0530 [consoleTerminate] journalCleanup...
2015-02-22T00:49:25.927+0530 [consoleTerminate] removeJournalFiles
2015-02-22T00:49:25.929+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T00:49:25.929+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>mongo
MongoDB shell version: 2.6.6
connecting to: test
2015-02-22T00:52:06.973+0530 warning: Failed to connect to 127.0.0.1:27017, reason: errno:10061 No connection could be made because the target machine actively refused it.
2015-02-22T00:52:06.977+0530 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146 exception: connect failed

C:\MongoDB\bin>mongod
mongod --help for help and startup options
2015-02-22T00:52:14.995+0530 [initandlisten] MongoDB starting : pid=2668 port=27017 dbpath=\data\db\ 64-bit host=INLN50838607A
2015-02-22T00:52:14.996+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T00:52:14.996+0530 [initandlisten] db version v2.6.6
2015-02-22T00:52:14.997+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T00:52:14.997+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1')BOOST_LIB_VERSION=1_49
2015-02-22T00:52:14.997+0530 [initandlisten] allocator: system
2015-02-22T00:52:14.997+0530 [initandlisten] options: {}
2015-02-22T00:52:15.004+0530 [initandlisten] journal dir=\data\db\journal
2015-02-22T00:52:15.004+0530 [initandlisten] recover : no journal files present, no recovery needed
2015-02-22T00:52:15.210+0530 [initandlisten] waiting for connections on port 27017
2015-02-22T00:53:00.605+0530 [initandlisten] connection accepted from 127.0.0.1:65413 #1 (1 connection now open)
2015-02-22T00:53:15.209+0530 [clientcursormon] mem (MB) res:135 virt:1270
2015-02-22T00:53:15.209+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T00:53:15.209+0530 [clientcursormon]  connections:1
2015-02-22T00:53:52.268+0530 [conn1] replSet replSetInitiate admin command received from client
2015-02-22T00:54:23.497+0530 [conn1] end connection 127.0.0.1:65413 (0 connections now open)
2015-02-22T00:57:02.580+0530 [initandlisten] connection accepted from 127.0.0.1:65504 #2 (1 connection now open)
2015-02-22T00:57:10.148+0530 [conn2] end connection 127.0.0.1:65504 (0 connections now open)
2015-02-22T00:58:15.227+0530 [clientcursormon] mem (MB) res:135 virt:1266
2015-02-22T00:58:15.227+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T00:58:15.227+0530 [clientcursormon]  connections:0
2015-02-22T00:58:51.650+0530 [initandlisten] connection accepted from 127.0.0.1:49203 #3 (1 connection now open)
2015-02-22T00:58:59.370+0530 [conn3] replSet replSetInitiate admin command received from client
2015-02-22T01:03:15.246+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:03:15.246+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:03:15.246+0530 [clientcursormon]  connections:1
2015-02-22T01:04:56.891+0530 [conn3] end connection 127.0.0.1:49203 (0 connections now open)
2015-02-22T01:05:26.677+0530 [initandlisten] connection accepted from 127.0.0.1:49531 #4 (1 connection now open)
2015-02-22T01:08:15.266+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:08:15.266+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:08:15.266+0530 [clientcursormon]  connections:1
2015-02-22T01:13:15.321+0530 [clientcursormon] mem (MB) res:135 virt:1267
2015-02-22T01:13:15.321+0530 [clientcursormon]  mapped (incl journal view):1120
2015-02-22T01:13:15.321+0530 [clientcursormon]  connections:1
2015-02-22T01:13:28.501+0530 [conn4] replSet replSetInitiate admin command received from client
2015-02-22T01:17:09.207+0530 Ctrl-C signal
2015-02-22T01:17:09.207+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:17:09.208+0530 [consoleTerminate] now exiting
2015-02-22T01:17:09.208+0530 [consoleTerminate] dbexit:
2015-02-22T01:17:09.208+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:17:09.208+0530 [consoleTerminate] closing listening socket: 476
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:17:09.209+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:17:09.210+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:17:09.210+0530 [conn4] end connection 127.0.0.1:49531 (0 connections now open)
2015-02-22T01:17:09.218+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:17:09.234+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:17:09.234+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:17:09.234+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:17:09.251+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:17:09.251+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>
C:\MongoDB\bin>
C:\MongoDB\bin>mongod --port 27001 --dbpath c:\data\aneesh1 --replSet cri
2015-02-22T01:18:11.597+0530 [initandlisten] MongoDB starting : pid=9936 port=27001 dbpath=c:\data\aneesh1 64-bit host=INLN50838607A
2015-02-22T01:18:11.598+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T01:18:11.598+0530 [initandlisten] db version v2.6.6
2015-02-22T01:18:11.599+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T01:18:11.599+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2015-02-22T01:18:11.599+0530 [initandlisten] allocator: system
2015-02-22T01:18:11.599+0530 [initandlisten] options: { net: { port: 27001 }, replication: { replSet: "cri" }, storage: { dbPath: "c:\data\aneesh1" } }
2015-02-22T01:18:11.603+0530 [initandlisten] journal dir=c:\data\aneesh1\journal
2015-02-22T01:18:11.603+0530 [initandlisten] recover : no journal files present,no recovery needed
2015-02-22T01:18:11.629+0530 [initandlisten] waiting for connections on port 27001
2015-02-22T01:18:11.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:11.634+0530 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
2015-02-22T01:18:12.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:13.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:14.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:15.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:16.634+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:18:17.216+0530 Ctrl-C signal
2015-02-22T01:18:17.217+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:18:17.217+0530 [consoleTerminate] now exiting
2015-02-22T01:18:17.218+0530 [consoleTerminate] dbexit:
2015-02-22T01:18:17.218+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:18:17.218+0530 [consoleTerminate] closing listening socket: 472
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:18:17.219+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:18:17.220+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:18:17.220+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:18:17.229+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:18:17.230+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:18:17.230+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:18:17.230+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:18:17.232+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:18:17.232+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>
C:\MongoDB\bin>mongod --port 27001 --dbpath c:\data\aneesh1 --replSet cri
2015-02-22T01:23:27.978+0530 [initandlisten] MongoDB starting : pid=10800 port=27001 dbpath=c:\data\aneesh1 64-bit host=INLN50838607A
2015-02-22T01:23:27.979+0530 [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-02-22T01:23:27.979+0530 [initandlisten] db version v2.6.6
2015-02-22T01:23:27.979+0530 [initandlisten] git version: 608e8bc319627693b04cc7da29ecc300a5f45a1f
2015-02-22T01:23:27.979+0530 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1')BOOST_LIB_VERSION=1_49
2015-02-22T01:23:27.980+0530 [initandlisten] allocator: system
2015-02-22T01:23:27.980+0530 [initandlisten] options: { net: { port: 27001 }, replication: { replSet: "cri" }, storage: { dbPath: "c:\data\aneesh1" } }
2015-02-22T01:23:27.982+0530 [initandlisten] journal dir=c:\data\aneesh1\journal
2015-02-22T01:23:27.982+0530 [initandlisten] recover : no journal files present, no recovery needed
2015-02-22T01:23:28.009+0530 [initandlisten] waiting for connections on port 27001
2015-02-22T01:23:28.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:28.011+0530 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
2015-02-22T01:23:29.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:30.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:31.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:32.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:33.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:34.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:35.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:36.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:37.011+0530 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
2015-02-22T01:23:37.677+0530 Ctrl-C signal
2015-02-22T01:23:37.677+0530 [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends
2015-02-22T01:23:37.678+0530 [consoleTerminate] now exiting
2015-02-22T01:23:37.678+0530 [consoleTerminate] dbexit:
2015-02-22T01:23:37.678+0530 [consoleTerminate] shutdown: going to close listening sockets...
2015-02-22T01:23:37.679+0530 [consoleTerminate] closing listening socket: 476
2015-02-22T01:23:37.679+0530 [consoleTerminate] shutdown: going to flush diaglog...
2015-02-22T01:23:37.679+0530 [consoleTerminate] shutdown: going to close sockets...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: waiting for fs preallocator...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: lock for final commit...
2015-02-22T01:23:37.680+0530 [consoleTerminate] shutdown: final commit...
2015-02-22T01:23:37.689+0530 [consoleTerminate] shutdown: closing all files...
2015-02-22T01:23:37.690+0530 [consoleTerminate] closeAllFiles() finished
2015-02-22T01:23:37.690+0530 [consoleTerminate] journalCleanup...
2015-02-22T01:23:37.691+0530 [consoleTerminate] removeJournalFiles
2015-02-22T01:23:37.693+0530 [consoleTerminate] shutdown: removing fs lock...
2015-02-22T01:23:37.693+0530 [consoleTerminate] dbexit: really exiting now

C:\MongoDB\bin>

