StackExchange Replication Questions

MongoDB stale replication, odd timestamp after data flush


Right before my secondary goes stale, it first has this written in the log:

2017-04-25T06:48:02.991+0000 I STORAGE  [DataFileSync] flushing mmaps took 13530ms  for 3150 files

Then it gives me an odd stale message; note the "b" in the "oldest available" timestamp.

2017-04-25T06:50:03.815+0000 I REPL     [ReplicationExecutor] could not find member to sync from
2017-04-25T06:50:03.815+0000 E REPL     [rsBackgroundSync] too stale to catch up -- entering maintenance mode
2017-04-25T06:50:03.815+0000 I REPL     [rsBackgroundSync] our last optime : (term: 6, timestamp: Apr 25 06:32:08:333)
2017-04-25T06:50:03.815+0000 I REPL     [rsBackgroundSync] oldest available is (term: 6, timestamp: Apr 25 06:32:08:3b1)
2017-04-25T06:50:03.815+0000 I REPL     [rsBackgroundSync] See http://dochub.mongodb.org/core/resyncingaverystalereplicasetmember

I am running MongoDB 3.2.8. Is that "b" expected to be in there? Is this behaviour expected?
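
For context, a quick way to see whether a secondary has fallen outside the primary's oplog window is the standard shell helpers (nothing here is specific to this deployment):

    // On the primary: oplog size and the time range it currently covers
    rs.printReplicationInfo()

    // On any member: how far each secondary lags behind the primary
    rs.printSlaveReplicationInfo()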


Creating a replica set between three similar Mongo databases on my local machine


I am creating MongoDB replication on a local server by first manually creating three different database folders named MongoDB, MongoDB1 and MongoDB2. Each of them has data, bin and log subfolders and an individual mongod.cfg file. Each data subfolder also has a db subfolder, which is where I keep all my database files.

MongoDB, MongoDB1 and MongoDB2 are all database instances, and I am adding the following configuration to the mongod.cfg file of my first folder (MongoDB):

processManagement:
   fork: true
net:
   bindIp: 127.0.0.1
   port: 27017
storage:
   dbPath: C:\MongoDB\data\db
   journal:
      enabled: true
systemLog:
   destination: file
   path: C:\MongoDB\log\mongo.log
   logAppend: true
security:
   authorization: enabled
replication:
   replSetName: repl1   

I then change the parameters replSetName, dbPath, systemLog path and port number in each of the configuration files for the other two Mongo database folders (MongoDB1 and MongoDB2).

I am still not able to create a replica set between these three database folders.

  • Which step is wrong, and how do I fix it?
  • Am I creating the database folders in a wrong way?

After I start the MongoDB service for the first instance and enter the command below in the Windows command prompt to start creating the replica set, it keeps running, never finishes, and creates a lock file in my folder.

Input

mongod --port 27017 --dbpath "C:\Mongodb\data\db" --replSet repl1

Output

Fri Jul 07 09:07:36.787 [initandlisten] MongoDB starting : pid=15892 port=27017 dbpath=\data\db\ 64-bit host=LT-SRA-EB-27ZL
Fri Jul 07 09:07:36.789 [initandlisten] db version v2.4.9
Fri Jul 07 09:07:36.791 [initandlisten] git version: 52fe0d21959e32a5bdbecdc62057db386e4e029c
Fri Jul 07 09:07:36.792 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Fri Jul 07 09:07:36.792 [initandlisten] allocator: system
Fri Jul 07 09:07:36.792 [initandlisten] options: { replSet: "rpl1" }
Fri Jul 07 09:07:36.798 [initandlisten] journal dir=\data\db\journal
Fri Jul 07 09:07:36.799 [initandlisten] recover : no journal files present, no recovery needed
Fri Jul 07 09:07:36.854 [websvr] admin web console waiting for connections on port 28017
Fri Jul 07 09:07:36.854 [initandlisten] waiting for connections on port 27017
Fri Jul 07 09:07:36.867 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Fri Jul 07 09:07:36.867 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Fri Jul 07 09:07:41.517 [initandlisten] connection accepted from 127.0.0.1:56889 #1 (1 connection now open)
Fri Jul 07 09:07:41.526 [conn1] end connection 127.0.0.1:56889 (0 connections now open)
Fri Jul 07 09:07:44.513 [initandlisten] connection accepted from 127.0.0.1:56890 #2 (1 connection now open)
Fri Jul 07 09:07:44.525 [conn2] assertion 13435 not master and slaveOk=false ns:sc82rev161221_tracking_contact.ProcessingPool query:{ $query: { Scheduled: { $lte: new Date(1499432864509) } }, $orderby: { Scheduled: 1 } }
Fri Jul 07 09:07:44.525 [conn2]  ntoskip:0 ntoreturn:16
Fri Jul 07 09:07:44.525 [conn2] end connection 127.0.0.1:56890 (0 connections now open)
Fri Jul 07 09:07:46.529 [initandlisten] connection accepted from 127.0.0.1:56891 #3 (1 connection now open)
Fri Jul 07 09:07:46.870 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Fri Jul 07 09:07:56.505 [initandlisten] connection accepted from 127.0.0.1:56892 #4 (2 connections now open)
Fri Jul 07 09:07:56.515 [conn4] assertion 13435 not master and slaveOk=false ns:sc82rev161221_tracking_live.ProcessingPool query:{ $query: { Scheduled: { $lte: new D
  • Before creating the replica set, how do I create three database instances/servers with databases in them?

  • Is my method above correct for creating three database instances/servers?

  • May I know all the steps to create a replica set between these three databases on my local machine, using Windows cmd commands? (A sketch follows this list.)
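
As an illustration only (the ports for the second and third instances and the member list are assumptions, not a verified setup; the set name matches the replSetName above), once all three mongod processes are running, the set would be formed from the mongo shell roughly like this:

    // mongo --port 27017
    rs.initiate({
        _id: "repl1",                             // must match replSetName in every mongod.cfg
        members: [
            { _id: 0, host: "127.0.0.1:27017" },  // MongoDB
            { _id: 1, host: "127.0.0.1:27018" },  // MongoDB1 (assumed port)
            { _id: 2, host: "127.0.0.1:27019" }   // MongoDB2 (assumed port)
        ]
    })
    rs.status()  // verify all three members appear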

I know I am making a mistake somewhere, but I do not know where. Your feedback will help.

SQL Replication using Two Availability Groups


Is it possible to set up SQL replication across two Availability Group clusters, where one AG is the publisher (and the distributor, if not on a different server) and the other AG is the subscriber?

PostgreSQL synchronous replication consistency


If we compare the main types of replication (single-leader, multi-leader and leaderless), single-leader replication has the possibility of being linearizable. In my understanding, linearizability means that once a write completes, all later reads should return that value, or the value of a later write. In other words, it should give the impression that there is only one copy of the database, no more. So, I guess, no stale reads.

PostgreSQL's streaming replication can make all replicas synchronous using synchronous_standby_names, and it can be fine-tuned with the synchronous_commit option, which can be set to remote_apply so that the leader waits until the transaction is replayed on the standby (making it visible to queries). In the documentation, the paragraph about the remote_apply option states that this allows load balancing in simple cases with causal consistency.
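
For reference, a minimal postgresql.conf sketch of the setup being described (the standby names are placeholders, not taken from the question):

    # wait for BOTH named standbys on every commit
    synchronous_standby_names = 'FIRST 2 (standby1, standby2)'
    # COMMIT returns only after the standbys have replayed the WAL,
    # making the change visible to queries there
    synchronous_commit = remote_apply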

A few pages back, it says this:

"Some solutions are synchronous, meaning that a data-modifying transaction is not considered committed until all servers have committed the transaction. This guarantees that a failover will not lose any data and that all load-balanced servers will return consistent results no matter which server is queried."

So I'm struggling to understand what is guaranteed here, and what anomalies can happen if we load-balance read queries across the read replicas. Can there still be stale reads? Can I get different results when querying different replicas, even when no write has happened on the leader in between? My impression is yes, but I'm not really sure. If not, how does PostgreSQL prevent stale reads? I did not find any details on how this fully works under the hood. Does it use two-phase commit, some modification of it, or some other algorithm to prevent stale reads?

If it does not provide an option for no stale reads, is there a way to accomplish that? I saw that Pgpool has an option to load-balance only to replicas that are behind by no more than a defined threshold, but I did not understand whether it can be configured to load-balance only to replicas that are fully caught up with the leader.

It's really confusing to me whether anomalies can happen with fully synchronous replication in PostgreSQL.

I understand that a setup like this has availability problems, but that is not a concern for now.

Managing failover for MySQL nodes using HA Proxy


We have an S1<-M1<->M2->S2 setup of MySQL replicated nodes. These are now to be brought behind an HAProxy server to split reads from writes. We also intend to achieve automatic failover with this. However, write requests should be routed to M2 only when M1 fails; in the usual scenario we would prefer not to touch M2 at all. There seems to be no "balance" option in HAProxy that switches to M2 only when M1 fails. Please suggest how this can be achieved using HAProxy.

Writing round-robin across M1 and M2 is too time-consuming a solution to take up at this point.
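
For what it's worth, HAProxy's server-level backup keyword is the usual way to express this active/passive pattern; a minimal sketch with placeholder addresses (an illustration, not a tested config):

    listen mysql_writes
        bind *:3306
        mode tcp
        server m1 10.0.0.1:3306 check
        server m2 10.0.0.2:3306 check backup   # receives traffic only while m1 is down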

Restoring a MySQL database to a failed master


I have a master-master configuration in MySQL with two servers. One server should stay live on the network to serve requests (call it server A) and the other should be taken offline to push new code changes (server B).

My idea originally was that after running STOP SLAVE on both servers, server B could be shut down, updated, and even have a new database schema put in. After this, I thought that I could simply run START SLAVE on server B and have the entire database from server A replicated/mirrored back over to server B. However, this is not the case: restarting the slave, doing a CHANGE MASTER TO (...) and syncing up the log files does not replicate old changes like I want it to; it only replicates new writes from that point on.

I am looking for a way to bring server B up to speed with the latest database from server A, and then have server B continue to replicate changes in a master-master setup. Then I can continue the sequence of server upgrades by doing the same process but keeping server B online only.

Any solutions which require locking the tables will not work since I need to do this change without any downtime. Is this possible?
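
One common approach, sketched here under the assumption that all tables are InnoDB (which is what makes the lock-free dump possible; host and file names are placeholders):

    # On server A: consistent dump without locking InnoDB tables,
    # recording A's binlog coordinates inside the dump file
    mysqldump --single-transaction --master-data=2 --all-databases > serverA.sql

    # On server B: load the dump, then point replication at the
    # file/position recorded near the top of serverA.sql
    # (CHANGE MASTER TO ...; START SLAVE;)
    mysql < serverA.sql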

Replication subscribers show Expiring Soon/Expired in SQL Server


Can you please help me with transactional replication? In my SQL Server I am using transactional replication with four subscribers, and it was working fine. But now, when I check my replication, all subscriber statuses show 'Expiring Soon/Expired', and when I double-click on a subscriber the error message is 'no replicated transactions are available'. Please suggest what to do; you can check the attached image.

MySQL Replication : Duplicate entries - SQL_SLAVE_SKIP_COUNTER or delete row?


I have a system which replicates one database to four slave servers. Every now and then, when traffic is high, one or more of the slave servers hits a duplicate insert error and the slave stops running.

When this happens, I have two choices: I can either SET GLOBAL SQL_SLAVE_SKIP_COUNTER or I can delete the offending row in the slave. Both seem to work, and my logic says that given something happened to cause this problem, there is a possibility that the data in the slave is corrupted. Given that this can only happen on INSERTs, by deleting the row I guarantee the slave data will match the master once replication resumes. By skipping, if the data for that row is corrupted in the slave, it will remain corrupted.
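
For concreteness, the two options look like this on the affected slave (the DELETE is illustrative; the table and key would come from the duplicate-key error message):

    -- Option 1: skip the offending event and resume
    STOP SLAVE;
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;

    -- Option 2: remove the conflicting row so the replicated INSERT can apply
    DELETE FROM mydb.mytable WHERE id = 12345;  -- hypothetical table/key
    START SLAVE;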

Am I missing anything?

Further, given that this happens once every couple of months on two specific tables, is there any reason I shouldn't automate a process that triggers when this error is encountered, deletes that row in the slave, and restarts the slave?

EDIT: MySQL 5.5.29 and statement-based replication, I believe.


Add column to database, used with logical replication generates large WAL files


I have a PostgreSQL 10 master database with logical replication, but I have run into an issue. I added one more column (int, without a default value) to a big table on the master (39 GB, over 100 million rows) within a transaction, and updated it with the value of another column.

begin;
alter table test add column "onecolumn" int;
update test set "onecolumn" = "secondcolumn";
commit;

In the end it generated about 39 GB of WAL, so Postgres effectively replicated the whole table instead of only the new column.

Why did Postgres generate such big WAL files? The real weight of adding one int column should be much less.

The replica identity of the table is set to DEFAULT.
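
Worth noting: the ALTER itself is cheap; it is the UPDATE that writes a new version of every row (and hence that much WAL), whether or not replication is involved. A common mitigation, sketched here under the assumption that the table has an integer primary key named id, is to backfill in batches so each transaction produces a bounded amount of WAL:

    -- illustrative batched backfill; "id" is an assumed primary key
    update test set "onecolumn" = "secondcolumn" where id between 1 and 1000000;
    update test set "onecolumn" = "secondcolumn" where id between 1000001 and 2000000;
    -- ...continue in batches, committing between them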

Postgres Logical Replication For Specific tables


I'm using logical replication to move data from Postgres to a search engine via a background process, but I am only concerned with a small set of tables. Is there a way to specify which tables a replication slot is concerned with?

I have a replication user set up and am able to receive the changes just fine. I've implemented a little script to handle the binary protocol and forward changes to my search engine with:

START_REPLICATION SLOT <SLOT NAME> LOGICAL 0/00000000

But this gives me every change for every table. I see that this is possible between postgres servers via publications and subscriptions, but this doesn't seem to work from an application client. Maybe I am missing something?

Is there a way to whitelist tables for replication slots this way?
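
If the slot was created with the built-in pgoutput plugin, publications can be used from an application client as well; a sketch with placeholder names (search_pub, myslot, and the table list are assumptions):

    -- on the server: limit the stream to the tables of interest
    CREATE PUBLICATION search_pub FOR TABLE users, documents;

    -- when streaming, pass the publication as a plugin option:
    START_REPLICATION SLOT myslot LOGICAL 0/00000000 ("proto_version" '1', "publication_names" 'search_pub')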

Created RDS db-instance replica successfully; now how to connect it to the server?


I created an AWS RDS MySQL db-instance successfully with the help of the GitHub Pages documentation. Thanks a lot, 18F/OpenFEC project.

Now, instead of a Route 53 configuration for network connectivity, I want to configure it directly through the GoDaddy DNS Manager, because a subdomain is already configured for a production server with the RDS db-instance endpoint.

I can create another subdomain in the DNS Manager with the replica db-instance's endpoint. But how do I achieve connections to both the db-instance and the replica db-instance?

Please guide me on how to manage database query traffic without updating the production app.

Facing an issue while setting up MySQL Group Replication with MySQL Docker images


Log of the node:

root@worker01:~# docker logs node1
[Entrypoint] MySQL Docker Image 8.0.21-1.1.17
[Entrypoint] Initializing database
2020-08-05T13:14:59.546377Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.21) initializing of server in progress as process 23
2020-08-05T13:14:59.655627Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-08-05T13:15:35.222094Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-08-05T13:16:56.203484Z 0 [Warning] [MY-013501] [Server] Ignoring --plugin-load[_add] list as the server is running with --initialize(-insecure).
2020-08-05T13:17:56.492555Z 0 [ERROR] [MY-000067] [Server] unknown variable 'group-replication-start-on-boot=OFF'.
2020-08-05T13:17:56.493399Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2020-08-05T13:17:56.494797Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-05T13:18:46.000320Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.
[Entrypoint] MySQL Docker Image 8.0.21-1.1.17
[Entrypoint] Starting MySQL 8.0.21-1.1.17
2020-08-05T13:19:32.543042Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.21) starting as process 22
2020-08-05T13:19:32.580976Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-08-05T13:19:35.142750Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysqld: Table 'mysql.plugin' doesn't exist
2020-08-05T13:19:35.445862Z 0 [ERROR] [MY-010735] [Server] Could not open the mysql.plugin table. Please perform the MySQL upgrade procedure.
2020-08-05T13:19:35.502671Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2020-08-05T13:19:35.894268Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-08-05T13:19:36.590170Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-08-05T13:19:36.733128Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-08-05T13:19:36.734145Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2020-08-05T13:19:37.088526Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2020-08-05T13:19:37.090739Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist
2020-08-05T13:19:37.092141Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition.
2020-08-05T13:19:37.095750Z 0 [ERROR] [MY-010326] [Server] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
2020-08-05T13:19:37.096903Z 0 [ERROR] [MY-010952] [Server] The privilege system failed to initialize correctly. For complete instructions on how to upgrade MySQL to a new version please see the 'Upgrading MySQL' section from the MySQL manual.
2020-08-05T13:19:37.099193Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-05T13:19:38.715864Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.
root@worker01:~#
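
The first [ERROR] line suggests mysqld rejected group-replication-start-on-boot while running with --initialize, i.e. before the Group Replication plugin could be loaded. A my.cnf sketch of the usual workaround, the loose- prefix, which turns options for a not-yet-loaded plugin into warnings instead of fatal errors (the values shown are illustrative assumptions):

    [mysqld]
    # loose- prefix: tolerated until the group_replication plugin is loaded
    loose-group_replication_start_on_boot = OFF
    loose-group_replication_group_name    = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder UUID
    loose-group_replication_local_address = "node1:33061"                           # placeholder host:port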


mongo replicated shard member not able to recover, stuck in STARTUP2 mode


I have the following setup for a sharded replica set in an Amazon VPC:

mongo1: 8G RAM Duo core (Primary)

mongo2: 8G RAM Duo core (Secondary)

mongo3: 4G RAM (Arbiter)

mongo1 is the primary member in the replica set, with a two-shard setup:

 mongod --port 27000 --dbpath /mongo/config --configsvr

 mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1

 mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

mongo2 is the secondary member in the replica set and mirrors mongo1 exactly:

 mongod --port 27000 --dbpath /mongo/config --configsvr

 mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1   # Faulty process

 mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

Then, for some reason, the 27001 process on mongo2 crashed last week due to running out of memory (cause unknown). When I discovered the issue (the application still worked, getting data from the primary) and restarted the 27001 process, it was too late to catch up with shard1 on mongo1. So I followed 10gen's recommendation:

  • emptied the /mongo/shard1 directory
  • restarted the 27001 process using the command

    mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1

However, it's now been 24+ hours and the node is still in STARTUP2 mode. I have about 200 GB of data in shard1, and it appears that about 160 GB made it over to /mongo/shard1 on mongo2. Following is the replica set status output (run on mongo2):

rssh1:STARTUP2> rs.status()
{
     "set" : "rssh1",
     "date" : ISODate("2012-10-29T19:28:49Z"),
     "myState" : 5,
     "syncingTo" : "mongo1:27001",
     "members" : [
          {
               "_id" : 1,
               "name" : "mongo1:27001",
               "health" : 1,
               "state" : 1,
               "stateStr" : "PRIMARY",
               "uptime" : 99508,
               "optime" : Timestamp(1351538896000, 3),
               "optimeDate" : ISODate("2012-10-29T19:28:16Z"),
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          },
          {
               "_id" : 2,
               "name" : "mongo2:27001",
               "health" : 1,
               "state" : 5,
               "stateStr" : "STARTUP2",
               "uptime" : 99598,
               "optime" : Timestamp(1351442134000, 1),
               "optimeDate" : ISODate("2012-10-28T16:35:34Z"),
               "self" : true
          },
          {
               "_id" : 3,  
               "name" : "mongoa:27901",
               "health" : 1,
               "state" : 7,
               "stateStr" : "ARBITER",
               "uptime" : 99508,
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          }
     ],
     "ok" : 1
}

rssh1:STARTUP2> 

It would appear that most of the data from the primary was replicated, but not all. The logs show some errors, but I don't know if they're related:

Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:config.system.indexes query:{ expireAfterSeconds: { $exists: true } }

Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over config.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }

Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: config 10065 invalid parameter: expected an object ()

Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:gf2.system.indexes query:{ expireAfterSeconds: { $exists: true } }

Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over gf2.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }

Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: gf2 10065 invalid parameter: expected an object ()

Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:kombu_default.system.indexes query:{ expireAfterSeconds: { $exists: true } }

Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over kombu_default.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }

Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: kombu_default 10065 invalid parameter: expected an object ()

Everything on the primary appeared to be fine; no errors in the log.

I tried the steps twice, once with the mongo config server running and once with it down, with the same result both times.

This is a production setup and I really need to get the replica set back up and working; any help is much appreciated.

Tomcat 8 DeltaManager vs BackupManager session replication


I'm going to configure a 2-node cluster on separate AWS EC2 instances with Tomcat 8 installed.

I need to configure Tomcat session replication.

According to the Tomcat 8 Clustering/Session Replication HOW-TO documentation:

In this release of session replication, Tomcat can perform an all-to-all replication of session state using the DeltaManager or perform backup replication to only one node using the BackupManager. The all-to-all replication is an algorithm that is only efficient when the clusters are small. For larger clusters, to use a primary-secondary session replication where the session will only be stored at one backup server simply setup the BackupManager.

Could you please tell me what it means that "clusters are small"?

Is it 2, 5, 100, 1000 nodes, or what?


Error 20598 - The row was not found at the Subscriber when applying the replicated command


I am facing error 20598 with transactional replication:

The row was not found at the Subscriber when applying the replicated command.

Normally this error occurs when an UPDATE or DELETE statement is executed at the publisher for a primary key value and the record (against which the UPDATE/DELETE was executed) does not exist in the subscriber database.

But in my case the scenario is different.

I diagnosed the issue and found that the record does exist in the article/table of the subscriber database, because when I executed the command (retrieved with the help of sp_browsereplcmds) at the subscriber, it ran successfully.
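
For readers following along, retrieving the failing command from the distribution database looks roughly like this (the sequence numbers are placeholders; the real ones come from the Distribution Agent's error details):

    USE distribution;
    EXEC sp_browsereplcmds
        @xact_seqno_start = '0x00000000000000000000',  -- placeholder
        @xact_seqno_end   = '0x00000000000000000000';  -- placeholder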

What may be the possible reason for it?

I'm using SQL Server 2016 on both sides.

Distribution clean up job in transactional replication removed records but not files


The Distribution clean up job ran without errors according to its schedule, but I noticed that the snapshot files were not removed even when they were created beyond the max_distretention period. Records from msrepl_commands and msrepl_transactions were removed, but the files were not. (The relevant settings are listed below; a sketch of the underlying cleanup call follows the list.)

  • immediate_sync = 1
  • max_distretention = 72 hours
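
For reference, the clean up job boils down to a call like the following (the retention value mirrors the setting above; shown for illustration, not as a recommendation to run it manually):

    -- what the "Distribution clean up: distribution" job executes
    EXEC distribution.dbo.sp_MSdistribution_cleanup
        @min_distretention = 0,    -- hours
        @max_distretention = 72;   -- hours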

Migrate replication from SQL Server 2008 R2 to SQL Server 2016 - Old to new hardware


I will shortly be working on a project that involves moving replicated databases off an old SQL Server instance to a new one. I haven't done this before, so I don't have much knowledge beyond my own research.

As of today, our current configuration sits on a Windows 2008 R2 server with SQL Server 2008 R2. The publisher and the distributor are the same SQL Server instance (e.g. sqla) and the subscriber is sqlb; sqlb in turn acts as a publisher and distributor for sqlc. These are all pull subscriptions with one-way transactional and snapshot replication.

Now, back to my original question: is there a way to migrate the databases from the old server to the new one with the least downtime, so that I don't have to script everything out and reinitialize the subscriptions? That would take a lot of time (which is what most people on Google suggest). Are there any hacks or tricks to make this work without causing problems, or a step-by-step procedure to follow even if we use the scripting method?

MySQL Group Replication multi-primary - add new member


I am configuring MySQL Group Replication in multi-primary mode to replicate all databases between two members, allowing writes on either.

I have my two members IP addresses configured in the whitelist and seed list:

loose-group_replication_ip_whitelist = "x.x.x.x,y.y.y.y"
loose-group_replication_group_seeds = "x.x.x.x:33061,y.y.y.y:33061"

My understanding is that if I want to add a third member in a month or two, I will need to update both members' my.cnf files to add the third member to the whitelist and seed list. I will then need to restart the mysql services for the change to take effect. What is the best practice here? I see two options:

Option 1: restart the members one at a time such that the group never becomes empty.

  • Restart mysql on member1
  • Re-join the group with START GROUP_REPLICATION;
  • Restart mysql on member2
  • Re-join the group with START GROUP_REPLICATION;
  • Start mysql on member3 with START GROUP_REPLICATION;

Option 2: stop all members and re-bootstrap

  • Stop mysql on both existing members
  • Bootstrap the group on member1
  • Join the group on member2 and member3

Option 1 should result in no downtime, whereas Option 2 would have downtime. Are there any other options I'm missing, or is Option 1 the best practice? (One more possibility is sketched below.)
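
A possible third option, noted here as an assumption to verify against your exact version: both settings are dynamic system variables, so each member can pick up the new list without a mysqld restart, provided Group Replication is stopped on that member while the change is made:

    -- run on one member at a time; the group keeps quorum throughout
    STOP GROUP_REPLICATION;
    SET GLOBAL group_replication_ip_whitelist = 'x.x.x.x,y.y.y.y,z.z.z.z';
    SET GLOBAL group_replication_group_seeds  = 'x.x.x.x:33061,y.y.y.y:33061,z.z.z.z:33061';
    START GROUP_REPLICATION;
    -- also mirror the change in my.cnf so it survives the next restart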

Unable to replicate this ggplot2 plot


I am unable to replicate an example from the ggrough package (https://xvrdm.github.io/ggrough/articles/Customize%20chart.html). In particular, I am trying to replicate the following plot (minus the font aspects):

[Image: the target hand-drawn-style stacked area chart from the ggrough article]

The code is from the same link above under the "Kindergarten" header.

I am using the following code:

library(hrbrthemes)
library(tidyverse)
library(gcookbook)
library(ggplot2)
library(ggrough)
ggplot(uspopage, aes(x=Year, y=Thousands, fill=AgeGroup)) + 
    geom_area(alpha=0.8) +
    scale_fill_ipsum() +
    scale_x_continuous(expand=c(0,0)) +
    scale_y_comma() -> p 

options <- list(GeomArea=list(fill_style="hachure", 
                              angle_noise=0.5,
                              gap_noise=0.2,
                              gap=1.5,
                              fill_weight=1))
get_rough_chart(p, options)

However, I am unable to replicate the above. Here is what I get:

[Image: the actual rendered output, with the shaded areas missing]

Again, I am not worried about the fonts, but I do want the shaded geom_area to work; it currently doesn't render at all. For reference, here is what the p object yields (i.e., the plot before it goes through ggrough processing):

[Image: the plot p as rendered by ggplot2, before ggrough processing]

Also note that I am able to replicate the "Blueprint" example, which uses geom_col. So it appears that something is going wrong with ggrough's processing of geom_area, but I am not sure what.
