How can I set a required file for a cloudlet and add that file to a datacenter in CloudSim, so the cloudlet can work with the file?
I used File f1 = new File("f1", 1024); datacenter2.addFile(f1); together with cloudlet1.setUserId(brokerId1); cloudlet1.addRequiredFile("f1");, but if I remove "datacenter2.addFile(f1);" it works fine, which makes no sense to me. Thanks in advance.
replication error
I am getting an error when connecting from the slave (MySQL 5.7) to the master (MySQL 5.1):
Slave I/O for channel '': error connecting to master 'replica@hostname' - retry-time: 60 retries: 6, Error_code: 2027
Please help me solve this error.
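For context, the replica is pointed at the master roughly like this (the host and user come from the error above; the password and binlog coordinates are placeholders, not my real values):

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'hostname',              -- placeholder host from the error message
  MASTER_USER = 'replica',
  MASTER_PASSWORD = '********',
  MASTER_LOG_FILE = 'mysql-bin.000001',  -- placeholder coordinates
  MASTER_LOG_POS = 4;
START SLAVE;
SHOW SLAVE STATUS\G                      -- Last_IO_Errno shows 2027 here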
postgres wal sender replication timeout during pg_basebackup
Let me start with the caveat that I am still green with Postgres.
I am working on a Postgres 9.2 Active/Standby cluster on Debian Wheezy for an application, based on the ClusterLabs pgsql cluster documentation.
In the lab I am able to get this working without a problem. But on the production cluster I'm building, I keep running into a problem.
I brought the database files over from the current single production Postgres server. By this I mean I shut down Postgres, tarred up the data directory, and copied it over to the cluster's Master node. I put the files in place, set the permissions, and was able to start up Postgres on the Master via corosync just fine.
In preparing the slave, I used the pg_basebackup tool to bring the database over from the Master and this is where I keep having issues. As it is transferring, at about 57% I see the error:
$ pg_basebackup -h db-master -U u_repl -D /db/data/postgresql/9.2/main/ -X stream -P
pg_basebackup: could not receive data from WAL stream: SSL connection has been closed unexpectedly
176472/176472 kB (100%), 1/1 tablespace
pg_basebackup: child process exited with error 1
And on the server, I see:
2016-04-06 21:05:31 UTC LOG: terminating walsender process due to replication timeout
But the transfer doesn't stop and keeps going to completion.
I found this question here on stackexchange about setting "ssl_renegotiation_limit" to 0, but this didn't make much difference.
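For reference, these are the two settings I have been looking at on the master (9.2 parameter names; ssl_renegotiation_limit = 0 is the change that question suggested):

SHOW ssl_renegotiation_limit;   -- set to 0 per that question; made little difference
SHOW replication_timeout;       -- the timeout behind "terminating walsender process due to replication timeout"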
Anyone have any ideas? I am completely baffled as to why this would error, but keep on going just fine. It is the same procedure I used in the lab setup... the only difference is that the production database is much bigger in size.
Thoughts?? Thank you kindly! -Peter.
MySQL failover - Master to Master Replication
My company is trying to implement a MySQL failover mechanism, to achieve higher availability in our webservices tier - we commercialize a SaaS solution. To that end we have some low-end VMs scattered through different geographical locations, each containing a MySQL 5.5 server with several DBs, that for the time being are merely slave-replicating from the production server - the objective up until now was just checking the latency and general resilience of MySQL replication.
The plan, however, is to add a Master-Master replication environment between two servers in two separate locations, and these two instances would handle all the DB writes. The idea wouldn't necessarily imply concurrency; rather, the intention is to have a single one of the instances handling the writes, and upon a downtime situation to use a DNS failover service to direct the requests to the secondary server. After the primary comes back online, the binary log generated in the meantime on the secondary would be replicated back, and the DNS failover would restore the requests to the first one.
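To make the plan concrete, this is roughly the configuration I have in mind for the two masters (a sketch with assumed values, nothing that is in place yet):

-- On master A: stagger auto-increment values in case both servers ever accept writes
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- On master B:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;
-- Each server also replicates from the other (host and user are placeholders)
CHANGE MASTER TO MASTER_HOST = 'other-master', MASTER_USER = 'repl', MASTER_PASSWORD = '********';
START SLAVE;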
I am not an experienced administrator, so I'm asking for your own thoughts and experiences. How wrong is this train of thought? What can obviously go wrong? Are there any much better alternatives? Bash away!
Thanks!
Replication: Procedure or function sp_MSupd_PROC has too many arguments specified
- Source is one service pack behind destination (2012). I know service packs can be issues here, but it looks like this is only for versions 2000 and 2005.
- When using sp_helptext sp_MSupd_PROC, the object uses a CASE WHEN on the binary input (last parameter) and updates or adds columns with the other parameters where the primary key column is equal to the primary key passed in. For the non-SQL types reading this: is there an Entity Framework approach to this? This entire replication process looks slow and completely outdated, and I highly doubt it is using efficient .NET code.
- Using sp_help on the objects on both the source and destination shows that they're both the same. Also, I did look at this briefly, but no specific transaction number comes with the error, so I can't do his second step.
- I did a quick verification of the data in the table, confirming that it matches the parameters shown by sp_helptext.
I haven't found anything else I can do to troubleshoot here, and updating service packs right now isn't an option. Note that this replication was fine yesterday and the day before, so what would cause it to be fine for multiple days but not now? I realize that if users changed or updated the source or destination object, that would cause problems, but I don't see that being the case.
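For completeness, these are the exact checks mentioned above, run on both the source and the destination (the object name comes from the error message):

EXEC sp_helptext 'sp_MSupd_PROC';   -- shows the CASE WHEN logic and the parameter list
EXEC sp_help 'sp_MSupd_PROC';       -- output is identical on source and destination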
Thanks.
pouchdb - secure replication with remote LevelDB
I am keen on using PouchDB in browser memory for an Angular application. This PouchDB will replicate from a remote LevelDB database that is fed key-value pairs by an algorithm. So, on the remote end, I would install PouchDB-Server. On the local end, I would do the following (as described here) at a Node prompt:
var PouchDB = require('pouchdb');   // needed at a Node prompt

var localDB = new PouchDB('mylocaldb');
var remoteDB = new PouchDB('https://remote-ip-address:5984/myremotedb');

localDB.sync(remoteDB, {
  live: true
}).on('change', function (change) {
  // yo, something changed!
}).on('error', function (err) {
  // yo, we got an error! (maybe the user went offline?)
});
How do I start a PouchDB instance that supports TLS for live replication, as described in the snippet above?
Where is the "log" stored in passive replication?
In passive replication, a "log" of computations is stored so that, if the primary node fails, a backup node may take over as the primary and recover the lost work of the dead primary. The primary node is not in constant communication with the backup nodes; instead, it makes sure that they don't fall behind by periodically sending a "checkpoint" state to them.
However, I am unsure where the "log" is actually stored. It obviously can't be stored on the primary node, because then the log would die along with the node. It can't be stored on the backup nodes, because that would require the primary node to be in constant communication with the backup nodes.
I first assumed that the log is stored in some external "log node" that the client or primary node communicates with. However, this would require a constant back-and-forth communication, which is what we're trying to avoid with the whole log-and-checkpoint system. The sources I read from don't go into sufficient detail on where the log is stored. It's just treated as some abstract object that is available to all nodes at all times, which isn't a realistic assumption.
MySQL slave server getting stopped after each replication request from Master
A basic master-slave MySQL configuration has been set up on a Windows machine. The master and slave servers are running on localhost with different ports.
Now, when an update or insert is executed on the master server, the slave server stops after that event. Once I restart the slave server and check again, the update/insert has been successfully applied on the slave through the replication setup.
What could be the possible root cause of this issue?
Log of show slave status\G :
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 127.0.0.1
Master_User: masteradmin
Master_Port: 3307
Connect_Retry: 60
Master_Log_File: USERMAC38-bin.000007
Read_Master_Log_Pos: 840
Relay_Log_File: USERMAC38-relay-bin.000004
Relay_Log_Pos: 290
Relay_Master_Log_File: USERMAC38-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 840
Relay_Log_Space: 467
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 63ac2f83-44ac-11e5-bafe-d43d7e3ca358
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Error log of slave before it got stopped :
'CHANGE MASTER TO executed'. Previous state master_host='127.0.0.1', master_port= 3307, master_log_file='USERMAC38-bin.000008', master_log_pos= 123, master_bind=''. New state master_host='127.0.0.1', master_port= 3307, master_log_file='USERMAC38-bin.000013 [truncated, 295 bytes total]
Storing MySQL user name or password information in the master.info repository is not secure and is therefore not recommended. Please see the MySQL Manual for more about this issue and possible alternatives.
Slave I/O thread: connected to master 'masteradmin@127.0.0.1:3307',replication started in log 'USERMAC38-bin.000013' at position 498
Slave SQL thread initialized, starting replication in log 'USERMAC38-bin.000013' at position 498, relay log '.\USERMAC38-relay-bin.000001' position: 4
General log of slave before it got stopped :
150819 11:04:44 10 Query stop slave
150819 11:04:45 8 Query SHOW GLOBAL STATUS
150819 11:04:48 8 Query SHOW GLOBAL STATUS
150819 11:04:51 8 Query SHOW GLOBAL STATUS
10 Query CHANGE MASTER TO MASTER_HOST = '127.0.0.1' MASTER_USER = 'masteradmin' MASTER_PASSWORD = <secret> MASTER_PORT = 3307 MASTER_LOG_FILE = 'USERMAC38-bin.000013' MASTER_LOG_POS = 498
150819 11:04:54 8 Query SHOW GLOBAL STATUS
150819 11:04:55 10 Query start slave
11 Connect Out masteradmin@127.0.0.1:3307
150819 11:04:57 8 Query SHOW GLOBAL STATUS
150819 11:05:00 8 Query SHOW GLOBAL STATUS
150819 11:05:02 10 Query show slave status
150819 11:05:03 8 Query SHOW GLOBAL STATUS
150819 11:05:06 8 Query SHOW GLOBAL STATUS
150819 11:05:09 8 Query SHOW GLOBAL STATUS
150819 11:05:12 8 Query SHOW GLOBAL STATUS
150819 11:05:15 8 Query SHOW GLOBAL STATUS
150819 11:05:18 8 Query SHOW GLOBAL STATUS
150819 11:05:21 8 Query SHOW GLOBAL STATUS
150819 11:05:24 8 Query SHOW GLOBAL STATUS
150819 11:05:27 8 Query SHOW GLOBAL STATUS
150819 11:05:30 8 Query SHOW GLOBAL STATUS
150819 11:05:33 8 Query SHOW GLOBAL STATUS
150819 11:05:37 8 Query SHOW GLOBAL STATUS
150819 11:05:40 8 Query SHOW GLOBAL STATUS
150819 11:05:43 8 Query SHOW GLOBAL STATUS
150819 11:05:46 8 Query SHOW GLOBAL STATUS
150819 11:05:49 8 Query SHOW GLOBAL STATUS
150819 11:05:52 8 Query SHOW GLOBAL STATUS
150819 11:05:55 8 Query SHOW GLOBAL STATUS
150819 11:05:58 8 Query SHOW GLOBAL STATUS
150819 11:06:01 8 Query SHOW GLOBAL STATUS
150819 11:06:04 8 Query SHOW GLOBAL STATUS
150819 11:06:07 8 Query SHOW GLOBAL STATUS
150819 11:06:10 8 Query SHOW GLOBAL STATUS
150819 11:06:13 8 Query SHOW GLOBAL STATUS
150819 11:06:16 8 Query SHOW GLOBAL STATUS
150819 11:06:18 12 Query BEGIN
12 Query COMMIT /* implicit, from Xid_log_event */
150819 11:06:19 8 Query SHOW GLOBAL STATUS
Error log after restarting slave :
You need to use --log-bin to make --log-slave-updates work.
You need to use --log-bin to make --binlog-format work.
Plugin 'FEDERATED' is disabled.
2015-08-19 12:11:26 150 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use Windows interlocked functions
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: CPU does not support crc32 instructions
InnoDB: Initializing buffer pool, size = 165.0M
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 8556085
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 8556558
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
2015-08-19 12:11:27 1f64 InnoDB: Warning: table 'mysql/innodb_index_stats'
InnoDB: in InnoDB data dictionary has unknown flags 50.
2015-08-19 12:11:27 1f64 InnoDB: Warning: table 'mysql/innodb_table_stats'
InnoDB: in InnoDB data dictionary has unknown flags 50.
InnoDB: 1.2.10 started; log sequence number 8556558
Server hostname (bind-address): '127.0.0.1'; port: 3309
- '127.0.0.1' resolves to '127.0.0.1';
Server socket created on IP: '127.0.0.1'.
2015-08-19 12:11:27 150 InnoDB: Warning: table 'mysql/slave_worker_info'
InnoDB: in InnoDB data dictionary has unknown flags 50.
Recovery from master pos 2235 and file USERMAC38-bin.000013.
Storing MySQL user name or password information in the master.info repository is not secure and is therefore not recommended. Please see the MySQL Manual for more about this issue and possible alternatives.
Slave I/O thread: connected to master 'masteradmin@127.0.0.1:3307',replication started in log 'USERMAC38-bin.000013' at position 2235
Event Scheduler: Loaded 0 events
E:\2-Softwares\mysql-5.6.10-winx64\bin\mysqld.exe: ready for connections.
Version: '5.6.10-log' socket: '' port: 3309 MySQL Community Server (GPL)
Slave SQL thread initialized, starting replication in log 'USERMAC38-bin.000013' at position 2235, relay log '.\USERMAC38-relay-bin.000011' position: 4
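For what it's worth, these are the only checks I know to run on the slave right after it stops (the variable names come from the error log above):

SHOW VARIABLES LIKE 'log_bin';            -- the error log says --log-bin is needed for --log-slave-updates
SHOW VARIABLES LIKE 'log_slave_updates';
SHOW VARIABLES LIKE 'binlog_format';      -- also flagged in the error log
SHOW SLAVE STATUS\G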
SQL Server 2016 AlwaysOn to SQL Server 2012 Replication
Planning to implement transactional replication from a SQL Server 2016 AlwaysOn database to a SQL Server 2012 standalone instance. A Microsoft blog recommends using a distributor on an instance other than the AG instance. Can I use SQL Server 2012 as the distributor, or is only SQL Server 2016 possible as the distributor?
Row-row based logging in a blackhole replication filter setup?
I am using two MySQL server instances on the same server to filter replication to a third external server. My filter slave is using the blackhole engine as described here: https://dev.mysql.com/doc/refman/5.7/en/blackhole-storage-engine.html
Both master and slave use statement based replication. The documentation says:
Inserts into a BLACKHOLE table do not store any data, but if statement based binary logging is enabled, the SQL statements are logged and replicated to slave servers. This can be useful as a repeater or filter mechanism.
The above statement makes me assume that if I had both of my MySQL instances set to row based replication, nothing would make it to the third, external database. Which kind of makes sense since there are no actual rows in the filtering blackhole database.
However, I have been thinking... Would it not be possible for the filtering middle instance to simply pass on any row based instructions it receives to its own binlog, i.e. would a row-row filtering blackhole setup not work?
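To make the current setup concrete, here is a minimal sketch of the filter instance as I understand it (the database and table names are made up for illustration):

-- On the filtering middle instance: same schema as the master, but BLACKHOLE,
-- so nothing is stored while statement-based events still reach its binlog.
SET GLOBAL binlog_format = 'STATEMENT';   -- current setting on both master and filter
CREATE TABLE mydb.t (id INT PRIMARY KEY, val VARCHAR(100)) ENGINE = BLACKHOLE;
-- The question: with binlog_format = 'ROW' on both instances, would the row events
-- this table receives still be relayed into the filter instance's own binlog?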
How to add rule to migrate on node failure in k8s
I have k8s cluster running on 2 nodes and 1 master in AWS.
When I changed the replica count, all of my replica pods were spawned on the same node. Is there a way to distribute them across nodes?
sh-3.2# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
backend-6b647b59d4-hbfrp 1/1 Running 0 3h 100.96.3.3 node1
api-server-77765b4548-9xdql 1/1 Running 0 3h 100.96.3.1 node2
api-server-77765b4548-b6h5q 1/1 Running 0 3h 100.96.3.2 node2
api-server-77765b4548-cnhjk 1/1 Running 0 3h 100.96.3.5 node2
api-server-77765b4548-vrqdh 1/1 Running 0 3h 100.96.3.7 node2
api-db-85cdd9498c-tpqpw 1/1 Running 0 3h 100.96.3.8 node2
ui-server-84874d8cc-f26z2 1/1 Running 0 3h 100.96.3.4 node1
And when I stopped/terminated the AWS instance (node2), the pods stayed in Pending state instead of migrating to the available node. Can we specify that?
sh-3.2# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
backend-6b647b59d4-hbfrp 1/1 Running 0 3h 100.96.3.3 node1
api-server-77765b4548-9xdql 0/1 Pending 0 32s <none> <none>
api-server-77765b4548-b6h5q 0/1 Pending 0 32s <none> <none>
api-server-77765b4548-cnhjk 0/1 Pending 0 32s <none> <none>
api-server-77765b4548-vrqdh 0/1 Pending 0 32s <none> <none>
api-db-85cdd9498c-tpqpw 0/1 Pending 0 32s <none> <none>
ui-server-84874d8cc-f26z2 1/1 Running 0 3h 100.96.3.4 node1
How (if at all) does Galera enforce authentication for SST via rsync when adding a node?
I have to be missing something here.
It just hit me as I added a new node to my cluster in order to prepare for the removal of a different node: "How does the cluster know that it is okay to send the new node an SST?"
I am pretty sure that the only information the new node has about the cluster is the gcomm:// address. Surely that isn't looked at as "secure" information that passes for authentication. To my knowledge, no shell account on the new node has the same password as on the existing cluster nodes.
So what would prevent anyone from spinning up a new node and pointing its gcomm:// address at one or more of my nodes to just get an SST and be able to see all of my data?
Of course, certificates will be put in place. But I'm talking about a default setup and how things work "out of the box." I couldn't find much of anything that talked about this out there.
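For reference, the only authentication-related settings I know of in this area are the ones below (current values elided):

SHOW VARIABLES LIKE 'wsrep_cluster_address';  -- the gcomm:// address mentioned above
SHOW VARIABLES LIKE 'wsrep_sst_method';       -- 'rsync' in my case
SHOW VARIABLES LIKE 'wsrep_sst_auth';         -- as far as I can tell, not used by the rsync SST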
Am I going nuts?
Syncing data from primary member to secondary after election
Maybe this is kind of a stupid question, but I'm a novice at configuring replication and I need to clear something up before doing anything else. A moment ago I configured a replica set with 3 members in it. It's all working well, but I don't think I understand the part about copying data from the primary to a secondary member of the replica set after a new election.
Let's say I created one collection on my primary member, then killed the primary instance; an election was held and now I have a new instance as the primary member, for example on --port 27018. What happens to all the data on the first instance that I killed? Do I need to make a mongodump? Do I need to use some sync method for instant syncing, or read the oplog, and how do I do that? I thought that if I made a replSet and turned on all three connected instances, all of them would listen to the same oplog and update their databases after an election. Or am I completely wrong about that and clearly missing something?
Thank you!
MySQL Semi-synchronous replication with Multi-Master
Is it possible to use semi-synchronous replication with a Multi-Master setup?
I've tried to follow this guide to setup a semi-synchronous replication for a master-slave setup: https://avdeo.com/2015/02/02/semi-synchronous-replication-in-mysql/
But I'm not sure how to implement this on a Multi-Master setup.
There are two plugins: one for the master and one for the slave. Since a Multi-Master server acts as both a Master and a Slave, does that mean I have to install both plugins on all servers?
I'm using MySQL 5.7
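For reference, the standard semi-synchronous setup on a plain master/slave pair looks like this (plugin names as shipped with MySQL 5.7 on Linux; the library extension differs on Windows):

-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
-- On the slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
-- In a Multi-Master setup each server is both a master and a slave, so presumably
-- both plugins would need to be installed and enabled everywhere?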
Postgresql 10 replication mode error
I'm trying to set up a basic master-slave configuration using streaming replication for Postgres 10 and Docker.
Since the official Docker image provides a docker-entrypoint-initdb.d folder for placing initialization scripts, I thought it would be a swell idea to start placing my preparation code there.
What I'm trying to do is automate the way the database is restored before starting the slave in standby mode, so I run:
rm -rf /var/lib/postgresql/data/* && pg_basebackup 'host=postgres-master port=5432 user=foo password=foo' -D /var/lib/postgresql/data/
and this succeeds.
Then the server is shut down and restarted as per the Docker initialization script, which pops up a message saying:
database system identifier differs between the primary and standby
Now, I've been searching online for a while, and the only two explanations I've found are that I either have a misconfigured recovery.conf file, which looks like this:
standby_mode = 'on'
primary_conninfo = 'host=postgres-master port=5432 user=foo password=foo'
trigger_file = '/tmp/postgresql.trigger'
The connection string is the same one I used for the base backup.
The second explanation circulating is that the backup command could be messed up, but the only thing I add to the data folder after the backup is the recovery.conf file.
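For what it's worth, one check I can run on both the master and the freshly restored standby to compare them (pg_control_system() is available in Postgres 10):

-- The identifiers should match if the standby's data directory really came from
-- this master's pg_basebackup.
SELECT system_identifier FROM pg_control_system();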
Does anyone have any idea where I'm messing up?
Should I just go for repmgr and call it a day?
Thanks in advance
SQL Server doesn't accept server name
Today we tried to create a transactional replication on our SQL Server 2008, but when we try Configure Distribution or New Publication we can't do so, and an error message as shown below occurs. Also, I was connected to the database with no server name.
Then, when I try to connect again using the server name WIN-7SRKNSIF0BK from the error message, I get the error below. I thought it might be because of my Windows authentication, but I tried to connect with the sa user and had the same problem.
Now I realize my SQL Server version is shown as "PreRelease"; can this be the reason for the issues? I have no such problem with, for example, 2012, 2016, and other 2008 instances, and none of them is "PreRelease".
Microsoft SQL Server Management Studio 10.0.1600.22
((SQL_PreRelease).080709-1414)
Fast HDFS and Hive data replication
I'm considering data replication between clusters for 2 use cases:
- DR (so replication between 2 data centers)
- Sync between 2 production clusters
For the first one, I'd tend to think Falcon is the right option. But for the second one, I want to replicate data as soon as it is available (meaning at the end of a put for HDFS, and at the end of table creation for Hive). What would be your view on this?
How can I remove a channel from replication slave?
Today my question is about MySQL replication cleanup.
I used mysqldump with the --master-info --all-databases options and restored the dump to a new host to be used as a replication slave.
After restore, I see some artifacts of slave information from the master. This is the third host in a replication chain.
I issued reset slave for channel 'xxxxx'; which returned Query OK, 0 rows affected (0.00 sec). When I later query using show slave status for channel 'xxxxx'; I still see information for this replication channel appearing.
How can I clean up this replication channel so that it never accidentally starts, and also clean up the output of show slave status \G so that it only shows the intended replication channel?
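For reference, a sketch of the cleanup I have tried plus the variant I am considering (channel name elided as 'xxxxx'; the STOP SLAVE and RESET SLAVE ALL lines are ones I have not run yet):

STOP SLAVE FOR CHANNEL 'xxxxx';           -- make sure the channel cannot start
RESET SLAVE FOR CHANNEL 'xxxxx';          -- already tried: returns Query OK, 0 rows affected
SHOW SLAVE STATUS FOR CHANNEL 'xxxxx'\G   -- channel information still appears
RESET SLAVE ALL FOR CHANNEL 'xxxxx';      -- not tried: would this remove the channel entirely?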
Replication/Mirroring/etc from Azure SQL Database to On-premise SQL Server possible?
I have a table in an Azure SQL Database that I would like to have replicated/mirrored to our on-premise SQL Server.
So the on-prem SQL Server would have a copy of the table from Azure that is always up to date, available for read-only queries.
Is there a technology for this?
The reason I need this is that I need to join this Azure table to some on-premise tables (300K+ rows) in a query, and a linked server is not working very well for me, despite all the tricks and workarounds I have tried.
Regards,
How to use a Snapshot for Column Changes with Transactional Replication
I have a database setup to do Transactional Replication from an On-Premise Database running SQL Server 2014 to an Azure SQL Database. The replication is set to run continuously and also set to replicate Schema changes. It is a Push Replication. The replication works correctly.
The issue I have is that I now need to add a new column to the source table which I want to be replicated to the Azure database. I tried this in my development environment and it took approximately 3 hours to replicate the change because the table has over 1 million records. However, it would be ok for me to generate a new snapshot and deliver the data that way to initially get the column changes to Azure. The column changes infrequently so it will only cause a load on the initial sync. I've done this in the past and a snapshot will replicate in about 20 minutes.
I think generating a new snapshot to get the data up there will be the fastest approach to replicate the data, but I'm not sure what steps to take to ensure that my changes to the source table are not replicated as soon as they are applied. Below are the steps that I'd like to take, but I'm just not sure how to accomplish this so that I don't have to wait for the transactions to be replicated for my new snapshot to be applied.
- Add my new column to my source table
- Let Replication push my schema change to the destination table
- Disable transactional replication
- Apply updates to the source table column
- Re-initialize the subscription with a new snapshot
- Enable transactional replication
I'm not 100% sure if I truly need to disable transactional replication. Ultimately, I just want to ensure that I don't have to wait 3 hours to replicate the change because it maxes out the Azure SQL Database DTUs and makes it unusable. If I can go the snapshot approach then the 20 minute downtime is acceptable.
I've found the following articles which explain how to reinitialize subscriptions so I think I understand that. However, when I apply my changes prior to re-initializing my subscription, won't I be stuck waiting the 3 hours for those changes to replicate before the subscription would try to reinitialize? Or does the re-initialize stop any replication in progress at the subscriber?
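For the re-initialization step in the list above, this is the shape of what I have in mind based on those articles (publication and subscriber names are placeholders):

-- Run at the publisher: mark the subscription for re-initialization so that the next
-- Snapshot Agent run generates and delivers a fresh snapshot.
EXEC sp_reinitsubscription
    @publication = N'MyPublication',
    @article     = N'all',
    @subscriber  = N'MyAzureSubscriber';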