Channel: StackExchange Replication Questions
Viewing all 17268 articles

SQL Change tracking SYS_CHANGE_COLUMNS


We are running SQL 2008 R2 and have started exploring change tracking as our method for identifying changes to export to our data warehouse. We are only interested in specific columns.

We are identifying the changes on a replicated copy of the source database. If we query the change table on the source server, any specific column update is available and the SYS_CHANGE_COLUMNS is populated.

However on the replicated copy the changes are being tracked but the SYS_CHANGE_COLUMNS field is always NULL for an update change.

Track columns updated is set to true on the subscriber.

Is this due to the way replication works, i.e. it performs whole-row updates, and therefore you cannot get column-level changes on a subscriber?
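For reference, this is the kind of query involved; the column mask can be decoded with CHANGE_TRACKING_IS_COLUMN_IN_MASK, and the documentation notes that SYS_CHANGE_COLUMNS is NULL when TRACK_COLUMNS_UPDATED is OFF or when the change is an insert or delete. A minimal sketch with hypothetical table and column names (dbo.MyTable, MyColumn):

```sql
-- Hedged sketch; dbo.MyTable and MyColumn are placeholders for your schema.
DECLARE @last_sync bigint = 0;  -- last synchronized version

SELECT  CT.SYS_CHANGE_VERSION,
        CT.SYS_CHANGE_OPERATION,
        CT.SYS_CHANGE_COLUMNS,
        CHANGE_TRACKING_IS_COLUMN_IN_MASK(
            COLUMNPROPERTY(OBJECT_ID('dbo.MyTable'), 'MyColumn', 'ColumnId'),
            CT.SYS_CHANGE_COLUMNS) AS MyColumnChanged
FROM CHANGETABLE(CHANGES dbo.MyTable, @last_sync) AS CT;
```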

Any help or alternative approaches would be much appreciated.

Thanks


A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active


I have 2 servers connected over a low speed wan and we're running SQL Server 2008 with Merge replication.

At the subscriber, sometimes when attempting to insert new rows, I get this error:

A trigger returned a resultset and/or was running with SET NOCOUNT OFF while another outstanding result set was active.

  • My database doesn't have any triggers; the only triggers are the ones created by the merge replication
  • Also, whenever this error occurs it automatically rolls back the existing transaction
  • I am using DataTables and TableAdapters to insert and update the database using transactions

What I have checked:

  1. the database log file size is below 50Mb
  2. Checked the source code for Zombie transactions (since I wasn't able to retrieve the actual error at the beginning)
  3. Checked the connection between the two servers and found it congested

Questions:

  1. How can I avoid this behavior, and why is it occurring in the first place?
  2. Why is it cancelling the open transaction?
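As a first diagnostic, listing the triggers on the subscriber database can confirm that only the replication triggers exist and which ones fire on insert (the MSmerge_* prefixes below are the standard merge replication trigger names, not something specific to this post):

```sql
-- List table triggers on the subscriber database; merge replication creates
-- triggers named MSmerge_ins_/MSmerge_upd_/MSmerge_del_<GUID>.
SELECT  t.name AS trigger_name,
        OBJECT_NAME(t.parent_id) AS table_name,
        t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_class = 1   -- table (object) triggers only
ORDER BY table_name, trigger_name;
```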

Merge Replication: Updates to Schema


We have a main SQL Server which replicates to two backup servers via continuous merge replication.

There are regular updates to the schema. Regular changes include:

  1. Adding new tables and relationships with existing tables
  2. Changing the data type of columns on tables already replicated

What is the easiest way to maintain replication in such an environment?

If changes are made to the publisher, some changes cannot be synced to the subscriber, and the whole replication has to be set up again (agent + subscriptions).

These databases are too big to replicate in their entirety every time there is an update.
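One avenue worth checking (a sketch, not a guaranteed fix): merge publications created with @replicate_ddl = 1, the default on SQL Server 2005 and later, propagate many schema changes made with ALTER TABLE at the publisher without reinitializing. The publication and table names below are hypothetical:

```sql
-- With @replicate_ddl = 1 (the default), ALTER TABLE at the publisher
-- is propagated to subscribers on the next merge.
EXEC sp_addmergepublication
     @publication   = N'MyPublication',
     @replicate_ddl = 1;

-- Example schema change that replicates without a new snapshot:
ALTER TABLE dbo.ExistingTable ADD NewColumn int NULL;
```

Adding columns generally replicates this way; data type changes (ALTER COLUMN) are more restricted, so whether a given change avoids reinitialization is worth testing.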

Any advice will be appreciated.

Distribution clean up job in transactional replication removed records but not files


The Distribution clean up job ran without errors according to schedule, but I noticed that the snapshot files were not removed, even when they were created beyond the max_distretention period. Records from msrepl_commands and msrepl_transactions were removed, but the files were not.

  • immediate_sync = 1
  • max_distretention = 72 hours
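A known behavior worth ruling out (a hedged sketch, publication name hypothetical): with immediate_sync = 1 the snapshot is kept available for new subscribers for the whole retention period, so the cleanup job leaves the files alone. If no new subscribers need to initialize from an old snapshot, turning it off lets cleanup remove the files:

```sql
-- allow_anonymous must be off before immediate_sync can be turned off.
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'allow_anonymous',
     @value       = N'false';

EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'immediate_sync',
     @value       = N'false';
```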

Merge replication uninitialized subscription is expired or does not exist


I am trying to set up merge replication using web synchronization between a publishing SQL Server 2012 Standard instance and a subscribing SQL Server 2012 Express instance. After following the instructions provided at Technet, I am stuck on this:

Source:  Merge Process(Web Sync Server)
Number:  -2147200985
Message: The subscription to publication 'MyMergePublication' has expired or does not exist.

I already verified that the SSL certificates are good and that I can browse to the publishing machine's URL https://mycomputer/replisapi.dll and get the expected output. I already verified that the snapshot was set up, and I even took a giant hammer and used an administrator account to run the pool identity, which is really bad security-wise, but I wanted to validate that it was not security that was tripping me up.

To further the mystery, when I try and fail to sync, the publisher acknowledges that a new subscriber has been registered, but it cannot get the snapshot at all, and thus the subscriber database is still empty.

In the replication monitor, there is no failed synchronization history and there are no errors; all it has to say is that the subscriber is uninitialized, and no more.

Turning up the verbosity of the merge agent, I saw some SQL being executed; I tried running the same SQL myself and found that this call was failing with the same error:

{call sys.sp_MSgetreplicainfo(?,?,?,?,?,?,?,90)}

I called it with only the 3 mandatory parameters supplied and it would fail, despite the fact that a prior call to sp_helpmergepublication does return a row for that publication. Oddly, the output of sp_helpmergepublication does not match what I configured for the subscription (e.g. it says the web url is null, even though viewing the properties correctly shows the web url being set). Not sure if that is significant.

The body of sp_MSgetreplicainfo contains a call to another system stored procedure that I cannot run for some reason (it says not found), so I'm not sure what is actually going on here.

Any clues would be greatly appreciated.

Federated tables? Synchronization? Replication? [MySQL]


I have a REMOTE suppliers' table with, let's say, a code and a name, and several client databases will be creating new suppliers.

Server database

code  |  name
------+-------
1     |  Mark
2     |  John
3     |  Jodie

I thought that a federated table would fit best, as every client could create new suppliers in a centralized table which assigns incremental codes.

Obviously, if the Internet connection is down, clients won't be able to create new suppliers, so this won't work for me.

I thought that the best idea would be assigning a prefix to each client, and the code count would be assigned locally.

Local table client A

code   |  name
-------+------
A1     |  Mark
A2     |  Jodie

Local table client B

code  |  name
------+------
B1    |  John

Then, the data would be merged into a single remote table.

Remote merged table

code  |  name
------+---------
A1    |  Mark
B1    |  John
A2    |  Jodie

But at this point I am lost; I don't know if there is a solution for this problem, or whether I would need to merge them with a cron job + script.
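In the absence of a built-in merge scheduler, the cron + script idea can reduce to a single statement per client if each client also keeps a FEDERATED table pointing at the central suppliers table (all names below are hypothetical):

```sql
-- remote_suppliers: a FEDERATED table on the client, mapped to the central
-- suppliers table; code (e.g. 'A1', 'B1') is the primary key.
-- Run periodically (cron, or a MySQL scheduled event) to push missing rows.
INSERT INTO remote_suppliers (code, name)
SELECT s.code, s.name
FROM suppliers AS s
LEFT JOIN remote_suppliers AS r ON r.code = s.code
WHERE r.code IS NULL;
```

The LEFT JOIN against the FEDERATED table means a remote scan, so this only stays cheap while the table is small, and it still needs the connection to be up whenever it runs.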

Maybe a multiple-masters-to-one-slave setup would work?

Is there any way to schedule merge jobs?

Any idea would be appreciated.

SQL Server transactional replication with a subscriber trigger returning a value


I have transactional replication set up on SQL Server 2008 R2. The subscriber has an AFTER trigger on the published article that inserts into another table in the subscriber's database and returns the new value created in the trigger (the column is a persisted computed column). However, it seems we are not getting the new value back from the subscriber's trigger on the publisher after the insert occurs.

Is SQL Server able to return the value from the trigger, or, because replication is transactional and therefore one-way, can it not?

UPDATE (2018-02-09 many many years later)

So hopefully I can better explain this than my past self:

A publisher, on a different SQL Server than the subscriber, publishes a table called dbo.Inventory. The subscriber subscribes to this publication and now has dbo.Inventory on its server. We add an AFTER INSERT trigger on the published table, dbo.Inventory, on the subscriber's SQL Server:

CREATE TRIGGER [dbo].[aiInsertNewPk]
  ON [dbo].[Inventory] -- remember, this trigger is on the subscriber
  AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Serial TABLE (ColumnA CHAR(10));

    INSERT INTO [Subscriber].[DifferentTable] ( <...columns> )
    OUTPUT inserted.[Foobar] INTO @Serial (ColumnA)
    SELECT <...columns>
    FROM inserted
    WHERE <logic>;

    -- output this value so the C# application can grab it
    SELECT ColumnA FROM @Serial;
END

Our problem some years ago was that the trigger was not outputting the values from ColumnA (the last line of the trigger), so our C# application was always getting a null value back.

Not sure if that helps; honestly, I have not revisited this issue to see if the problem was something else entirely. I just thought I would add this update since you were kind enough to comment many years later.

Replication throws an error after failover when the mirror partner is down in SQL Server


I received the error below after failover, when the mirror partner is down.

Error :

to fail over to a database which is not configured for database mirroring. code: 22037, text: 'Invalid connection string attributeCannot open database "DBName" requested by the login. The login failed.Login failed for user 'domain\user'.The connection attempted to fail over to a database which is not configured for database mirroring.'.

Scenario :

  1. X is a Principal server and Y is a mirror partner of X
  2. X is a publisher too, Z is a distributor server
  3. Z is added as the distributor on the X and Y servers; X and Y are added as publishers on the Z server too
  4. Configured Y as a "PublisherFailoverPartner" in replication log reader agent profile setting
  5. Did failover of X
  6. Thereafter Y became a principal and X became mirror partner
  7. Break mirror from Y to X or X server is down and unavailable

After the above steps were performed, replication started to throw an error. When mirroring is ON, replication works; otherwise it raises the error mentioned above.

Can you please suggest steps to resolve this?


Inserts/updates are extremely slow with Merge Replication in SQL Server 2008


I got hired into a new company that had merge replication already in place. They have had enough issues with it that I have to evaluate whether to keep it or replace it. The main issue is that it takes many hours (over 12 hours in some cases) to insert or update a record. Honestly, that is just unacceptable.

As an example, I am trying to insert one record into a table (ComputerUsers) that links a laptop to a user. It has only two columns (one is called ComputerID and one is called UserID). There is a Computers table that stores the ComputerIDs and the computer name, and a Users table that stores the UserID and username.

All I am trying to do is insert one record into the ComputerUsers table. The last time I ran it, it took over 13 hours. This table is published and is part of the dynamic filter.

I have tried everything I have read online. I created two separate indexes for the UserID and ComputerID columns, since they are both included in separate joins in the filter. There is also a clustered primary key on the UserID and ComputerID columns, since the combination of the two is unique (a computer can be assigned to two users, or vice versa). I have also tried rebuilding the indexes and even created some suggested indexes on the merge replication tables.
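For concreteness, the two single-column indexes described above would look something like this (index names assumed):

```sql
CREATE NONCLUSTERED INDEX IX_ComputerUsers_UserID
    ON dbo.ComputerUsers (UserID);

CREATE NONCLUSTERED INDEX IX_ComputerUsers_ComputerID
    ON dbo.ComputerUsers (ComputerID);
```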

I am at my wit's end. What am I missing? I find it hard to believe this is normal for replication, because I doubt anyone would use it if it were. Thanks.

High tempdb disk I/O during merge replication of BLOBs


I have a merge publication for replicating BLOBs (type image) and am getting very high tempdb disk I/O for my size of data. The publication is download-only and has no filters.

The high disk I/O is caused by synchronization (when no subscribers are synchronizing, everything is OK) and is strongly correlated with the number of subscribers. It happens even when no data has changed at the Publisher between synchronizations, and that bothers me.

  • Size of replicated table: 7MB (total count of rows is about 100)
  • tempdb I/O : ~30 MB/sec for write (log and data files)
  • Number of subscribers: slightly over 100, each synchronizing every 30 minutes (more or less evenly).
  • Retention period set to 14 days

Using SQL Server 2008 at Publisher, SQL Server 2005-2008R2 at Subscribers. All subscribers use Web Synchronization.

Additionally, synchronization at the subscriber takes a long time, with multiple occurrences in replmerg.log like these:

DatabaseReconciler, 2015/04/21 12:13:40.348, 3604, 25088,  S2,  
INFO: [WEBSYNC_PROTOCOL]  
Sending client ReconcilerPhase WebSyncReconcilerPhase_RegularDownload     

DatabaseReconciler, 2015/04/21 12:13:47.063, 3604, 25194,  S2,  
INFO: [WEBSYNC_PROTOCOL]  
Received server ReconcilerPhase WebSyncReconcilerPhase_LastRegularDownload

Tried setting @stream_blob_columns on and off with no effect.

The question is: Is it a good idea to use merge replication to send these blobs to subscribers? We have other publications (though they have no BLOB columns) with a lot of data without tempdb problem. Is it an SQL Server flaw, or bad setup?

Publisher and Distributor are on the same instance, SQL Server 2008 SP4, cannot be sure about Subscribers, some of them maybe not up-to-date. Anyway, I can prepare a test environment with few subscribers having controlled versions, if it seems to help.

Confirmed that the excessive tempdb usage is caused by execution of sys.sp_MSenumgenerations90. Checked the MSmerge_genhistory table and found over 1.2 million records where pubid is null.

Found this conversation with a replication guru:

Executed sp_mergemetadataretentioncleanup with no effect.

Found a remark on a case like this (too many rows in MSmerge_genhistory) where deleting the rows where pubid is null and genstatus = 1 helped to fix replication.

Deleted the rows where pubid is null (implying that all subscribers are synchronized; those that are not will be reinitialized in "manual mode"), and disk I/O is back to normal again!

I have a feeling that this situation could be caused by the fact that all of my subscribers are anonymous via web sync and most of them have the same hostname. I'll check whether the -hostname switch helps avoid multiplying records in MSmerge_genhistory.

Write-lock a whole table during transaction


I need to perform a delicate operation on my table in which I will solely insert, delete and select upon all of my rows, and no God may interfere with the table during this operation: the table will be in an inconsistent state, and no concurrent MySQL session shall be allowed to modify it until I commit.

The use of SELECT ... FOR UPDATE | LOCK IN SHARE MODE is not suitable because, while it may potentially lock all the rows in the table, it won't prevent the insertion of further rows by a concurrent session. Basically, I need to LOCK TABLES my_table WRITE within the body of a transaction.

The table contains about 20,000 rows, and master-slave, mixed-format replication is in place over a slow connection, so for any workaround I'd prefer to avoid temporary tables, which might trip up the slave, and the amount of data dumped into the binlog should ideally be minimized.

The engine is InnoDB on MySQL 5.6, for both master and slave.
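For what it's worth, the pattern documented in the MySQL manual for mixing LOCK TABLES with transactional tables is to disable autocommit rather than use START TRANSACTION (which would implicitly release the table locks):

```sql
SET autocommit = 0;           -- do NOT use START TRANSACTION here:
LOCK TABLES my_table WRITE;   -- it would implicitly release these locks
-- ... inserts / deletes / selects on my_table ...
COMMIT;                       -- commit first,
UNLOCK TABLES;                -- then release the table lock
```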

How to manually invalidate a pull merge replication snapshot from the publisher


XY Problem background info:

I have a pull replication publisher on which I want to add or alter an index and have those changes applied to the subscriber.

The solution I would like to use is to generate a new snapshot and re-initialize the subscription.


My Question:

How do I, from the publisher, mark a pull merge replication publication as having an invalid snapshot, such that if I ran sp_helpmergepublication the snapshot_ready column would return 0?

Doing exec sp_reinitmergesubscription @upload_first = 'true' causes the subscriber to re-initialize, but it does not mark the snapshot as invalid.

I know I could change a publication or article property, then change it back, and cause the invalidation to happen that way, but I would really like invalidation to be the "primary action" rather than a side effect of some other action.

I am looking for something similar to transactional replication's sp_reinitsubscription procedure, which has an @invalidate_snapshot parameter, but I could not find the equivalent for merge replication.

Is there any way to invalidate a merge replication snapshot only without making some other kind of change that has snapshot invalidation as a side effect?
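For completeness, the side-effect route I am trying to avoid would look something like this: sp_changemergepublication takes a @force_invalidate_snapshot argument, so a benign property change can be made to carry the invalidation (publication name and property choice below are arbitrary, and whether a given property change actually flips snapshot_ready is worth verifying):

```sql
EXEC sp_changemergepublication
     @publication               = N'MyMergePublication',
     @property                  = N'description',
     @value                     = N'touched to invalidate snapshot',
     @force_invalidate_snapshot = 1,
     @force_reinit_subscription = 0;
```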

Indexed view doesn't exist on the subscriber


I am configuring transactional replication in SQL Server. The subscription is configured as push from 2008R2 publisher (distributor is the same server) to 2012 subscriber.

The object I want to replicate is an indexed view. The base tables exist only on the publisher.

The replication fails due to the following error:

Unable to replicate a view or function because the referenced objects or columns are not present on the Subscriber

It is true that the view doesn't exist in the subscription database. How can I create it without the base tables?

Any reason not to add 'rowversion' column?


The TechNet article, "Optimizing Microsoft Office Access Applications Linked to SQL Server", recommends adding a rowversion column to SQL Server tables linked from MS Access. From the Supporting Concurrency Checks section:

Office Access automatically detects when a table contains this type of column and uses it in the WHERE clause of all UPDATE and DELETE statements affecting that table. This is more efficient than verifying that all the other columns still have the same values they had when the dynaset was last refreshed.

The nice thing is that every UPDATE/DELETE statement and bound form will benefit from this addition without having to make any changes within MS Access (aside from re-linking the tables).

I was unaware of this feature until recently, and I am considering adding a named rowversion column to every table in SQL Server that I link to from MS Access. Are there any downsides I should be aware of before I do this?

Obviously there will be storage requirements and performance impacts, but I assume these will be negligible. Also, several of these tables are articles in a merge replication scenario; does that make a difference?
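For reference, the change itself is a one-liner per table (the column name is arbitrary; a table can have only one rowversion column, and the Access tables need re-linking afterwards):

```sql
ALTER TABLE dbo.MyTable ADD RowVer rowversion;
```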

MySQL - Enabling Scheduled Event on Master and Slave Simultaneously


This may seem a bit strange, but I am trying to get a Scheduled Event to execute on both Master and Slave.

I have two databases set up in a Master (A) to Master (B) replication environment.

Master A is READ_ONLY=OFF

Master B is READ_ONLY=ON

I create a user on both databases:

GRANT INSERT, EVENT ON test.* TO 'user'@'localhost' IDENTIFIED BY 'Password';

I then create my Event on Master A:

DROP EVENT `e_test`;
DELIMITER $$
CREATE DEFINER=`user`@`localhost`
EVENT `e_test`
ON SCHEDULE EVERY 1 MINUTE
STARTS NOW()
ON COMPLETION PRESERVE
ENABLE
COMMENT 'Adds a new row to test.tab1 every 1 minute'
DO
    BEGIN
        INSERT INTO test.tab1 (`fname`) VALUES (NOW());
    END;
$$
DELIMITER ;

So far so good. It executes every minute, and adds an entry to the table, which replicates to the other Database.

However, on Master B it is marked as SLAVESIDE_DISABLED, and so doesn't execute.

If I do:

ALTER DEFINER=user@localhost EVENT e_test ENABLE;

on Master B, it starts to execute on Master B, but on Master A it is now flagged as SLAVESIDE_DISABLED, and so doesn't execute.

If I then enable it on Master A, Master B becomes SLAVESIDE_DISABLED.

The reason for wanting this (in case you were wondering) is so that, as part of my failover script, I simply need to execute SET GLOBAL READ_ONLY = { ON | OFF } on each database accordingly, as opposed to having to enable/disable all my events (one command vs. many commands).

Under normal circumstances, on Master A (READ_ONLY=OFF) the events execute as normal and add the entry; on Master B (READ_ONLY=ON) the events execute but don't insert an entry, as they don't have permission.

I looked at using SET GLOBAL EVENT_SCHEDULER = { ON | OFF } as the one command, but if I set it to OFF as the default, we need to remember to enable it after each server restart; alternatively, if we set it to ON as the default, we need to remember to disable it after every server restart. The use of READ_ONLY seemed a better option, as it can easily be included in a failover script.

Any ideas?


MySQL slave server getting stopped after each replication request from Master


A basic master-slave MySQL configuration has been done on a Windows machine. The master and slave servers are running on localhost with different ports.

Now, when executing an update or insert on the master server, the slave server stops after that event. After restarting the slave server and checking, the update/insert has been successfully executed on the slave through the replication setup.

What could be the possible root cause of this issue?

Log of show slave status\G :

 *************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event
              Master_Host: 127.0.0.1
              Master_User: masteradmin
              Master_Port: 3307
            Connect_Retry: 60
          Master_Log_File: USERMAC38-bin.000007
      Read_Master_Log_Pos: 840
           Relay_Log_File: USERMAC38-relay-bin.000004
            Relay_Log_Pos: 290
    Relay_Master_Log_File: USERMAC38-bin.000007
         Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
          Replicate_Do_DB:
      Replicate_Ignore_DB:
       Replicate_Do_Table:
   Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
   Replicate_Wild_Ignore_Table:
               Last_Errno: 0
               Last_Error:
             Skip_Counter: 0
      Exec_Master_Log_Pos: 840
          Relay_Log_Space: 467
          Until_Condition: None
           Until_Log_File:
            Until_Log_Pos: 0
       Master_SSL_Allowed: No
       Master_SSL_CA_File:
       Master_SSL_CA_Path:
          Master_SSL_Cert:
        Master_SSL_Cipher:
           Master_SSL_Key:
    Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
            Last_IO_Errno: 0
            Last_IO_Error:
           Last_SQL_Errno: 0
           Last_SQL_Error:
Replicate_Ignore_Server_Ids:
         Master_Server_Id: 1
              Master_UUID: 63ac2f83-44ac-11e5-bafe-d43d7e3ca358
         Master_Info_File: mysql.slave_master_info
                SQL_Delay: 0
      SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
       Master_Retry_Count: 86400
              Master_Bind:
  Last_IO_Error_Timestamp:
 Last_SQL_Error_Timestamp:
           Master_SSL_Crl:
       Master_SSL_Crlpath:
       Retrieved_Gtid_Set:
        Executed_Gtid_Set:
            Auto_Position: 0

Error log of slave before it got stopped :

'CHANGE MASTER TO executed'. Previous state master_host='127.0.0.1', master_port= 3307, master_log_file='USERMAC38-bin.000008', master_log_pos= 123, master_bind=''. New state master_host='127.0.0.1', master_port= 3307, master_log_file='USERMAC38-bin.000013 [truncated, 295 bytes total]
Storing MySQL user name or password information in the master.info repository is not secure and is therefore not recommended. Please see the MySQL Manual for more about this issue and possible alternatives.
Slave I/O thread: connected to master 'masteradmin@127.0.0.1:3307',replication started in log 'USERMAC38-bin.000013' at position 498
Slave SQL thread initialized, starting replication in log 'USERMAC38-bin.000013' at position 498, relay log '.\USERMAC38-relay-bin.000001' position: 4

General log of slave before it got stopped :

150819 11:04:44    10 Query stop slave
150819 11:04:45     8 Query SHOW GLOBAL STATUS
150819 11:04:48     8 Query SHOW GLOBAL STATUS
150819 11:04:51     8 Query SHOW GLOBAL STATUS
10 Query    CHANGE MASTER TO MASTER_HOST = '127.0.0.1' MASTER_USER = 'masteradmin' MASTER_PASSWORD = <secret> MASTER_PORT = 3307 MASTER_LOG_FILE = 'USERMAC38-bin.000013' MASTER_LOG_POS = 498
150819 11:04:54     8 Query SHOW GLOBAL STATUS
150819 11:04:55    10 Query start slave
11 Connect Out  masteradmin@127.0.0.1:3307
150819 11:04:57     8 Query SHOW GLOBAL STATUS
150819 11:05:00     8 Query SHOW GLOBAL STATUS
150819 11:05:02    10 Query show slave status
150819 11:05:03     8 Query SHOW GLOBAL STATUS
150819 11:05:06     8 Query SHOW GLOBAL STATUS
150819 11:05:09     8 Query SHOW GLOBAL STATUS
150819 11:05:12     8 Query SHOW GLOBAL STATUS
150819 11:05:15     8 Query SHOW GLOBAL STATUS
150819 11:05:18     8 Query SHOW GLOBAL STATUS
150819 11:05:21     8 Query SHOW GLOBAL STATUS
150819 11:05:24     8 Query SHOW GLOBAL STATUS
150819 11:05:27     8 Query SHOW GLOBAL STATUS
150819 11:05:30     8 Query SHOW GLOBAL STATUS
150819 11:05:33     8 Query SHOW GLOBAL STATUS
150819 11:05:37     8 Query SHOW GLOBAL STATUS
150819 11:05:40     8 Query SHOW GLOBAL STATUS
150819 11:05:43     8 Query SHOW GLOBAL STATUS
150819 11:05:46     8 Query SHOW GLOBAL STATUS
150819 11:05:49     8 Query SHOW GLOBAL STATUS
150819 11:05:52     8 Query SHOW GLOBAL STATUS
150819 11:05:55     8 Query SHOW GLOBAL STATUS
150819 11:05:58     8 Query SHOW GLOBAL STATUS
150819 11:06:01     8 Query SHOW GLOBAL STATUS
150819 11:06:04     8 Query SHOW GLOBAL STATUS
150819 11:06:07     8 Query SHOW GLOBAL STATUS
150819 11:06:10     8 Query SHOW GLOBAL STATUS
150819 11:06:13     8 Query SHOW GLOBAL STATUS
150819 11:06:16     8 Query SHOW GLOBAL STATUS
150819 11:06:18    12 Query BEGIN
12 Query    COMMIT /* implicit, from Xid_log_event */
150819 11:06:19     8 Query SHOW GLOBAL STATUS

Error log after restarting slave :

You need to use --log-bin to make --log-slave-updates work.
You need to use --log-bin to make --binlog-format work.
Plugin 'FEDERATED' is disabled.
2015-08-19 12:11:26 150 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use Windows interlocked functions
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: CPU does not support crc32 instructions
InnoDB: Initializing buffer pool, size = 165.0M
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 8556085
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: Doing recovery: scanned up to log sequence number 8556558
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percent: 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
2015-08-19 12:11:27 1f64  InnoDB: Warning: table 'mysql/innodb_index_stats'
InnoDB: in InnoDB data dictionary has unknown flags 50.
2015-08-19 12:11:27 1f64  InnoDB: Warning: table 'mysql/innodb_table_stats'
InnoDB: in InnoDB data dictionary has unknown flags 50.
InnoDB: 1.2.10 started; log sequence number 8556558
Server hostname (bind-address): '127.0.0.1'; port: 3309
- '127.0.0.1' resolves to '127.0.0.1';
Server socket created on IP: '127.0.0.1'.
2015-08-19 12:11:27 150  InnoDB: Warning: table 'mysql/slave_worker_info'
InnoDB: in InnoDB data dictionary has unknown flags 50.
Recovery from master pos 2235 and file USERMAC38-bin.000013.
Storing MySQL user name or password information in the master.info repository is not secure and is therefore not recommended. Please see the MySQL Manual for more about this issue and possible alternatives.
Slave I/O thread: connected to master 'masteradmin@127.0.0.1:3307',replication started in log 'USERMAC38-bin.000013' at position 2235
Event Scheduler: Loaded 0 events
E:\2-Softwares\mysql-5.6.10-winx64\bin\mysqld.exe: ready for connections.
Version: '5.6.10-log'  socket: ''  port: 3309  MySQL Community Server (GPL)
Slave SQL thread initialized, starting replication in log 'USERMAC38-bin.000013' at position 2235, relay log '.\USERMAC38-relay-bin.000011' position: 4

Prevent replication of ALTER commands


I am using MariaDB 10.0 multi-source replication for a specific use case.

For security reasons, I would like to prevent DDL commands on the master (such as CREATE, ALTER, DROP, ...) from replicating, whatever user runs them (even root), but of course let SELECT, INSERT, UPDATE and DELETE commands replicate.

I do not want to use SET SQL_LOG_BIN=0|1 on the client side. In fact, I never want to replicate schema modifications.

In practice, I wish I could revoke ALTER permissions from my replication user (who currently has the REPLICATION SLAVE permission).

Is there a way to achieve this?

EDIT 2018-02-19

Since my requirements seem like nonsense to some readers, here is some additional information about the use case.

I created one (or more) MariaDB Proxy database(s) with tables using BLACKHOLE Storage Engine. So data is not stored on this proxy server, but binlogs are.

I have other MariaDB servers, running the same database schema but with the INNODB storage engine, that replicate data from the proxy server(s) using MariaDB multi-source replication.

On the proxy server, I can safely recreate, for example, a table schema with a CREATE OR REPLACE TABLE mytable (id int) ENGINE=BLACKHOLE statement as there is no data stored in it.

But this kind of statement MUST NOT run as-is on the "slaves" (which are not real slaves, as you noticed), as they must remain on their original storage engine and keep any other options they may have at the table level.

I can do this by issuing SET SQL_LOG_BIN=0 before executing my command, but I was looking for a way to make sure that I will not break the slaves in case I forget to do it.
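For reference, the manual workaround mentioned above is session-scoped and requires the SUPER privilege, so a forgotten SET means the DDL does replicate; that is exactly the risk in question:

```sql
SET SESSION sql_log_bin = 0;   -- this session's statements skip the binlog
CREATE OR REPLACE TABLE mytable (id int) ENGINE=BLACKHOLE;
SET SESSION sql_log_bin = 1;
```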

SQL Server distributor server move


I need to move my distributor to a brand new server without re-snapshotting. My scenario is similar to the one described at the link below.

http://www.sqlservercentral.com/Forums/Topic844095-291-1.aspx

Environment:

  • 1 publisher
  • 1 distributor
  • 4 subscribers
  • Replication type: transactional replication with updatable subscriptions

I want to know if someone has tried the solution given at the above link, or if you have an alternate solution.

Restoring a MySQL database to a failed master


I have a master-master configuration in MySQL with two servers. One server should stay live on the network to serve requests (call it server A) and the other should be taken offline to push new code changes (server B).

My original idea was that after running STOP SLAVE on both servers, server B could be shut down, updated, and even have a new database schema put in. After this, I thought that I could simply START SLAVE on server B and have the entire database from server A replicated/mirrored back over to server B. However, this is not the case: restarting the slave and doing a CHANGE MASTER TO (...) and syncing up the log files does not replicate old changes like I want it to; it only replicates new writes from that point on.

I am looking for a way to bring server B up to speed with the latest database from server A, and then have server B continue to replicate changes in a master-master setup. Then I can continue the sequence of server upgrades by doing the same process but keeping server B online only.

Any solution that requires locking the tables will not work, since I need to make this change without any downtime. Is this possible?

PostgreSQL Replications: Multiple Masters to a Single Slave


I have several PostgreSQL DBs in different geographical locations (local sites).

  • Each local site DB has the same schema but unique data. For example, take a table with columns Site_ID, Department_ID, Department_Name; Site_ID is unique for each site.
  • I want to collect all the data from the local site DBs into a centralised DB (PostgreSQL again) which acts as a data warehouse.
  • The corresponding example table in the centralised DB will have the same columns as above. All local site data will go into this table, each site's data designated by its Site_ID, of course.

Question: How to achieve this with PostgreSQL replication methods? (streaming/multi-master UDR/BDR/etc.)

Restriction: the local sites can make only outgoing network connections (i.e. no inbound connections, due to firewall restrictions).
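Given the outbound-only restriction, one non-replication workaround (a sketch, all names hypothetical) is for each site to push its rows out through postgres_fdw, since that connection originates at the local site:

```sql
-- On each local site database:
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER central_dw
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'warehouse.example.com', dbname 'dw', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER central_dw
    OPTIONS (user 'site_loader', password 'secret');

CREATE FOREIGN TABLE dw_departments (
    site_id         integer,
    department_id   integer,
    department_name text
) SERVER central_dw
  OPTIONS (schema_name 'public', table_name 'departments');

-- Periodic push of local rows (e.g. from cron):
INSERT INTO dw_departments
SELECT site_id, department_id, department_name
FROM departments;
```

By contrast, built-in streaming or logical replication would require the central server to open a connection to each local site publisher, which the firewall restriction rules out.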


