Channel: StackExchange Replication Questions

An exception (0xc0000005) occurred in the Distribution subsystem


I have transactional replication set up between two databases on two different servers. For the last few runs, replication has been failing with the following error message:

An exception (0xc0000005) occurred in the Distribution subsystem.

The problem goes away when the SQL Server service is restarted, but restarting the service every time is not an option. I checked the SQL Server log files and found the following message in the ERRORLOG file:

EXCEPTION_ACCESS_VIOLATION at 0x0FFFFFFF.

I can't find the reason behind this issue. Can anyone please help me solve it?


AWS Database Migration Service causing problem - SQL Server as Source


I have a problem using AWS Database Migration Service (DMS) to implement transactional replication with SQL Server as the source database engine; any help is highly appreciated.

The 'safeguardPolicy' connection attribute defaults to 'RELY_ON_SQL_SERVER_REPLICATION_AGENT'. The tool then mimics a transaction in the database to prevent the log from being reused, so that it can read as many changes as possible from the active log.

But what is the intended behavior of these safeguard transactions? Will those sessions be stopped at some point? What is the mechanism that starts such a transaction, runs it for some time, and then stops it?

The production databases I manage are in FULL recovery mode, with log backups every half hour. The log grows to an enormous size because log truncation cannot succeed while the safeguard transactions initiated by the DMS tool are open.

For now, the only remedy for a transaction log filling up due to LOG_SCAN caused by this DMS behavior is to stop the DMS tasks and manually truncate the log to release unused space. But that is no solution at all if we have to stop the replication every time the problem occurs, knowing that it will occur often.
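For reference, this is what I run to see what is holding up log truncation and which sessions have long-open transactions; it is a generic check using standard DMVs, nothing DMS-specific (the database name is a placeholder):

-- What is currently blocking log reuse for the database?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyProductionDb';   -- placeholder name

-- Sessions with open transactions that may be pinning the log
SELECT s.session_id, s.login_name, s.program_name, t.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS t ON st.transaction_id = t.transaction_id
JOIN sys.dm_exec_sessions AS s ON st.session_id = s.session_id
ORDER BY t.transaction_begin_time;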

Please share some internals about the tool if possible.

Thanks

Transactional Replication failing between SQL Server 2012 and SQL Server 2016 versions


As per the Microsoft documentation - A Subscriber to a transactional publication can be any version within two versions of the Publisher version. For example: a SQL Server 2012 (11.x) Publisher can have SQL Server 2014 (12.x) and SQL Server 2016 (13.x) Subscribers; and a SQL Server 2016 (13.x) Publisher can have SQL Server 2014 (12.x) and SQL Server 2012 (11.x) Subscribers.

But the subscription I am trying to create from Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64) to Microsoft SQL Server 2016 (SP2-CU12) (KB4536648) - 13.0.5698.0 (X64) is failing, and here is the error message I receive: "The selected subscriber does not satisfy the minimum version compatibility level of the selected publication."

Are these versions not compatible?
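For reference, this is how I confirm the builds involved and inspect the publication's settings; a minimal check, assuming the publication is named 'MyPublication' (placeholder):

-- Run on both Publisher and Subscriber to confirm the exact builds
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;

-- On the Publisher, in the publication database: review the publication properties,
-- including its backward compatibility level
EXEC sp_helppublication @publication = N'MyPublication';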

I am creating transactional replication in SQL Server 2017 but the job is not running


I have created transactional replication and here are my scripts:

Distribution script:

use master
exec sp_adddistributor @distributor = N'servername', @password = N''
GO

exec sp_adddistributiondb @database = N'distribution', 
            @data_folder = N'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Data', 
            @log_folder = N'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Data', 
            @log_file_size = 2, 
            @min_distretention = 0, @max_distretention = 72, 
            @history_retention = 48,
            @deletebatchsize_xact = 5000, @deletebatchsize_cmd = 2000, 
            @security_mode = 1
GO

use [distribution] 

if (not exists (select * from sysobjects 
                where name = 'UIProperties' and type = 'U ')) 
    create table UIProperties(id int) 

if (exists (select * from ::fn_listextendedproperty('SnapshotFolder', 'user', 'dbo', 'table', 'UIProperties', null, null))) 
    EXEC sp_updateextendedproperty N'SnapshotFolder', N'E:\snapshot\ReplData', 'user', dbo, 'table', 'UIProperties' 
else 
    EXEC sp_addextendedproperty N'SnapshotFolder', N'E:\snapshot\ReplData', 'user', dbo, 'table', 'UIProperties'
GO

exec sp_adddistpublisher @publisher = N'servername', 
                         @distribution_db = N'distribution', 
                         @security_mode = 1

Publication script

use [databasename]
exec sp_replicationdboption @dbname = N'databasename', @optname = N'publish', @value = N'true'
GO

use [databasename]
exec [databasename].sys.sp_addlogreader_agent @job_login = N'pcusername',
           @job_password = null, -- pc does not have any password
           @publisher_security_mode = 1, 
           @job_name = null
GO

-- Adding the transactional publication
use [databasename]

exec sp_addpublication @publication = N'Basketball17Pub',
           @description = N'Transactional publication of database ''databasename'' from Publisher ''servername''.', 
           @sync_method = N'concurrent', 
           @retention = 0,
           @allow_push = N'true', @allow_pull = N'true', 
           @allow_anonymous = N'true', @enabled_for_internet = N'false', 
           @snapshot_in_defaultfolder = N'true', 
           @compress_snapshot = N'false', 
           @ftp_port = 21, @ftp_login = N'anonymous', 
           @allow_subscription_copy = N'false', 
           @add_to_active_directory = N'false', 
           @repl_freq = N'continuous', @status = N'active', 
           @independent_agent = N'true', @immediate_sync = N'true', 
           @allow_sync_tran = N'false', 
           @autogen_sync_procs = N'false', 
           @allow_queued_tran = N'false', @allow_dts = N'false',
           @replicate_ddl = 1, @allow_initialize_from_backup = N'false', 
           @enabled_for_p2p = N'false', 
           @enabled_for_het_sub = N'false'
GO

I used one machine for both the Distributor and the Publisher, and no network path is used for the snapshot folder.

But the job is not running - what am I missing? Any suggestions?
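For comparison, these are the steps that normally follow sp_addpublication when a publication is scripted out; if none of them were run, there is no Snapshot Agent job or article to work with. This is only a sketch, and the article/table name is a placeholder:

-- Add the Snapshot Agent job for the publication
exec sp_addpublication_snapshot @publication = N'Basketball17Pub',
           @frequency_type = 1,              -- run on demand
           @publisher_security_mode = 1      -- Windows authentication
GO

-- Add at least one article to the publication (placeholder table name)
exec sp_addarticle @publication = N'Basketball17Pub',
           @article = N'MyTable',
           @source_owner = N'dbo',
           @source_object = N'MyTable',
           @type = N'logbased'
GO

-- Generate the initial snapshot
exec sp_startpublication_snapshot @publication = N'Basketball17Pub'
GO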

Distribution agent job is throwing an error msg as "The parameter is incorrect. The step failed"


I have ongoing transactional replication configured for my database, and it was working perfectly fine until two days ago. Since then no data has been replicated, and when I check the history of the Distribution Agent job it shows the error message "The parameter is incorrect. The step failed." and then the Distribution Agent just stops after 5 minutes. I can't figure out how a perfectly running replication can throw such a message with no further details.

Can anybody please help me with this problem?
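For anyone who wants more detail than the job history shows: these are the distribution database tables I query to look for recorded agent errors. A rough sketch, assuming the distribution database is named 'distribution':

-- Recent Distribution Agent history, including any error ids
SELECT TOP (20) h.[time], h.runstatus, h.comments, h.error_id
FROM distribution.dbo.MSdistribution_history AS h
ORDER BY h.[time] DESC;

-- Recent replication errors with their full text
SELECT TOP (20) e.[time], e.error_code, e.error_text
FROM distribution.dbo.MSrepl_errors AS e
ORDER BY e.[time] DESC;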

Is it safe to turn on Delayed Durability on a transaction replication subscriber?


In a transaction replication environment with a publisher SQL Server that receives frequent inserts and updates from an application and a subscriber SQL Server with pull replication jobs, is it safe to enable delayed durability on the subscriber?

Microsoft says that delayed durability is not supported for transaction replication, but it is unclear whether that applies to any server involved in the replication or just to the publisher.

While there is always risk in turning on delayed durability, is there any added risk in turning it on for a replication subscriber? If it is unsupported or there are added risks, is there a way to reduce the WRITELOG waits on the subscriber? The subscriber is a reporting server and its number one wait is always WRITELOG due to the frequent inserts and updates occurring on the publisher from the application (45.3 hours of WRITELOG wait in 345.1 hours of uptime).
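For context, this is the database-level change I am considering on the subscriber; the database name is a placeholder, and the open question is whether it is safe under replication:

-- On the subscriber (reporting) database.
-- FORCED applies delayed durability to all transactions;
-- ALLOWED would leave it to individual transactions to opt in.
ALTER DATABASE [ReportingSubscriberDb]
SET DELAYED_DURABILITY = FORCED;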

Subscription stored procedures not updating with DDL changes


We are updating our publication database (SQL Server 2012, 2014, 2016).

Transactional replication with push subscription up and running.

Turn off synchronization.

Run our upgrade script.

  1. Update some table schemas (add/delete columns) on publisher.
  2. Add some of the new columns to corresponding tables on subscriber that impact views on subscriber.
  3. Drop/create views on subscriber that use new columns.

Create new snapshot of changed publication.

Re-start synchronization.

Sometimes we'll get an error that the column already exists on the subscriber. I assume it's because we've added the column with our code so we can recreate views that use the new column.

Any insight into this would be helpful.

My main question is about the replication stored procedures (the sp_MSupd... and sp_MSins... procedures) on the subscriber. They don't seem to be automatically recreated when a DDL change is sent over; they are missing columns that were added, or still contain columns that were deleted, during the upgrade.

Is there a command that needs to be run? I've tried sp_refreshsubscriptions but that didn't help. Any help is appreciated.
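One thing I am currently testing (so treat it as a sketch rather than a confirmed fix) is regenerating the custom procedures at the publisher and applying the resulting script at the subscriber:

-- At the Publisher, in the publication database: script out the custom
-- insert/update/delete procedures for the publication (placeholder name)
EXEC sp_scriptpublicationcustomprocs @publication = N'MyPublication';

-- The output is a T-SQL script; run it against the subscription database
-- to recreate the sp_MSins/sp_MSupd/sp_MSdel procedures there.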

Default expression support in GTID based replication


mysql Ver 8.0.15 for Linux on x86_64 (MySQL Community Server - GPL)

I'm using InnoDB as storage engine

I'm executing an ALTER TABLE ... ADD COLUMN statement with a DEFAULT clause. For instance:


ALTER TABLE testTable0 ADD COLUMN IP VARCHAR(20) NOT NULL default('127.0.0.1');

Since I have GTID-based replication, I keep running into the error:

[HY000][3775] Statement violates GTID consistency: ALTER TABLE ... ADD COLUMN .. with expression as DEFAULT.

On another host where replication is not enabled, this works. Can someone please explain this?
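If the default really is just a literal, as in my example, I believe the statement can be rewritten without the parenthesized expression syntax so it is treated as a plain literal default rather than an expression default; I have not verified that this covers every case:

-- Literal DEFAULT (no parentheses) instead of an expression DEFAULT
ALTER TABLE testTable0
  ADD COLUMN IP VARCHAR(20) NOT NULL DEFAULT '127.0.0.1';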


Need to replicate stored procedure changes, but not new columns in SQL Server transactional replication


I have a couple of SQL Server 2016 machines, using transactional replication to replicate a portion of the database to a remote server. It is critical that we control what columns are replicated to prevent accidental exposure of certain data.

We have selected only the appropriate tables, views and columns for replication. But we found that as new columns were added to these tables/views, they were automatically included in the replication. I was able to disable the replicate schema changes option in the publication. Solved that issue.

This seems to have caused a new issue. We do need to replicate changes made to stored procedures. After disabling replication of schema changes, the data is in sync, but stored procedure changes are not replicated.

So, any thoughts on how to work around this? Can we default it to not replicate new columns even with replicate schema changes enabled? Or is there some other way to get it to replicate stored procedure changes without enabling replicate schema changes?
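One workaround I am considering, not yet confirmed as good practice, is to leave schema-change replication off day to day and toggle it on only around controlled stored procedure deployments; the publication name below is a placeholder:

-- Temporarily allow DDL replication while deploying procedure changes
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'replicate_ddl',
     @value       = 1;

-- ... run the ALTER PROCEDURE deployments here ...

-- Switch it back off so newly added table columns are not picked up automatically
EXEC sp_changepublication
     @publication = N'MyPublication',
     @property    = N'replicate_ddl',
     @value       = 0;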

Not able to add distributor while configuring transactional replication


I need to set up transactional replication; the publisher is SQL Server 2012 Enterprise edition and the distributor is SQL Server 2017 Standard edition. When I try to connect to the distributor from the distributor configuration wizard, it fails saying: "SQL server could not retrieve information about server 'SERVER NAME', Could not find stored procedure sp_MSreplcheck_qv Error: 2812". However, we have been able to add this distributor for other publication servers. I found a site where people mentioned a solution for this, but it doesn't work for me. I have been searching Google but have not been able to find a solution. Any help would be highly appreciated.
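For completeness, this is the sanity check I ran on both servers before digging further; nothing here is specific to the error, it just rules out an obvious build or configuration mismatch:

-- Run on both the Publisher (2012) and the intended Distributor (2017)
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;

-- On the intended Distributor: confirm distribution is actually configured there
EXEC sp_helpdistributor;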

HBase WAL replication - Is WAL replication from cluster with HBase 1.1.2 to cluster with HBase 2.0.2 supported?


Is HBase WAL replication from a cluster running HBase 1.1.2 to a cluster running HBase 2.0.2 supported? I tried a simple test where the following table was created in both clusters:

create 'repl_test', { NAME => 'cf1', REPLICATION_SCOPE => '1'}

When I issued enable_table_replication 'repl_test' to enable replication, I got the following error message, which doesn't seem correct given that I used exactly the same create statement shown above to create the table in both clusters.

ERROR: Table repl_test exists in peer cluster 1, but the table descriptors are not same when comapred with source cluster. Thus can not enable the table's replication switch.

MySQL Master - Slave replication stops after 20min with no errors


I'm using MariaDB 10.4 in a standard master slave setup.

Everything works fine for about 20min, I can see changes replicating across but then after about 20min replication just stops.

However, if I restart the slave MySQL instance, replication begins again and catches up.

Restarting the master has no effect on the replication it stays broken.

When in the broken non-replicating state this is the output from the slave and master.

[Screenshots of the slave and master status output were attached here.]

So it looks like the slave thinks it's caught up, but it hasn't, no matter how long I leave it.

I've used netcat to test the connection and I know it can see the master still, plus when I restart MySQL it works right away.

The my.cnf config has not been changed apart from adding the server ID, and this is how I configured the slave initially: CHANGE MASTER TO MASTER_HOST='dynamicip.xxxxxxx.org', MASTER_PORT = 4302, MASTER_USER='remote_replication', MASTER_PASSWORD='xxxxxxxxxx', MASTER_LOG_FILE='mysql-bin.000006', MASTER_LOG_POS=328;

The error logs contain no errors also.

Where would I begin to troubleshoot this?
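Not an answer, but here is where I would start looking myself: a connection through NAT or dynamic DNS can drop silently, and the slave I/O thread only notices once slave_net_timeout expires, which can make the slave look caught up while it is actually receiving nothing. A sketch of what to inspect and adjust (the values are examples, not recommendations):

-- On the slave: how long the I/O thread waits before deciding the connection is dead
SHOW GLOBAL VARIABLES LIKE 'slave_net_timeout';

-- Lower it so a silently dropped connection is detected and retried sooner
SET GLOBAL slave_net_timeout = 60;

-- Optionally set an explicit heartbeat when (re)configuring the slave
STOP SLAVE;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 30;
START SLAVE;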

Issue with PERSISTED computed column replication: value comes through as NULL at the Subscriber


Hi, I have a publisher table and a subscriber table with the same schema, except that the publisher table has one computed column which returns a varchar, like the sample below:

CREATE FUNCTION [cimfn_FormPartition](@pKeyVal BIGINT)
RETURNS VARCHAR(100)
WITH SCHEMABINDING
AS
BEGIN    
       [LOGIC]       
RETURN @Output
END
GO

Publisher End:

create table table1
(
  [pKey] [bigint] IDENTITY (1, 1) NOT NULL ,
  [cKey] AS dbo.cimfn_FormPartition(pKey) PERSISTED,
  [id] [varchar] (20) NULL ,
  [refId] [varchar] (20) NULL ,
  constraint [pKey] PRIMARY KEY  NONCLUSTERED
  ( pKey ASC)
)

ALTER TABLE [table1] ADD CONSTRAINT [CI_Items] UNIQUE CLUSTERED ([cKey]) ON [PRIMARY]

Subscriber End:

create table table1
(
  [pKey] [bigint] NOT NULL ,
  [cKey] [varchar] (100) NULL,
  [id] [varchar] (20) NULL ,
  [refId] [varchar] (20) NULL ,
  constraint [pKey] PRIMARY KEY  NONCLUSTERED
  ( pKey ASC)
)

ALTER TABLE [table1] ADD CONSTRAINT [CI_Items] UNIQUE CLUSTERED ([cKey]) ON [PRIMARY]

One of my requirements is that I don't want any computation at the subscriber end; I just want to receive the value of the computed column from the publisher like any other column.

In the table article properties I have set everything to false, and set 'Action if name is in use' to 'Keep existing object unchanged'.

The output I am getting in the subscription table is:

pKey  cKey  id     refId
1     NULL  Item1  i1

Even with the article properties set as above, the subscription table gets a NULL value in cKey while all other columns have the correct values. I have checked Replication Monitor as well, and no error shows up there. I drilled down further with the stored procedure sp_browsereplcmds and found that the insert command does not include the computed column. So the question is: why is the Distributor not picking up the computed column from the publisher?
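The workaround I am currently testing (which does re-introduce computation at the subscriber, so it only half meets my requirement) is to create the same schema-bound function on the subscriber and redefine cKey there as the same persisted computed column, so it is populated locally instead of arriving as NULL. A sketch, assuming dbo.cimfn_FormPartition has already been created on the subscriber with the same definition as on the publisher:

-- On the subscriber: replace the plain varchar column with the computed column
ALTER TABLE table1 DROP CONSTRAINT [CI_Items];
ALTER TABLE table1 DROP COLUMN [cKey];
ALTER TABLE table1 ADD [cKey] AS dbo.cimfn_FormPartition(pKey) PERSISTED;
ALTER TABLE table1 ADD CONSTRAINT [CI_Items] UNIQUE CLUSTERED ([cKey]) ON [PRIMARY];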

How to ensure Postgresql Primary and target database are identical


I am using PostgreSQL logical replication to stream changes from my primary database. How do I ensure that the data on the primary and on the target are identical, without firing any triggers when they are not needed?
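A minimal spot check I have used is to run the same count-plus-checksum query on both sides and compare the results; it assumes replication has caught up, and that the table (here mytable, with primary key id, both placeholders) fits this pattern:

-- Row count plus an order-dependent checksum of the whole table
SELECT count(*)                                    AS row_count,
       md5(string_agg(t::text, ',' ORDER BY t.id)) AS table_checksum
FROM   mytable AS t;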

GoldenGate SPECIALRUN abending with [OGG-02419] Missing checkpoint file name


I have GG working fine processing updates from source to target. However, I am getting a strange error when trying to do an initial load. I have the extract file from the source and I've set the parameter file to pick it up. I've also added the replicat with ADD REPLICAT rlcosmos, SPECIALRUN, EXTFILE dirdat\ld000000. When I try to run it using START myreplicat, I get an error of "OGG-02419 Missing checkpoint file name". My understanding was that checkpoints were not needed for SPECIALRUN, so I'm a bit confused by the error, since I can see that the checkpoint file is there. Any ideas?

[EDIT], I tried modifying the configuration file (as PRM FILE2 below) and running as a normal replicat, ADD REPLICAT rlcosmos, EXTFILE dirdat\ld000000. The process works that way.

REPLICAT DETAIL

GGSCI (xdaz002092) 52> INFO REPLICAT rlcosmos, DETAIL

REPLICAT   RLCOSMOS  Initialized   2020-07-19 15:12   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:50:23 ago)
Log Read Checkpoint  File dirdat\ld000000
                     First Record  RBA 0
  Extract Source                          Begin             End

  dirdat\ld000000                         * Initialized *   First Record


Current directory    C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1

Report file          C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\dirrpt\RLCOSMOS.rpt
Parameter file       C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\dirprm\RLCOSMOS.prm
Checkpoint file      C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\dirchk\RLCOSMOS.cpr
Checkpoint table
Process file
Error log            C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\ggserr.log

LOG FILE:

***********************************************************************
                    Oracle GoldenGate for Big Data
                    Version 19.1.0.0.1 (Build 003)

                      Oracle GoldenGate Delivery
  Version 19.1.0.0.2 OGGCORE_OGGADP.19.1.0.0.2_PLATFORMS_190916.0039
       Windows x64 (optimized), Generic on Sep 16 2019 05:33:49

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.
                    Starting at 2020-07-19 14:52:55
***********************************************************************

Operating System Version:
Microsoft Windows 10, on x64
Version 10.0 (Build 19041)

Process id: 30828

Description:
***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
2020-07-19 14:52:55  INFO    OGG-03059  Operating system character set identified as windows-1252.
2020-07-19 14:52:55  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.
2020-07-19 14:52:55  INFO    OGG-01360  REPLICAT is running in Special Run mode.

Source Context :
  SourceModule            : [ggapp.checkpt]
  SourceID                : [../gglib/ggapp/checkpt.c]
  SourceMethod            : [chkpt_context_t::openCheckpointFile]
  SourceLine              : [699]
  ThreadBacktrace         : [14] elements
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\gglog.dll(??1CContextItem@@UEAA@XZ)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\gglog.dll(?CreateMessage@CMessageFactory@@QEAAPEAVCMessage@@PEAVCSourceContext@@IZZ)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\gglog.dll(?_MSG_@@YAPEAVCMessage@@PEAVCSourceContext@@HW4MessageDisposition@CMessageFactory@@@Z)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(ERCALLBACK)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(ERCALLBACK)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(ERCALLBACK)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(ERCALLBACK)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(_ggTryDebugHook)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(_ggTryDebugHook)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(_ggTryDebugHook)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(ERCALLBACK)]
                          : [C:\oracle\product\19\OGG_BigData_Windows_x64_19.1.0.0.1\replicat.exe(CommonLexerNewSSD)]
                          : [C:\WINDOWS\System32\KERNEL32.DLL(BaseThreadInitThunk)]
                          : [C:\WINDOWS\SYSTEM32\ntdll.dll(RtlUserThreadStart)]
 
2020-07-19 14:52:55  ERROR   OGG-02419  Missing checkpoint file name.

2020-07-19 14:52:55  ERROR   OGG-01668  PROCESS ABENDING.

PRM FILE:

SPECIALRUN
DISCARDFILE dirrpt\rlcosmos.dsc, purge
ASSUMETARGETDEFS
TARGETDB LIBFILE ggjava.dll SET property=dirprm\cosmos.props
MAP MYPDB.HR.*, TARGET MYPDB.HR.*;
TABLEEXCLUDE MYPDB.HR.EMP_DETAILS_VIEW;
END RUNTIME

PRM FILE2:

--SPECIALRUN
REPLICAT rlcosmos
DISCARDFILE dirrpt\rlcosmos.dsc, purge
ASSUMETARGETDEFS
TARGETDB LIBFILE ggjava.dll SET property=dirprm\cosmos.props
MAP MYPDB.HR.*, TARGET MYPDB.HR.*;
TABLEEXCLUDE MYPDB.HR.EMP_DETAILS_VIEW;
--END RUNTIME

Using Cookies to Replicate Shopify Store for Affiliate Sales Tracking


Good Day Everyone!

I'm working on a project for a client who wants to track affiliate sales on Shopify. The problem is, they are not using any of the generic affiliate management solutions available for Shopify, but their own in-house management system.

How could someone theoretically create replicated URLs so that whenever a customer uses such a URL to purchase anything on the store, the purchase gets tracked to the affiliate, the order gets added to their sales, and they can eventually be paid out on it?

I know that this would probably have something to do with cookies, affiliate and orders database and session tracking but I don't know how all these pieces fit together.

Any suggestions on how to implement this? If anyone already has done anything similar.

Postgresql Replication fail over scenario - not able to bring back old primary as slave


I am facing an issue while trying to turn the old primary into a standby after the first failover.

The first time, the slave switched over to master, but when the old master comes back up it still acts as a primary.

I am using repmgr.
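For what it's worth, this is how I check which role each node currently thinks it has, independent of repmgr:

-- Run on each node: returns false on a primary, true on a standby in recovery
SELECT pg_is_in_recovery();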

Can you insert data into a Redis replica? Why or why not?


I have a redis-server instance that is a replica of a primary redis-server instance. Also I have a python3 script that uses the redis library to query the replica instance.

However, this script also tries to insert data using SET. I'm not sure whether the insert succeeds or not.

What happens when you use SET on a replica? My understanding is that a replica is supposed to replicate the data of the redis-server instance, so I can only imagine three possible behaviors

  1. Does it pass your SET command up to the primary redis-server instance?
  2. Does it keep your data but only locally?
  3. Does it just ignore your SET command?

I can't find any documentation on this question, and it seems like all three behaviors would be reasonable. If you know what the behavior is, can you please explain why that is the case as opposed to the other two cases?

mariadb replication (master / slave) - slave lagging behind due to long execution on delete query


I have setup a pretty classic MariaDB 10.4.13 replication GTID setup with two servers (writable master and read only slave).

For some time I have noticed inconsistencies in some of my application SELECT queries routed to the slave. After some troubleshooting I have seen the slave's "Seconds_Behind_Master" value grow up to 10,000 seconds (!).

By doing a SHOW PROCESSLIST on the slave I noticed long queries like :

 11 | system user | | NULL | Slave_SQL | 14 | Delete_rows_log_event :: find_row (-1) | DELETE FROM `mytable` WHERE `id` = 5580

and each one takes 20+ seconds to execute (!), so they accumulate and cause the lag...

The same delete queries launched on the master are instantaneous (0.031 sec). Moreover, the slave hardware is more powerful than the master's (4-core CPU vs 2-core CPU), and the load average / CPU usage on the slave is very low.

I have already tried increasing "slave_parallel_threads" to the number of slave CPUs (4) as explained here, but without any benefit.

Any clue on how to fix this or improve replication performance to keep master/slave in sync ?
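Not a definitive answer, but Delete_rows_log_event::find_row spending 20+ seconds per row usually means the slave is scanning the table to locate each row to delete. The first thing I would verify is that the table actually has a primary key (or at least a unique index on id) on the slave; the table and column names below are from my example:

-- On the slave: does mytable have a primary key or an index on id?
SHOW CREATE TABLE mytable;
SHOW INDEX FROM mytable;

-- If not, adding one (assuming id is unique) lets row-based events locate rows directly
ALTER TABLE mytable ADD PRIMARY KEY (id);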

Custom Subdomains and Shopify


Good Day Everyone!

I'm making a Shopify store that has a 3rd-party backend for affiliate management. The integration is all done manually, and for now I've been using the "ref" method in URLs to generate affiliate URLs, using them to track conversions once someone purchases something from the store, and paying out commissions.

What if this had to be converted into something like alias.url.com or url.com/alias and still track the affiliates?

How would that work? What sort of issues should be expected? If someone has already done this, I'd love some advice on this implementation.

Kind Regards and Best Wishes!


