Channel: StackExchange Replication Questions

londiste ERROR Node 'slave_IP' already exists


Yesterday I used londiste for logical replication between PostgreSQL 9.3 and 9.5.

Today I tried to use londiste again, but I had not deleted the node and schema after yesterday's run.

List installed packages

yum list skytools*
Installed Packages
skytools-95.x86_64           3.2.6-1.rhel7
skytools-95-modules.x86_64   3.2.6-1.rhel7

Config master

cat /etc/skytools/londiste-master.ini
[londiste3]
job_name = appqueue
db = dbname=my_database host=master_IP
queue_name = appqueue
logfile = /var/log/skytools/master.log
pidfile = /var/run/skytools/master.pid

Config Slave

cat /etc/skytools/londiste-slave.ini 
[londiste3]
job_name = appqueue
db = dbname=my_database
queue_name = appqueue
logfile = /var/log/skytools/slave.log
pidfile = /var/run/skytools/slave.pid

Install schema

qadmin -h master_IP -U postgres -d my_database -c "install londiste"
INSTALL

Copy DB

pg_dump -h master_IP -s -C -U postgres my_database |psql -U postgres

create-root master_IP

su postgres -c "londiste3 /etc/skytools/londiste-master.ini create-root master_IP 'dbname=my_database host=master_IP'"
2017-10-27 08:36:16,250 20249 INFO plpgsql is installed
2017-10-27 08:36:16,250 20249 INFO pgq is installed
2017-10-27 08:36:16,252 20249 INFO pgq.get_batch_cursor is installed
2017-10-27 08:36:16,252 20249 INFO pgq_ext is installed
2017-10-27 08:36:16,253 20249 INFO pgq_node is installed
2017-10-27 08:36:16,254 20249 INFO londiste is installed
2017-10-27 08:36:16,254 20249 INFO londiste.global_add_table is installed
2017-10-27 08:36:16,262 20249 INFO Node is already initialized as root

create-leaf slave_IP

su postgres -c "londiste3 /etc/skytools/londiste-slave.ini create-leaf slave_IP dbname=my_database --provider='host=master_IP dbname=my_database'"
2017-10-27 08:37:10,984 20414 WARNING No host= in public connect string, bad idea
2017-10-27 08:37:10,991 20414 INFO plpgsql is installed
2017-10-27 08:37:10,992 20414 INFO pgq is installed
2017-10-27 08:37:10,993 20414 INFO pgq.get_batch_cursor is installed
2017-10-27 08:37:10,993 20414 INFO pgq_ext is installed
2017-10-27 08:37:10,994 20414 INFO pgq_node is installed
2017-10-27 08:37:10,994 20414 INFO londiste is installed
2017-10-27 08:37:10,994 20414 INFO londiste.global_add_table is installed
2017-10-27 08:37:11,006 20414 INFO Initializing node
2017-10-27 08:37:11,022 20414 ERROR Node 'slave_IP' already exists

Run londiste3 master and slave worker

su postgres -c "londiste3 -d /etc/skytools/londiste-slave.ini worker"
Ignoring stale pidfile

su postgres -c "londiste3 -d /etc/skytools/londiste-master.ini worker"
Ignoring stale pidfile

Run pgqd

pgqd -d /etc/skytools/pgqd.ini
2017-10-27 08:38:31.638 20659 LOG Starting pgqd 3.2.6

Master status

su postgres -c "londiste3 /etc/skytools/londiste-master.ini status"
Queue: appqueue   Local node: master_IP

None (None)
                              Tables: 0/0/0
                              Lag: (n/a), NOT UPTODATE
master_IP (root)
                              Tables: 0/0/0
                              Lag: 17h56m7s, Tick: 1

Try to delete the node:

su postgres -c "londiste3 /etc/skytools/londiste-slave.ini drop-node slave_IP"
2017-10-27 09:20:50,945 29464 ERROR get_node_database: cannot resolve slave_IP

su postgres -c "londiste3 /etc/skytools/londiste-slave.ini drop-node slave_IP dbname=my_database"
2017-10-27 09:21:17,859 29543 ERROR command 'drop-node' got 2 args, but expects 1: node_name

su postgres -c "londiste3 /etc/skytools/londiste-master.ini drop-node master_IP"
2017-10-27 09:24:32,038 30190 ERROR node still has subscribers

What is the correct way to fix this error? The package does not include the pgqadm.py application.
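A possible cleanup (a sketch only; the queue and node names are taken from the output above, and the pgq_node call assumes the skytools 3.2 schema installed earlier) is to remove the stale node registration directly in the slave database and then re-run create-leaf:

```sql
-- On the slave database: drop the leftover node registration so that
-- create-leaf can be re-run. pgq_node.drop_node(queue_name, node_name)
-- is part of the pgq_node schema shown as installed in the log above.
SELECT * FROM pgq_node.drop_node('appqueue', 'slave_IP');
```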


SQL Server transactional replication from 2 servers to 1 server


I am looking for recommendations on the performance of SQL Server DR.

I have 2 primary servers and 1 DR server:

Should I have 1 or 2 distribution databases, and which option is comparatively better?

If I create the distribution database on the target server, will it cost more in performance, or is it the same as on the source side?

I have around 10 databases on each of the two servers and want to have all 20 databases on one DR server. Do I need to create one publisher and one subscriber for each database? Will there be any performance issue with this setup?

Thanks & Regards, (SQL server user)

Transactional Replication - Row Filter on Linked Server condition


I have a table that has transactional replication to another database on a different server.

Due to the location of the other server there is a delay in delivery at some times of the day, based on site activity and also the bandwidth availability and performance.

As a fail safe to get the transaction across quicker, I call a remote procedure on the linked server.

I have updated/overridden the SQL-generated procedure on the destination DB [sp_MSIns_dbo..] to first check that the record does not already exist, since inserting it via both the proc and replication would create a primary key violation.

I also timestamp the replicated record upon arrival at the destination db.

My solution, although a bit hacky, is working perfectly, and I can see that the proc and replication alternate and the general delivery time is faster.

However, I am concerned about the table schema being changed or replication going stale or being re-initialised by someone else in the company, overriding my changes to the [sp_MSIns..] procedure.

Is there a way to use the Filtered Rows setting in Replication to exclude records for the Primary Key on the linked server if it exists?

MSSQL server replication - compatibility level


I currently have two SQL Servers in a merge replication setup. The publisher is running SQL Server 2016 Standard and the subscriber is running SQL Server 2016 Express. For some reason I am not able to change the compatibility level of the publication in the publication properties. The only one listed there is "SQL Server 2008 or later". Likewise I get the error:

"Incorrect value for parameter '@publication_compatibility_level'"

When running this T-SQL:

DECLARE @publication AS sysname;  
SET @publication = N'publication name*' ;   
EXEC sp_changemergepublication   
    @force_invalidate_snapshot = 1,
    @publication = @publication, 
    @property = N'publication_compatibility_level', 
    @value = N'130RTM'
GO  

The only allowed value is "100RTM", corresponding to the only version I can pick in the publication properties.

I would like to change the compatibility version to 2016. Any indications as to why this is not possible, or how it can be achieved, are much appreciated.
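For reference, the documented values for publication_compatibility_level in sp_changemergepublication stop at '100RTM' (SQL Server 2008), so '130RTM' is rejected regardless of the server version. A quick way to confirm the current level (the publication name is a placeholder):

```sql
-- Shows the publication's settings, including the backward_comp_level
-- column; replace the publication name with your own.
EXEC sp_helpmergepublication @publication = N'publication_name';
```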

CD-CM setup with merge replication


I am in the process of trying to make the publishing process quicker and simpler for one of our customers, on their sitecore based website. Through research I stumbled upon Merge Replication which might solve some of our issues, but it introduces other issues. I need your help and guidance to figure out which way is the best!

We've got a CD & CM setup, with 1 CM server which has its own SQL instance. 2 CD servers with a SQL instance each. At the moment I have the current setup:

CM (Master-, web- and core-database) Web is shown only internally on a secure admin url for the site, this works like a preview site.

CD1 & CD2 are the servers for visiting users, these each have a publishing-target in Sitecore.

When we deploy a release: 1. Deploy new code for CM. Publish templates and potential content changes for Sitecore to Web. Verify and authenticate that everything is correct. 2. Take out CD1 of the Load Balancer, deploy new code for CD1, publish templates and potential changes to Sitecore, verify and authenticate, then put server back into the load balancer. 3. Repeat step 2 for CD2. 4. Deployment done

This process is working OK for us now; we are up and running at all times, without downtime on the site.


We've got a few issues with the current setup:

  1. Our search index (Elasticsearch) is populated when CM publishes to Web, so at the moment Elasticsearch can potentially contain data that has not yet been published to the CD servers.

  2. When publishing, the editors could forget to publish to one of the CD servers, which would cause inconsistencies between the servers, which we would like to avoid.

  3. Everything needs to be published multiple times for same environment, takes up time.

  4. Editors do not know what a CD server is, they just want to have a “preview” and “Live” publishing target.


I've looked into the Merge Replication for Sitecore, and actually also have it working in a test environment. The advantage we want from this is that we only have two publishing targets:

  1. Preview (CM server preview database)

  2. Live (CM server web database, which then gets replicated out into the CD servers web databases)

  3. The Elasticsearch instance will rely on data from CM's web database, which is live data.

  4. We can have an Elasticsearch instance running on preview as well.

The issue here is that now I can't deploy to only CD1 or CD2 when doing a deployment. What if I have breaking changes towards Sitecore? The site would break if I published new, breaking Sitecore items to a server which hasn't been deployed to yet.

How can I get the best of both worlds?

SQL Replication - FTP Snapshot


SQL transactional replication is set up, but we are hitting an issue when trying to FTP the files over the internet; we have a VPN connection established to an external server.

The process could not retrieve file 'ServerNamePubName/20171113081054/TableName.pre' from the FTP site 'FTPSITENAME'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20033)

  • Publication is enabled for Internet
  • FTP is set to allow SSL connections
  • Can see the external subscription against the pub

As far as I know the firewall rules have been set correctly; I am waiting for this to be confirmed 100% by the network team.

If my FTP connection string were incorrect, I wouldn't expect the process to see the actual filename. I am a little confused about the correct syntax for the 'Path from the FTP root folder' setting on the publication: various posts say I don't need the root info of the FTP but do need to append 'ftp' at the end.
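For what it's worth, the FTP path the distribution agent uses is assembled from the ftp_address, ftp_port, and ftp_subdirectory publication properties, and the snapshot files for an internet-enabled publication are written under an ftp subfolder of the snapshot folder. A hedged sketch of adjusting the subdirectory (the publication name and path are placeholders, not a recommendation):

```sql
-- Point the 'Path from the FTP root folder' at the folder that actually
-- contains the snapshot files, relative to the FTP root.
EXEC sp_changepublication
    @publication = N'PubName',
    @property = N'ftp_subdirectory',
    @value = N'/SnapshotShare/ftp';
```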

Why does existing data not replicate when using custom stored procedures


I am setting up transactional replication in SQL Server. I am using custom stored procedures for the insert, update, and delete. I am also not replicating the schema to the target; a table is already there. The tables do not match, as the target contains audit columns. In the SPs all the values are mapped.

When I run the snapshot it completes and there are no rows in undistributed. When I check the target table, no data has been replicated.

If I go and perform a transaction on source, it is replicated to the target. Why would the existing data not replicate?

Samples below, if you need something else from this let me know.

Source Table:

CREATE TABLE [dbo].[Table_1](
[ID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[Value1] [varchar](50) NULL,
[Value2] [varchar](50) NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED 
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Target Table:

CREATE TABLE [dbo].[Table_1](
[ID] [int] IDENTITY(1,1) NOT NULL,
[SRC_ID] [int] NOT NULL,
[VALUE1] [varchar](50) NULL,
[VALUE2] [varchar](50) NULL,
[SRC_DELETE] [bit] NOT NULL,
[TRAN_DT] [datetime] NOT NULL,
[PROCESS_FLAG] [bit] NOT NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED 
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Table_1] ADD  CONSTRAINT [DF_Table_1_SRC_DELETE]  DEFAULT ((0)) FOR [SRC_DELETE]
GO

ALTER TABLE [dbo].[Table_1] ADD  CONSTRAINT [DF_Table_1_TRAN_DT]  DEFAULT (getdate()) FOR [TRAN_DT]
GO

ALTER TABLE [dbo].[Table_1] ADD  CONSTRAINT [DF_Table_1_PROCESS_FLAG]  DEFAULT ((0)) FOR [PROCESS_FLAG]
GO

Delete SP

Create procedure [dbo].[rep_del__Table_1]
@pkc1 int
as
begin
declare @primarykey_text nvarchar(100) = ''
insert into [Target].dbo.Table_1
(SRC_ID, SRC_DELETE)
values (@pkc1, 1)
if @@rowcount = 0
if @@microsoftversion>0x07320000
Begin

set @primarykey_text = @primarykey_text + '[ID] = ' + convert(nvarchar(100),@pkc1,1)
exec sp_MSreplraiserror @errorid=20598, @param1=N'[dbo].[Table_1]', @param2=@primarykey_text, @param3=13234
End
end

Insert SP

CREATE PROCEDURE [dbo].[rep_ins__Table_1]
@c1 int,
@c2 varchar(50),
@c3 varchar(50)
as
begin
insert into [Target].dbo.Table_1
(SRC_ID, VALUE1, VALUE2)
values (@c1, @c2, @c3)
end

Update SP

CREATE procedure [dbo].[rep_upd__Table_1]
@c1 int,
@c2 varchar(50),
@c3 varchar(50),
@pkc1 int
as
begin  
declare @primarykey_text nvarchar(100) = ''
insert into [Target].dbo.Table_1
(SRC_ID, VALUE1, VALUE2)
values (@pkc1, @c2, @c3)
if @@rowcount = 0
if @@microsoftversion>0x07320000
Begin

set @primarykey_text = @primarykey_text + '[ID] = ' + convert(nvarchar(100),@pkc1,1)
exec sp_MSreplraiserror @errorid=20598, @param1=N'[dbo].[Table_1]', @param2=@primarykey_text, @param3=13233
End
end

sp_addpublication

exec sp_addpublication @publication = N'Sample', @description = N'Transactional publication of database ''Source'' from Publisher ''DESKTOP''.', 
@sync_method = N'concurrent', @retention = 0, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'false', @enabled_for_internet = N'false', 
@snapshot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21, @ftp_login = N'anonymous', @allow_subscription_copy = N'false', 
@add_to_active_directory = N'false', @repl_freq = N'continuous', @status = N'active', @independent_agent = N'true', @immediate_sync = N'false', 
@allow_sync_tran = N'false', @autogen_sync_procs = N'false', @allow_queued_tran = N'false', @allow_dts = N'false', @replicate_ddl = 0, 
@allow_initialize_from_backup = N'false', @enabled_for_p2p = N'false', @enabled_for_het_sub = N'false'

sp_addarticle

exec sp_addarticle @publication = N'Sample', @article = N'Table_1', @source_owner = N'dbo', @source_object = N'Table_1', @type = N'logbased', 
@description = N'', @creation_script = N'', @pre_creation_cmd = N'truncate', @schema_option = 0x000000000203008D, @identityrangemanagementoption = N'manual', 
@destination_table = N'Table_1', @destination_owner = N'dbo', @status = 16, @vertical_partition = N'false', @ins_cmd = N'CALL [rep_ins__Table_1]', 
@del_cmd = N'CALL [rep_del__Table_1]', @upd_cmd = N'CALL [rep_upd__Table_1]'

sp_addsubscription

exec sp_addsubscription @publication = N'Sample', @subscriber = N'DESKTOP', @destination_db = N'Target', @subscription_type = N'Push', 
@sync_type = N'replication support only', @article = N'all', @update_mode = N'read only', @subscriber_type = 0
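One detail worth noting in the script above: @sync_type = N'replication support only' tells SQL Server that the subscriber already has both the schema and the existing data, so the snapshot delivers the replication procedures but does not bulk-copy rows, which matches the behavior described. A sketch of the same call with an initializing sync type (whether this fits the custom-proc setup here is an assumption):

```sql
-- 'automatic' makes the snapshot copy the existing rows to the subscriber;
-- with a pre-existing, differently-shaped target table this may also need
-- article options such as @pre_creation_cmd = N'none' to avoid dropping it.
EXEC sp_addsubscription
    @publication = N'Sample',
    @subscriber = N'DESKTOP',
    @destination_db = N'Target',
    @subscription_type = N'Push',
    @sync_type = N'automatic',
    @article = N'all',
    @update_mode = N'read only',
    @subscriber_type = 0;
```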

Can merge replication operate with 0 ms delay, i.e. be real-time?


I'm facing an issue that I have two identical databases on different servers and both working as read/write and data are inserted on both. The thing is there is a merge replication between the two databases and it's working fine but there is a minute delay until the data is synchronized. What I want is to remove this delay. I want the data to be synchronized immediately.

What kind of solutions can I apply? Is there another feature like replication but without delay, or can I make the replication work without a delay, i.e. be real-time? Any suggestions are appreciated.


The format of a message during Web synchronization was invalid. Ensure that replication components are properly configured at the Web server.


We are getting the error "The format of a message during Web synchronization was invalid. Ensure that replication components are properly configured at the Web server" in SQL Server 2012 using merge replication.

We have tried changing the registry key value WebSyncMaxXmlSize to 4 GB, but we are still getting the same error. The sync fails when we are transferring a large amount of data, at around 3100 chunks.

Risk of turning on RCSI on a transactional replication subscriber DB


How would turning on RCSI (in an effort to reduce deadlocks) affect the database if it is the subscriber of a lot of transactional replications from a few publication servers? Would any changes or issues occur in the replication that we need to take note of?

Thank you!

Merge Replication could not drop object due to Foreign Key Constraint


Good Day Everyone.

I am slightly confused here. (it doesn't take a lot to confuse me though)

I have a merge replication and it started giving me this error:

Could not drop object 'TableName' because it is referenced by a FOREIGN KEY constraint. (Source: MSSQLServer, Error number: 3726)

I understand why I'm getting the error: it is trying to drop a table that is referenced by a foreign key constraint in another table. What I am battling to understand is why it is trying to drop the table in the first place.

I have deleted the foreign key reference on the subscriber; the replication then goes through, but only once, and as soon as the process repeats I am stuck with this error again.

Can someone please shed some light on this?

SQL Replication Automatically Add table to Publisher


I am using SQL 2012 transactional replication for an entire database. Every time I add a new table, say Table A, I have to manually go to the SSMS GUI, add the table to the publication, and then resync to import the data.

Is there a way to automatically add tables with data into replication, or do I have to write a dynamic T-SQL/PowerShell script which checks sys.tables for tables that are not yet replicated and then automatically adds them with sp_addarticle?
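There is no built-in auto-add for transactional replication, so a script is the usual route. A minimal sketch (the publication name is a placeholder; it assumes a single publication in the database and that the default article options are acceptable):

```sql
-- Add every user table that is not yet an article, then let the
-- snapshot agent pick up the new articles on its next run.
DECLARE @name sysname;
DECLARE tbl_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT t.name
    FROM sys.tables AS t
    WHERE t.is_ms_shipped = 0
      AND NOT EXISTS (SELECT 1 FROM dbo.sysarticles AS a
                      WHERE a.name = t.name);
OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_addarticle
        @publication = N'MyPublication',   -- placeholder
        @article = @name,
        @source_owner = N'dbo',
        @source_object = @name,
        @type = N'logbased';
    FETCH NEXT FROM tbl_cursor INTO @name;
END
CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;
```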

Merge Replication - Invalid Column Name


Good Day everyone,

I am battling with a merge replication. I added a new column to an existing merge replication and reran the snapshot.

This replicates to 3 different servers in different locations. One replication works perfectly, but 2 of the replications fail with the following error:

The schema script 'ProductionCategories_8.prc' could not be propagated to the subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001) Get help: http://help/MSSQL_REPL-2147201001 Invalid column name 'SortOrder'. (Source: MSSQLServer, Error number: 207) Get help: http://help/207

When viewing the table where the referenced column is located, the column exists in both the publisher and the relevant subscribers.

I also checked spelling and case to ensure there are no Case Sensitivity or silly spelling mistakes, but all is fine.

I really need to get this sorted as group reporting is being hampered by this.

Any suggestions will be greatly appreciated.

how to replicate articles from different schemas?


From sp_addarticle (Transact-SQL)

I get an example of how to add an article to a publication.

This adds the table Production.Product to the publication AdvWorksProductTran:

DECLARE @publication    AS sysname;
DECLARE @table AS sysname;
DECLARE @filterclause AS nvarchar(500);
DECLARE @filtername AS nvarchar(386);
DECLARE @schemaowner AS sysname;
SET @publication = N'AdvWorksProductTran'; 
SET @table = N'Product';
SET @filterclause = N'[DiscontinuedDate] IS NULL'; 
SET @filtername = N'filter_out_discontinued';
SET @schemaowner = N'Production';

-- Add a horizontally and vertically filtered article for the Product table.
-- Manually set @schema_option to ensure that the Production schema 
-- is generated at the Subscriber (0x8000000).
EXEC sp_addarticle 
    @publication = @publication, 
    @article = @table, 
    @source_object = @table,
    @source_owner = @schemaowner, 
    @schema_option = 0x80030F3,
    @vertical_partition = N'true', 
    @type = N'logbased',
    @filter_clause = @filterclause;

What would I do if I also had the following tables, from different schemas, to add to this publication?

I want to add these two tables to the publication; how do I do it using sp_addarticle?

  1. my_schema01.Product
  2. my_schema02.Product
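A sketch, building on the example above: since article names must be unique within a publication, give each table its own @article name and set the source and destination owners per schema (the article names below are arbitrary):

```sql
-- Add my_schema01.Product and my_schema02.Product to the same publication
-- under distinct article names.
EXEC sp_addarticle
    @publication = N'AdvWorksProductTran',
    @article = N'Product_my_schema01',
    @source_owner = N'my_schema01',
    @source_object = N'Product',
    @destination_owner = N'my_schema01',
    @type = N'logbased';

EXEC sp_addarticle
    @publication = N'AdvWorksProductTran',
    @article = N'Product_my_schema02',
    @source_owner = N'my_schema02',
    @source_object = N'Product',
    @destination_owner = N'my_schema02',
    @type = N'logbased';
```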

How to avoid table locks and replicate large articles using transactional replication


We are planning to migrate our on-premises SQL database to Azure. This database has a lot of tables, and a few of them are very high-transaction tables (containing millions of records). We want to minimize the downtime of the application, so we decided to use transactional replication, with a snapshot for the initial copy of the data, and then take some downtime and do a cut-over from our application to the Azure database.

Below are the issues which we have seen so far in pre-prod:

  1. Table locks during the initial snapshot: a lot of requests from the application were failing due to these locks. How can we avoid this?
  2. Replication of 2 articles (millions of rows) failed, one of which was almost complete (90%) and the other due to a data issue. We have created 3 separate publications: one for the rest of the small tables and one for each of the 2 big tables. I know that we can reinitialize the publication and start over, but that will again delay the cut-over time and also cause table locks.
    So how can we handle the first case, where most of the data was already replicated and we do not want to start from scratch?

I hope many of you have experienced these issues and have some best practices to share.
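Two documented options that may be relevant here (a sketch, not a full recipe; the publication name is a placeholder): the 'concurrent' sync method takes the snapshot without holding share locks on the published tables for the duration of the copy, and allow_initialize_from_backup lets large subscribers be seeded from a restored backup instead of a snapshot:

```sql
-- Concurrent snapshot avoids long-held table locks during initialization;
-- initialize-from-backup sidesteps the snapshot entirely for big tables.
EXEC sp_addpublication
    @publication = N'BigTablesPub',            -- placeholder
    @sync_method = N'concurrent',
    @repl_freq = N'continuous',
    @immediate_sync = N'true',
    @allow_initialize_from_backup = N'true';
```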


MySQL Replication Relay for Peer-to-peer replication for Multi-master replications


Does anyone know about MySQL replication relay for peer-to-peer, multi-master replication?

Please share anything you know, with examples and syntax, for multiple clients acting as masters in MySQL replication.


Synchronizing process difference between replication and AlwaysOn high availability


Just wondering whether SQL Server uses different technology for transactional replication and AlwaysOn high availability, or whether it is the same behind the scenes. Basically, I need to know if SQL Server uses the same or a different technology (protocol/process) for replication and for synchronizing the secondary replicas of an availability group in SQL Server 2016. Thanks in advance.

Merge Replication From Multiple Express to Central


So I am trying to set up merge replication from multiple remote SQL 2016 Express installations to a central SQL 2016 server. This would be for reporting purposes, so I would need all the Express installations to report information to the central server daily.

The trouble I am having is that the main server is set up as the distributor and publisher (the publisher is a blank database with the 41 tables we want to report information into), and the Express installs are all subscribers. Is this even possible? If it is, can someone point me in the right direction for documentation, as I am not finding any for this purpose.

Thanks for all help!

Drop a replicated subscription table that has been orphaned after restore migration


We restored a database on a new server for a migration.

When attempting to set up a subscription to a transactional publication, we get the error:

Can't drop the table because it is set for replication.

However, it is not being replicated to.

I have already set several push subscriptions up from this server that go to another so I don't want to completely wipe out all replication and rebuild everything.

I was wondering if there is a place where I could delete the piece of information that makes the table think it is still being replicated to, so that I can add it back to a subscription.

Date        1/29/2018 3:42:13 PM
Log     Job History 

Step ID     1
Server      Server1Pub
Job Name        RepJobName
Step Name       Run agent.
Duration        00:00:05
Sql Severity    0
Sql Message ID  0
Operator Emailed    
Operator Net sent   
Operator Paged  
Retries Attempted   0

Message
2018-01-29 21:42:13.174 Copyright (c) 2016 Microsoft Corporation
2018-01-29 21:42:13.174 Microsoft SQL Server Replication Agent: distrib
2018-01-29 21:42:13.174 
2018-01-29 21:42:13.174 The timestamps prepended to the output lines are expressed in terms of UTC time.
2018-01-29 21:42:13.174 User-specified agent parameter values:
            -Publisher Server1Pub
            -PublisherDB DBPub
            -Publication PubName
            -Distributor Server1Pub
            -SubscriptionType 1
            -Subscriber Server2Sub
            -SubscriberSecurityMode 1
            -SubscriberDB db1
            -XJOBID 0x466A0D3EF212C442B2B0E6521144ED71
            -XJOBNAME ReplicationJobName
            -XSTEPID 1
            -XSUBSYSTEM Distribution
            -XSERVER Server2Sub
            -XCMDLINE 0
            -XCancelEventHandle 0000000000006F44
            -XParentProcessHandle 000000000000592C
2018-01-29 21:42:13.174 Startup Delay: 2294 (msecs)
2018-01-29 21:42:15.474 Connecting to Subscriber 'Server2Sub'
2018-01-29 21:42:15.509 Connecting to Distributor 'Server1Pub'
2018-01-29 21:42:16.279 Parameter values obtained from agent profile:
            -bcpbatchsize 2147473647
            -commitbatchsize 100
            -commitbatchthreshold 1000
            -historyverboselevel 1
            -keepalivemessageinterval 300
            -logintimeout 15
            -maxbcpthreads 1
            -maxdeliveredtransactions 0
            -pollinginterval 5000
            -querytimeout 1800
            -skiperrors 
            -transactionsperhistory 100
2018-01-29 21:42:16.864 Initializing
2018-01-29 21:42:17.239 Snapshot will be applied from the alternate folder '\\192.168.5.124\Replications\unc\Folder'
2018-01-29 21:42:17.914 Agent message code 3724. Cannot drop the table 'dbo.table' because it is being used for replication.
2018-01-29 21:42:18.144 Category:COMMAND
Source:  Failed Command
Number:  
Message: drop Table [dbo].[table]

2018-01-29 21:42:18.194 Category:NULL
Source:  Microsoft SQL Server Native Client 11.0
Number:  3724
Message: Cannot drop the table 'dbo.table' because it is being used for replication.
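One often-cited workaround (undocumented and unsupported, so treat this as a hypothesis and test it on a non-production copy first) is to clear the replication flag on just the orphaned table with sp_MSunmarkreplinfo, rather than removing all replication metadata from the restored database:

```sql
-- Clears the 'table is used for replication' marking left behind by the
-- restore; undocumented procedure, use at your own risk.
EXEC sys.sp_MSunmarkreplinfo @name = N'table', @owner = N'dbo';
```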