Channel: StackExchange Replication Questions

Streaming replication archive folder taking all space- Postgresql- 9.4


We recently set up PostgreSQL streaming replication between a master (physical, CentOS 6.6) and a slave (virtual, CentOS 7.6). We tested it and streaming replication is working fine.

We created a few databases on the master and they are being replicated to the slave. Our PostgreSQL version is 9.4 and the data directory is "/var/lib/pgsql/9.4/data" on both master and slave.

While setting up streaming replication, we edited postgresql.conf on the master:

wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/9.4/archive/%f'

Our recovery.conf on the slave is:

standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=replication password=password'
trigger_file = '/var/lib/pgsql/9.4/trigger'
restore_command = 'cp /var/lib/pgsql/9.4/archive/%f "%p"'

I also added the command below to recovery.conf on the slave to clear old archives:

archive_cleanup_command = 'pg_archivecleanup /var/lib/pgsql/9.4/archive %r'

but I started seeing the error below in the slave's pg_log:

< 2019-01-16 09:02:48.937 EST >WARNING:  archive_cleanup_command "pg_archivecleanup /var/lib/pgsql/9.4/archive %r": child process exited with exit code 2
pg_archivecleanup: archive location "/var/lib/pgsql/9.4/archive" does not exist
< 2019-01-16 09:07:45.927 EST >WARNING:  archive_cleanup_command "pg_archivecleanup /var/lib/pgsql/9.4/archive %r": child process exited with exit code 2

After doing some research I realized the problem is genuine: the slave does not have the directory "/var/lib/pgsql/9.4/archive"; it is present only on the master server.

Now we have removed "archive_cleanup_command" from recovery.conf on the slave, but on our master server "/var/lib/pgsql/9.4/archive" is taking up a lot of space. We need to clear the old files inside "archive".

I have a few doubts and would like some suggestions to resolve the problem:

  1. What can be done so that we don't run into a daily "no space left on /var of the master" issue? Can we run pg_archivecleanup from the master server and remove the old files like below? (A sketch of scripting this follows this list.)
pg_archivecleanup -d archive 000000010000003700000010.00000020.backup
pg_archivecleanup:  keep WAL file "archive/000000010000003700000010" and later
pg_archivecleanup:  removing file "archive/00000001000000370000000F"
pg_archivecleanup:  removing file "archive/00000001000000370000000E"
  2. Should I set up streaming replication again with a mount point attached to both master and slave, save the master's archive to that mount point, and then add "archive_cleanup_command" back to recovery.conf?
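
Regarding option 1, a minimal sketch of scripting the cleanup on the master (for example from cron): ask the master which WAL segment the standby has already replayed (via pg_stat_replication) and let pg_archivecleanup remove everything older from the archive directory. The query and paths are assumptions based on the configuration above; adjust if there is more than one standby.

#!/bin/sh
# Run on the master as the postgres user. On CentOS the binary may live
# under /usr/pgsql-9.4/bin/pg_archivecleanup.
KEEP=$(psql -Atc "SELECT pg_xlogfile_name(replay_location) FROM pg_stat_replication LIMIT 1;")
if [ -n "$KEEP" ]; then
    pg_archivecleanup /var/lib/pgsql/9.4/archive "$KEEP"
fi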

Problem with MongoDB Replication - AWS and Windows Hosts


I've been messing with this for a bit now, and I have managed to crawl through the configuration given that the documentation is rather nonexistent.

Right now the problem is that my ReplicaSet Secondaries cannot get a heartbeat to my Primary. I am able to ping all hosts from each other and I am able to connect to the shell from all hosts.

The ReplicaSet initiated and I was able to add the members, so I know they can all communicate.

Is there something I need to open up on the firewall to get the heartbeats through? I couldn't find anything in the documentation.
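
For what it's worth, a minimal sketch of what usually has to be open: the mongod port (27017 by default; use whatever net.port your configuration specifies, so this value is an assumption) must be reachable in both directions between every member, in the AWS security group as well as in the Windows firewall on each host.

REM Allow inbound MongoDB traffic through the Windows firewall on each member
netsh advfirewall firewall add rule name="MongoDB replica set" dir=in action=allow protocol=TCP localport=27017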

Thanks!

Error 'Unknown Collation: '2048' on query


I'm doing MySQL replication for the first time.

Master mysql version: mysql Ver 14.14 Distrib 5.1.52, for pc-linux-gnu (i686) using readline 5.1

Slave Version: mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1

I already made the connection from master to slave, but when an insert query executes, it shows Error 'Unknown collation: '2048'' on query. Default database: 'master'.

I already checked Google for answers, but no solution was given for collation 2048. What should I do to fix it?

error query: Error 'Unknown collation: '2048'' on query. Default database: 'master'. Query: 'INSERT INTO DOCTORS_LOG(cno,empno,name,dept,sex,age,civilstatus,empstatus,datehired,complaints,diagnosis,dte,personnel) values (cno,'69408','JEFFREY MONTERON','PRODUCTION/MACHINERY ENGINEERING','MALE','20','SINGLE','Contractual','2017-09-18','cold x 2 days ','A> AURTI P> dynatussin 1 cap PO tid for 3-5 days',(Select sysdate()),'Dra. Joyce Guanin')' 1 row in set (0.00 sec)
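
As a first diagnostic step (a sketch, not a fix), it may help to compare the collations each server actually knows about; an "unknown collation" error during replication usually means the collation id recorded by the master does not exist on the slave.

-- Run on both master and slave and compare the highest ids present.
SELECT id, collation_name, character_set_name
FROM information_schema.collations
ORDER BY id DESC
LIMIT 10;

If the id really isn't listed on the slave, the mismatch points at a character-set or build difference between the two 5.1 installations rather than at the query itself.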

What happened to pg_xlog?


There used to be a directory called pg_xlog that stored all of the WAL in PostgreSQL. Part of restoring a base backup under the archiving scheme required me to copy the WAL into DATA_DIR/pg_xlog. What happened to this directory?
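
For context, a minimal sketch of the restore step being described, assuming the archived WAL sits in /mnt/server/archivedir (a placeholder path). In PostgreSQL 10 and later the directory was renamed from pg_xlog to pg_wal, so the copy target depends on the version:

# Releases that still ship pg_xlog (9.x)
cp /mnt/server/archivedir/0000000100000001000000* "$PGDATA/pg_xlog/"
# Releases where the same directory became pg_wal (10 and later)
cp /mnt/server/archivedir/0000000100000001000000* "$PGDATA/pg_wal/"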

Determine when subscription became inactive in transactional replication


When I try to send a tracer token on some of my publications to get the latency, I get the following error:

No active subscriptions were found. The publication must have active subscriptions in order to post a tracer token.

There are subscriptions tied to these publications as well. I can fix this by re-initializing/rebuilding replication, but is there a way to tell when the subscription stopped receiving anything? I want to determine how long this has not been working.

The tables being replicated do not have timestamps on them that would let me figure it out from the data. I have checked Replication Monitor, navigated through several of the tables in the distribution database, checked the job history and SQL Server logs, and I am not able to determine this. Is there a timestamp recorded somewhere that shows the last sync from the distributor to the subscriber?

We are using SQL Transactional Replication (Push) and on SQL Server 2012 SP4.
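
If it helps, the distribution database keeps per-run agent history, so a query along these lines (a sketch, not a definitive answer) can at least bound when each distribution agent last reported activity. Note that the history cleanup job purges this table on its own retention schedule, so very old activity may already be gone.

-- Run in the distribution database; filter on publication/subscriber as needed.
SELECT  a.publication,
        a.subscriber_db,
        MAX(h.[time]) AS last_history_entry
FROM    dbo.MSdistribution_agents  AS a
JOIN    dbo.MSdistribution_history AS h ON h.agent_id = a.id
GROUP BY a.publication, a.subscriber_db
ORDER BY last_history_entry;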

How do I query the running pg_hba configuration?


I want to test whether a replication connection is authorized by pg_hba.conf on the provider before issuing the replication-starting command, and I don't know how. (I have access to both the Unix and PostgreSQL shells on both nodes.)

For a non-replication connection, I would connect with psql using a connection string like 'host=$MASTER_IP port=5432 dbname=$DATABASE user=$DBUSER password=$DBPASSWORD'.

Context: I am writing a script to automate the setup of replication between servers, and configuration of the servers is managed through different systems/repositories (legacy reasons). Therefore, I want to test if settings are all right at each step.
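
One option, as a sketch: open the same kind of connection the standby will use, i.e. a physical replication (walsender) connection, and run IDENTIFY_SYSTEM. If pg_hba.conf on the provider has no matching "replication" entry for that user/host, the attempt is rejected with the usual pg_hba error, which is exactly the pre-flight check wanted. Substitute your replication role for $DBUSER:

psql "host=$MASTER_IP port=5432 user=$DBUSER password=$DBPASSWORD replication=yes" -c "IDENTIFY_SYSTEM;"

On PostgreSQL 10 and later there is also the pg_hba_file_rules view, which exposes the parsed contents of pg_hba.conf from SQL.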

SQL Server Delta Records Pull & Push


In our SQL Server database we have about 800+ tables, of which 40-50 are business-critical. The MIS team needs to generate reports based on those 50 business tables.

Day by day, those business tables are getting huge. Every day the MIS team pulls millions of records directly from production with queries like select * from Table1 and pushes them into their environment. Table1 may have 30 million records.

Those 50 tables get updated frequently. The MIS team only needs the delta records (updated/inserted/deleted rows).

Our senior DBA advises us to use the 4th approach below. He says replication is a failure-prone model and will lead to I/O problems.

We have a few approaches here:

  1. Always On
  2. Replication
  3. Mirroring
  4. Introducing a new column (LastModifiedDate, with an index) in those 50 tables, pulling changed records periodically, and populating the MIS environment with them (see the sketch at the end of this question).

The LastModifiedDate approach would require a huge code change.

We have a huge number of stored procedures with insert/update statements against those 50 tables, and each of them would need a code change for LastModifiedDate.

Since we only have SQL Server 2008 R2, we can't go for the Always On approach. If it is the only option, we can ask management to upgrade from 2008 to 2014/2016.

What would be the best solution from the above approaches?

Please let me know if there is any other approach.
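
For what it's worth, a minimal sketch of approach 4 against a placeholder table dbo.Table1: rather than touching every stored procedure, the column can default to the current time on insert and an AFTER UPDATE trigger can maintain it on update, so the existing insert/update code does not change; the MIS pull then becomes an incremental query on the indexed column.

ALTER TABLE dbo.Table1
    ADD LastModifiedDate datetime2 NOT NULL
        CONSTRAINT DF_Table1_LastModifiedDate DEFAULT (SYSUTCDATETIME());
GO
CREATE INDEX IX_Table1_LastModifiedDate ON dbo.Table1 (LastModifiedDate);
GO
-- Keep the column current on updates without changing the existing procedures.
CREATE TRIGGER trg_Table1_LastModified ON dbo.Table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET    t.LastModifiedDate = SYSUTCDATETIME()
    FROM   dbo.Table1 AS t
    JOIN   inserted   AS i ON i.Id = t.Id;   -- assumes a key column named Id
END;
GO
-- The periodic MIS extract then becomes:
-- SELECT * FROM dbo.Table1 WHERE LastModifiedDate > @LastPulledAt;

Two caveats: adding the NOT NULL column with a default rewrites the table on 2008 R2, so it needs a maintenance window, and deletes are still not captured by a timestamp column, so they would need separate handling.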

Cannot set up replication between two servers


I am trying to subscribe a SQL Server 2016 (SP2, CU1) server to a SQL Server 2017 publisher. I am getting the following error message:

For merge publications, the version of the Subscriber must not exceed the version of the Publisher

The database on the subscriber is running at compatibility level 130 (2016) and the publisher at 140 (2017). The version of SSMS on the subscriber is also lower than the version on the publisher.

Any advice?

Additional details

While I am aware that on a surface level this should be working, it still is not. Could the error I am getting be hiding a different problem?

  • Publisher Compatibility level: 140
  • Subscriber Compatibility level: 130

I am unable to set the compatibility level of the subscriber to 140 because that server is running 2016 and not 2017. I am also unable to upgrade the publisher right now for business reasons (I cannot take the server offline for any period of time). In addition to this, we have other 2016 (SP2, CU1) servers running as subscribers to this publisher.
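
Since the error compares server versions rather than database compatibility levels, it may be worth confirming what each instance actually reports; a small sketch to run on both publisher and subscriber:

SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level,
       SERVERPROPERTY('Edition')        AS edition;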


Foreign Key Error with Transactional Replication from Backup


I'm getting a missing foreign key error when I begin transactional replication.

Steps taken:

  1. Created a pull publication (live server)
  2. Created a copy only full backup
  3. Restored backup on reporting server
  4. Created a pull subscription from backup (live server)
  5. Created a pull subscription on subscriber (reporting server)

Looking through Replication Monitor, I start getting errors soon after:

The INSERT statement conflicted with the FOREIGN KEY constraint.

The issue is that it's trying to insert a record into a table for which there is no corresponding parent record on the subscriber (although the record does exist on the live server).

Unless I've missed a step, I'm not sure why this is happening. I suspected it might be a case of distribution records getting cleaned up too early, so I disabled the SQL Agent job, but it's still happening.

Any help would be greatly appreciated.

Postgresql 9.6 Streaming Replication Archive_mode command


I am in the process of configuring both a PostgreSQL 9.6 MASTER and SLAVE for streaming replication. First, though, I need to turn archive mode on for the MASTER. I have the option of using either of the commands below to archive WAL files from pg_xlog to /opt/mm/dxxxx:

rsync -aSIq %p /opt/mm/dxxxx/%f

OR

test ! -f /opt/mm/dxxxx/%f && cp %p /opt/mm/dxxxx/%f

I have tested both commands and I keep running into an error:

rsync: change_dir "/var/lib/pgsql/pg_xlog" failed: No such file or directory (2)

I am not able to proceed with this until I am sure the archive command will work. A failing archive command could potentially prevent the MASTER server from restarting, as I will need to execute pg_ctl restart -w after editing the postgresql.conf file.
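
For reference, a minimal sketch of how the setting usually ends up in postgresql.conf on the master, assuming /opt/mm/dxxxx is the intended destination. Note that %p is a path relative to the data directory and is substituted by the server at run time, so testing the command by hand only works from inside the data directory (e.g. /var/lib/pgsql/9.6/data on a stock install), which may explain the change_dir error against /var/lib/pgsql/pg_xlog:

archive_mode = on
archive_command = 'test ! -f /opt/mm/dxxxx/%f && cp %p /opt/mm/dxxxx/%f'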

Access denied error when archiving WAL during postgresql 9.5 replication setup


I want to set up Postgres replication on Windows. I followed a YouTube tutorial step by step.

For archive_command I use this:

archive_command = 'copy "%p" "\\\\ip-address of sharable drive\\soft\\\wal\\%f"' 

but when I look into the folder it is empty; WAL files are not transferred from pg_xlog to the desired location, and the log shows an "access denied" error. In services.msc the postgresql service logs on as Network Service.


error log:

2018-04-30 16:34:06 IST LOG:  archive command failed with exit code 1
2018-04-30 16:34:06 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:34:07 IST LOG:  archive command failed with exit code 1
2018-04-30 16:34:07 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive1\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:34:08 IST LOG:  archive command failed with exit code 1
2018-04-30 16:34:08 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:34:08 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later
2018-04-30 16:35:08 IST LOG:  archive command failed with exit code 1
2018-04-30 16:35:08 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:35:09 IST LOG:  archive command failed with exit code 1
2018-04-30 16:35:09 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:35:10 IST LOG:  archive command failed with exit code 1
2018-04-30 16:35:10 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:35:10 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later
2018-04-30 16:36:10 IST LOG:  archive command failed with exit code 1
2018-04-30 16:36:10 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:36:11 IST LOG:  archive command failed with exit code 1
2018-04-30 16:36:11 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:36:12 IST LOG:  archive command failed with exit code 1
2018-04-30 16:36:12 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:36:12 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later
2018-04-30 16:37:12 IST LOG:  archive command failed with exit code 1
2018-04-30 16:37:12 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:37:13 IST LOG:  archive command failed with exit code 1
2018-04-30 16:37:13 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:37:15 IST LOG:  archive command failed with exit code 1
2018-04-30 16:37:15 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:37:15 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later 
2018-04-30 16:38:15 IST LOG:  archive command failed with exit code 1
2018-04-30 16:38:15 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:38:16 IST LOG:  archive command failed with exit code 1
2018-04-30 16:38:16 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:38:17 IST LOG:  archive command failed with exit code 1
2018-04-30 16:38:17 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:38:17 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later
2018-04-30 16:39:17 IST LOG:  archive command failed with exit code 1
2018-04-30 16:39:17 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:39:18 IST LOG:  archive command failed with exit code 1
2018-04-30 16:39:18 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:39:19 IST LOG:  archive command failed with exit code 1
2018-04-30 16:39:19 IST DETAIL:  The failed archive command was: copy %\ip-address of sharable drive\soft\SUJITwal\000000010000000E00000083
2018-04-30 16:39:19 IST WARNING:  archiving transaction log file "000000010000000E00000083" failed too many times, will try again later
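
Since the service logs on as Network Service, it reaches a remote share as the computer account of the database host (DOMAIN\DBHOST$ in the sketch below, a placeholder), so that account, rather than your own user, needs write permission on both the share and the NTFS folder. A sketch of granting it on the file server, assuming a domain environment:

REM Run on the file server that hosts the share
icacls "D:\soft\wal" /grant "DOMAIN\DBHOST$:(OI)(CI)M"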

Connection error 2003 from MySQL slave to MariaDB master while starting replication

$
0
0

I am attempting to perform replication on a MacBook Pro with Xampp-VM and MAMP.

Master: MariaDB from Xampp-VM
Slave: MySQL from MAMP

I followed all the configuration steps in the MariaDB knowledge base: https://mariadb.com/kb/en/library/setting-up-replication/

Settings:

Master:

[mariadb]
log-bin
server_id=99
log-basename=master1
skip-networking=0
bind-address={slave server ip}

Slave:

[mysqld]
log-bin
server_id=100

The SQL command executed on the slave:

CHANGE MASTER TO MASTER_HOST='master-server-ip',
  MASTER_USER='replication_user',
  MASTER_PASSWORD='bigs3cret',
  MASTER_PORT=3306,
  MASTER_LOG_FILE='mariadb-bin.000096', #what I got from SHOW MASTER STATUS;
  MASTER_LOG_POS=568, #what I got from SHOW MASTER STATUS;
  MASTER_CONNECT_RETRY=10;

When I started the slave and checked the status, it said it was connecting to the master, but the slave was started normally, as shown in the following output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The error log of the slave DB shows connection error code 2003.

My questions:

  1. Is it possible for MariaDB of Xampp-VM to receive a remote DB connection?

  2. I left the Apache server settings of Xampp-VM at their defaults. Did I miss anything in the server settings?

  3. I can't find the error log of MariaDB in the terminal of Xampp-VM. Where is it?

  4. Is there any approach to troubleshoot the connection failure?
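
A quick way to separate a networking problem from a replication problem, as a sketch (substitute the real master address), is to try a plain client connection from the slave host first:

mysql -h master-server-ip -P 3306 -u replication_user -p -e "SELECT 1;"

If this also fails with error 2003, the problem is the VM's networking, bind-address, or a firewall rather than the CHANGE MASTER settings.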

Thanks!

Splitting Snapshot files with MaxBCPThreads for Transactional Replication


I've just set up a publication, and I'm attempting to get the snapshot to apply faster. So far the Distribution Agent is respecting the MaxBCPThreads setting, but the Snapshot Agent is not. I'm expecting it to split the files so that the threads on the Distribution Agent can go and grab the data, but it doesn't appear to do that on any snapshot.

Some possible solutions I've seen online were to update the agent profile (I originally just edited the agent job step with the flag, and that worked for the Distribution Agent but not for the Snapshot Agent).

I tried updating the agent profiles and that hasn't made any difference. I also found people saying that you should have sync_method set to native so I checked my script and I had already created the publication with native mode specified.

I'm wondering if I'm missing a specific setting that MaxBCPThreads needs in order to split the data into multiple bcp files.

I thought I had solved my own issue: It looks like you have to have a clustered index with a distinct set of ranges to get SQL Server to split the files into partitions. But right now my index seems to have 0's for all ranges.

DBCC SHOW_STATISTICS output (screenshot omitted): the histogram shows 0 for every range.
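
For reference, a sketch of the histogram check being described (the view and statistics names are placeholders):

DBCC SHOW_STATISTICS ('dbo.MyIndexedView', 'CIX_MyIndexedView') WITH HISTOGRAM;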

After additional testing, I've found that this seems to work only on replicated tables. If you replicate based on (indexed) views, you seem to get just one bcp file instead of the partitioned files you'd get from normal tables.

The question is: Why doesn't SQL replication partition bcp files for Indexed Views like it does with normal tables?

I'm replicating the indexed view itself without the table ("Indexed View as a Table"). The reason is that I have to join in identifying information about the database for the subscriber to use for other things. The only way I've found to do it so far is to manually split my views using BETWEEN, which isn't particularly efficient. I'm hoping I can get SQL Server to do what I'd expect when replicating a normal table.

how to publish user defined table type as an article in Transactional replication


I am new to replication and am trying to use transactional replication, publishing all data and schema. My stored procedure takes a user-defined table type as input.

CREATE type TableBParam as table
(
    Id Bigint,
    TableAId Bigint not null,
    FieldB1 nvarchar(50)
)

--Deadlock was observed on the save query
go
CREATE PROC SaveTableB
(  
 @val [dbo].[TableBParam] READONLY  
)  
AS 
BEGIN 
SET NOCOUNT ON;  

MERGE [dbo].[TableB] AS T  
USING (SELECT * FROM @val) AS S  
  ON ( T.Id = S.Id)  
WHEN MATCHED THEN  
    update set FieldB1 = S.FieldB1
WHEN NOT MATCHED THEN 
    insert(TableAId, FieldB1) Values(S.TableAId, S.FieldB1);

END
go

When the Snapshot Agent runs, it gives me the error "Script failed for user defined table type TableBParam".

I couldn't find an option to specify user-defined table types in the article dialog when setting up the local publication. I have also explored the article properties, which didn't help.
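
One workaround to consider, as a sketch rather than a supported publication option: since table types are not offered as articles, the type can be created on the subscriber ahead of time so that the replicated stored procedure compiles there when the snapshot is applied.

-- Run on the subscriber database before (re)initializing the subscription.
CREATE TYPE dbo.TableBParam AS TABLE
(
    Id       bigint,
    TableAId bigint NOT NULL,
    FieldB1  nvarchar(50)
);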


Appreciate your suggestions.

Force replication to statement for INSERT to a table to fire trigger on slave


We have a PROD DB which replicates to a slave DB using mixed replication. We want to add a trigger so that a row is added to our DW when a row is INSERTed into table_a (on the master). The issue is that this INSERT is coming through using row-based replication, and the trigger (which is on table_a on the slave) is not firing. We need to have the trigger on the slave table, as that is where our DW is.

Looking around online, it seems this should work if statement-based replication is used. Is it possible to force the INSERT into table_a to be processed with statement-based replication? Or is there any other way we can achieve this?

The INSERT itself is deterministic as is the trigger. We are using MySQL 5.6.
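
If it helps, with mixed replication the format can be overridden per session on the master for just the statements that must fire the slave-side trigger; a minimal sketch (requires the SUPER privilege in 5.6, cannot be changed in the middle of a transaction, and the column names are placeholders):

SET SESSION binlog_format = 'STATEMENT';
INSERT INTO table_a (col1, col2) VALUES ('x', 'y');
SET SESSION binlog_format = 'MIXED';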

If you need any other information please let me know.


Should I write to MySQL slave (replica) for reporting?


I intend to put the reporting/analytics database on the slave and run a job scheduler every night to aggregate and insert data from the operational database into the reporting database.

Should I do this, or should I set up a dedicated server for the reporting/analytics database and use some tool to aggregate and insert data from the slave into that reporting server?

Thanks for the help.

MariaDB of Xampp-VM can't receive remote connection


I am attempting to perform replication on a MacBook Pro with Xampp-VM and MAMP.

Master: MariaDB from Xampp-VM
Slave: MySQL from MAMP

I followed all the configuration steps in MariaDB knowledge base https://mariadb.com/kb/en/library/setting-up-replication/

Settings:

Master:

[mariadb]
log-bin
server_id=99 
log-basename=master1 
skip-networking=0 
bind-address={slave server ip} 

Slave:

[mysqld]
log-bin
server_id=100

The SQL command executed on the slave:

CHANGE MASTER TO MASTER_HOST='master-server-ip', 
  MASTER_USER='replication_user', 
  MASTER_PASSWORD='bigs3cret', 
  MASTER_PORT=3306, 
  MASTER_LOG_FILE='mariadb-bin.000096', #what I got from SHOW MASTER STATUS; 
  MASTER_LOG_POS=568, #what I got from SHOW MASTER STATUS; 
  MASTER_CONNECT_RETRY=10;

When I started the slave and checked the status, it said it was connecting to the master, but the slave was started normally, as shown in the following output:

Slave_IO_Running: Yes 
Slave_SQL_Running: Yes

The output of error log of slave DB

ERROR 2003 (HY000): Can't connect to MySQL server on 'master-ip' 

It seems the master DB can't receive remote connections for some reason.

My questions:

  1. Is it possible for MariaDB of Xampp-VM to receive remote DB connection?

  2. How do I find the location of the MariaDB error log from the terminal of Xampp-VM?

  3. Is there any approach to fix this?
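
A quick sanity check, as a sketch, from inside the MariaDB shell on the master: confirm that it is actually listening on a routable address rather than only on localhost, and ask it where the error log goes (this also answers question 2).

SHOW GLOBAL VARIABLES LIKE 'bind_address';
SHOW GLOBAL VARIABLES LIKE 'port';
SHOW GLOBAL VARIABLES LIKE 'log_error';   -- empty means errors go to the console/syslog of the VM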

Automatically purging binlogs after successful replication


I have a dedicated server hosting a MySQL database, and want to use replication to have an always up-to-date backup somewhere.

I chose to use Amazon RDS as my replica, and I'm following this guide:

Replication with a MySQL Instance Running External to Amazon RDS

What I fail to understand, however, is who or what is supposed to take care of purging the binary logs. The only configuration I found is expire_logs_days, which the documentation describes as:

You can also set the expire_logs_days system variable to expire binary log files automatically after a given number of days (...). If you are using replication, you should set the variable no lower than the maximum number of days your slaves might lag behind the master.

Is this the standard procedure DBAs use?

Isn't it possible to have RDS, which is the only replica, purge the logs on the master as soon as it's processed them?
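
For reference, a sketch of the two usual mechanisms, both of which run on the master (a replica never removes the master's binary logs itself; the retention value here is only an example):

-- Automatic, time-based expiry (equivalent to expire_logs_days = 7 in my.cnf):
SET GLOBAL expire_logs_days = 7;

-- Or purge explicitly from a scheduled job, never beyond the file the replica
-- is still reading (Master_Log_File in SHOW SLAVE STATUS on the replica):
PURGE BINARY LOGS TO 'mysql-bin.000123';   -- placeholder file name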

Replication Scenario on Postgres


Recently I got a confusing request from my team. I have searched the internet but found no exact information about it. I have two replication scenarios that I don't know whether they are possible or not.

  1. Replication between two servers with different database names. For example, I have two servers, A (primary) and B (slave); is it possible to replicate the database primary_product on server A into primary_product_anltycs on server B (a different database name)? Any link or tutorial for this?

  2. Replication where several databases on server A need to be transferred into a single database with multiple schemas (custom schema names) on server B. Is this possible? Any link or tutorial for this?

For the record, I have already successfully set up replication using WAL.

Any help would be very appreciated.

Options for copying data out of continuously updating tables


I work with a pretty large (O(10^1) TB) DB2 (LUW, v. 9.7) database. It's got data coming in on a continuous basis via Golden Gate replication from another DB2 database. It's used for business intelligence and analytics.

Now I'm working with a group in the parent company which is trying to build an enterprise data warehouse. They want to collect data from their own databases, as well as from their acquisitions (like my site). To this end they purchased an Oracle BDA appliance (Cloudera Hadoop), and sometime soon an Oracle Exadata box will be stood up.

Putting aside the fact that the target database is Hadoop, not a traditional RDBMS, I'm having a hard time coming up with solutions that will faithfully copy the data out of the source database, given that rows are not only being continuously inserted, but also updated. (As far as I can tell, rows are never deleted.)

Question

I'm interested in what the landscape of possible approaches looks like, as well as which approaches will scale and not be too much of a performance burden on the source database?

Current Solution

Currently we copy data to Vertica in-house, using a home-built solution. Small tables are dumped on the target and then copied in their entirety to Vertica. Large tables have a trigger that updates a table with a single row. That datum records the oldest value of a timestamped and indexed column seen in any row that's inserted or updated. All the SELECTs are done as uncommitted reads. This appears to work, but we do transfer a large amount of data; the new project requires transmission over a much greater distance and with presumably less bandwidth. Moreover, this process is only run once per week. While I don't think a lag measured in minutes is required for this new project, the principals might not be very happy with a weekly refresh.

Possible Solutions

Here's what I've brainstormed so far:

  1. A vended replication solution like Golden Gate.

  2. Some in-house solution that ships transaction logs (probably beyond our dev capabilities).

  3. A trigger that exports any inserted or updated row. I assume this would be a horrendous performance hit on the source DB.

  4. A trigger that records the primary key of any inserted/updated row (see the sketch after this list). This also appears likely to be a big performance hit.

  5. Adding an indexed timestamp column for time of last modification, and an accompanying trigger to modify it on update. The DBAs I work with claim this would be a performance hit (I can't really tell if it would be superior to numbers 3 and 4 above). Moreover, it adds the complication that the upstream data source doesn't have this column, with possible implications for the current replication process between the two DB2 databases.
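
As a concrete illustration of option 4 above, a sketch in DB2 LUW syntax with placeholder names: log the key of every inserted or updated row into a small change table that the extract job drains periodically. Whether the per-row trigger cost is acceptable is exactly the performance question raised above.

CREATE TABLE etl.big_table_changes (
    pk_value   BIGINT    NOT NULL,
    changed_at TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);

CREATE TRIGGER etl.big_table_ins
    AFTER INSERT ON prod.big_table
    REFERENCING NEW AS n
    FOR EACH ROW
    INSERT INTO etl.big_table_changes (pk_value) VALUES (n.id);

CREATE TRIGGER etl.big_table_upd
    AFTER UPDATE ON prod.big_table
    REFERENCING NEW AS n
    FOR EACH ROW
    INSERT INTO etl.big_table_changes (pk_value) VALUES (n.id);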
