
Effect of table schema on transactional replication performance


We've implemented transactional replication (push model) over a WAN and sometimes see slow-downs during bulk updates of one specific table (i.e. a high number of 'commands not replicated' for that table). The table in question has the following format:

CREATE TABLE [Table](
    [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY, -- transactional replication requires a primary key
    [FrequentlyUpdated_0] [float] NULL,
    [FrequentlyUpdated_1] [float] NULL,
    [FrequentlyUpdated_2] [float] NULL,
    [RarelyUpdated_0] [varbinary](max) NULL,
    [RarelyUpdated_1] [varbinary](max) NULL,
    [RarelyUpdated_2] [varbinary](max) NULL
)

where the RarelyUpdated_n columns can contain data that is quite large relative to the FrequentlyUpdated_n columns (e.g. 20 MB). The question is whether splitting the table into two distinct tables, as follows, is likely to improve replication performance:

CREATE TABLE [FrequentlyUpdatedTable](
    [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [FrequentlyUpdated_0] [float] NULL,
    [FrequentlyUpdated_1] [float] NULL,
    [FrequentlyUpdated_2] [float] NULL
)

CREATE TABLE [RarelyUpdatedTable](
    [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [RarelyUpdated_0] [varbinary](max) NULL,
    [RarelyUpdated_1] [varbinary](max) NULL,
    [RarelyUpdated_2] [varbinary](max) NULL
)

Or, phrased another way: does replication performance depend on the total row size, or only on the size of the data that was actually updated?
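
For what it's worth, one way to check this empirically on the current schema is to inspect the commands the Log Reader Agent actually writes to the distribution database. A minimal sketch, assuming access to the distributor; sp_browsereplcmds can return a very large result set, so filtering by @article_id or @agent_id is advisable:

USE distribution;
GO
-- Each row is one command queued for delivery to the subscriber. For a
-- bulk UPDATE that touched only the float columns, the command text shows
-- whether the varbinary(max) values are part of what gets shipped over
-- the WAN.
EXEC sp_browsereplcmds;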

P.S. None of the servers involved in the setup is heavily loaded, so I suspect the performance issue is I/O-related.
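
For reference, both the backlog and the I/O suspicion could be checked with something like the following sketch (assuming SQL Server 2005 or later and VIEW SERVER STATE permission; the server, database and publication names are placeholders for your own):

-- 1) Quantify the 'commands not replicated' backlog for the subscription
--    (run on the distributor).
EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'PublisherServer',
    @publisher_db      = N'PublisherDb',
    @publication       = N'MyPublication',
    @subscriber        = N'SubscriberServer',
    @subscriber_db     = N'SubscriberDb',
    @subscription_type = 0;  -- 0 = push

-- 2) Per-file I/O latency on the suspected server (distributor and/or
--    subscriber). High average write stalls on the distribution database
--    or the subscriber's data/log files would support the I/O theory.
SELECT DB_NAME(vfs.database_id)                             AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON  mf.database_id = vfs.database_id
  AND mf.file_id     = vfs.file_id
ORDER BY avg_write_stall_ms DESC;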

