Channel: StackExchange Replication Questions

Considerations to take into account for moving a big table into another database


In our application, we have a 67 GB table (very large compared to the rest of our tables). It acts almost as an archive table: its records are never modified, and it receives only about one tenth as many read operations as our other tables. Still, around 10K rows are added to it daily. For replication purposes (replicating this table is much less important than the others, perhaps not important at all), we intend to move it into another database.

What considerations should we take into account here? For instance:

  1. From the replication perspective, couldn't we simply exclude this table from replication rather than migrating it to a new database?

  2. Could this migration yield a performance gain for us? Is there anything we should watch out for?

  3. Is there any special configuration for this kind of table that would result in smaller indexes/files or faster selects? (We have no joins on this table.)

...and any other considerations you think are relevant.
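On the "move it to another database" idea: one sketch of how the main database could still query the moved table is `postgres_fdw`, which is available as a contrib extension in 9.3. All names below (`archive_db`, `archive_srv`, `app_user`, `big_table`, and the column list) are placeholders, not your actual schema:

```sql
-- Hypothetical sketch: the big table now lives in a separate database,
-- "archive_db", and the main database reaches it via postgres_fdw.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER archive_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'archive-host', dbname 'archive_db');

CREATE USER MAPPING FOR app_user
  SERVER archive_srv
  OPTIONS (user 'app_user', password 'secret');

-- On 9.3 the column list must be spelled out by hand
-- (IMPORT FOREIGN SCHEMA only arrived in 9.5).
CREATE FOREIGN TABLE big_table_remote (
  id         bigint,
  created_at timestamptz,
  payload    text
) SERVER archive_srv
  OPTIONS (table_name 'big_table');
```

Queries against `big_table_remote` then run on the archive database, so the table stays out of the main database's base backups and (if the archive cluster is not replicated) out of streaming replication entirely.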

Our database is PostgreSQL 9.3.13, with 4 instances on 4 machines (one primary and three standby servers), each with 24 cores and 128 GB RAM.
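Regarding question 3, a few table-level settings available in 9.3 can help an append-only table without moving it anywhere. This is a sketch under stated assumptions: the table is called `big_table`, it has a `created_at` column, an `archive_space` tablespace already exists, and queries mostly touch recent rows:

```sql
-- Rows are inserted once and never updated/deleted, so this table needs
-- far less vacuuming than a normal table; relax autovacuum for it.
ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.4);

-- Optionally move the table to cheaper/slower storage so it does not
-- compete with the hot tables for I/O.
ALTER TABLE big_table SET TABLESPACE archive_space;

-- If queries filter on recent data only, a partial index keeps the
-- index small instead of covering all 67 GB of history.
CREATE INDEX big_table_recent_idx
  ON big_table (created_at)
  WHERE created_at > DATE '2016-01-01';
```

Partitioning by date range (inheritance-based in 9.3) is the other common approach for tables like this, since old partitions can be indexed, clustered, or dropped independently.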

