I work with a pretty large (tens of TB) DB2 database (LUW, v9.7). Data arrives continuously via Golden Gate replication from another DB2 database, and the database is used for business intelligence and analytics.
Now I'm working with a group in the parent company which is trying to build an enterprise data warehouse. They want to collect data from their own databases, as well as from their acquisitions (like my site). To this end they purchased an Oracle BDA appliance (Cloudera Hadoop), and sometime soon an Oracle Exadata box will be stood up.
Putting aside the fact that the target is Hadoop rather than a traditional RDBMS, I'm having a hard time coming up with solutions that will faithfully copy the data out of the source database, given that rows are not only continuously inserted but also updated. (As far as I can tell, rows are never deleted.)
I'm interested in what the landscape of possible approaches looks like, and in which of them will scale without imposing too much of a performance burden on the source database.
Currently we copy data to Vertica in-house, using a home-built solution. Small tables are dropped on the target and recopied to Vertica in their entirety. Each large table has a trigger that maintains a one-row bookkeeping table; that single row records the oldest value of an indexed timestamp column seen in any row inserted or updated since the last extract. All the SELECTs are done as uncommitted reads. This appears to work, but we do transfer a large amount of data, and the new project requires transmission over a much greater distance, presumably with less bandwidth. Moreover, this process runs only once per week. While I don't think this new project requires lag measured in minutes, the principals might not be very happy with a weekly refresh.
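For concreteness, the trigger on each large table looks roughly like this (a minimal sketch; `big_table`, its indexed timestamp column `row_ts`, and the one-row bookkeeping table `etl_watermark` are made-up names):

```sql
-- One-row bookkeeping table; oldest_ts is NULL right after an extract.
CREATE TABLE etl_watermark (oldest_ts TIMESTAMP);
INSERT INTO etl_watermark VALUES (NULL);

-- Remembers the oldest row_ts touched since the last extract.
-- (DB2 9.7 allows one trigger on both events; on older releases
-- you'd need separate INSERT and UPDATE triggers.)
CREATE TRIGGER big_table_watermark
  AFTER INSERT OR UPDATE ON big_table
  REFERENCING NEW AS n
  FOR EACH ROW
  UPDATE etl_watermark
     SET oldest_ts = CASE
                       WHEN oldest_ts IS NULL OR n.row_ts < oldest_ts
                       THEN n.row_ts
                       ELSE oldest_ts
                     END;
```

The weekly job then does something like `SELECT ... FROM big_table WHERE row_ts >= (SELECT oldest_ts FROM etl_watermark) WITH UR` and resets `oldest_ts` to NULL once the extract is done.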
Here's what I've brainstormed so far:
1. A vended replication solution like Golden Gate.
2. Some in-house solution that ships transaction logs (probably beyond our dev capabilities).
3. A trigger that exports any inserted or updated row. I assume this would be a horrendous performance hit on the source DB.
4. A trigger that records the primary key of any inserted/updated row (sketched after this list). This also looks like a big performance hit.
5. Adding an indexed timestamp column recording the time of last modification, with an accompanying trigger to set it on update (also sketched below). The DBAs I work with claim this would be a performance hit, and I can't really tell whether it would be better or worse than numbers 3 and 4 above. Moreover, it adds the complication that the upstream data source doesn't have this column, with possible implications for the current replication process between the two DB2 databases.
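To make option 4 concrete, here's the kind of change-capture trigger I have in mind (a sketch only; the table and column names are hypothetical):

```sql
-- Side table the extract job drains; one row per touched base-table row.
CREATE TABLE big_table_changes (
  pk_id     BIGINT    NOT NULL,
  change_ts TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);

-- Every insert/update on the base table also writes the key here.
CREATE TRIGGER big_table_capture
  AFTER INSERT OR UPDATE ON big_table
  REFERENCING NEW AS n
  FOR EACH ROW
  INSERT INTO big_table_changes (pk_id) VALUES (n.pk_id);
```

The extract would join the distinct keys back to the base table, ship those rows, and delete the drained keys. The cost is an extra insert (plus index maintenance on the side table) for every write to the base table, which is why I expect it to hurt.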
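And option 5 would look something like this (again hypothetical names; note that the column has to be added and backfilled on a multi-TB table, and has to survive the existing Golden Gate mapping):

```sql
-- New column defaults to the insert time; the BEFORE trigger
-- keeps it current on updates.
ALTER TABLE big_table
  ADD COLUMN last_mod_ts TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP;

CREATE INDEX big_table_lastmod_ix ON big_table (last_mod_ts);

CREATE TRIGGER big_table_touch
  NO CASCADE BEFORE UPDATE ON big_table
  REFERENCING NEW AS n
  FOR EACH ROW
  SET n.last_mod_ts = CURRENT TIMESTAMP;
```

The appeal is that the extract then becomes a simple range scan on `last_mod_ts`; the open question is whether the per-update trigger plus the extra index maintenance is actually cheaper than options 3 and 4.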