We have the following setup:
- Server SRV_1 hosts two databases: an OLTP database DB_APP and a reporting database DB_REP.
- DB_APP has a number of triggers that perform data transformations (denormalization, joining to other tables to build the denormalized rows) and insert/update data in DB_REP. SRV_1 has transactional replication that publishes data from DB_REP.
- A few jobs on SRV_1 regularly modify data in DB_REP based on data from DB_APP.
- DB_REP is not used by any application; it is only a source for replication.
- Server SRV_2 is a reporting server that subscribes to DB_REP and makes it available to customers.
The problem is that a few tables in DB_APP are updated very frequently, and the triggers cause a really heavy CPU load. One of the most frequently updated tables receives about 100 updates per second.
What is the best approach to optimizing this setup? So far I have been considering the following:
- Optimize the triggers as much as possible (right now they use an IF EXISTS check to decide between INSERT and UPDATE, which I could replace with MERGE).
- Create indexed views instead of triggers and replicate the views from DB_APP to DB_REP.
- Remove the triggers, remove DB_REP, and publish replication directly from DB_APP using custom replication stored procedures (sp_MSins_xxx) with the data transformation implemented inside them. I am not sure whether I can query other tables in these procedures. And what would I do with the jobs?
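For the first option, the IF EXISTS / INSERT-or-UPDATE pattern can be collapsed into a single MERGE statement, which avoids reading the target table twice. A minimal sketch of what a trigger body might look like, assuming hypothetical tables `DB_REP.dbo.OrderDenorm` (keyed by `OrderID`) and `dbo.Customer` as the join target (these names are illustrative, not from the original setup):

```sql
-- Hypothetical AFTER INSERT/UPDATE trigger body: upsert the denormalized
-- row in one statement instead of IF EXISTS ... UPDATE ... ELSE INSERT.
-- HOLDLOCK guards against a race between concurrent MERGEs.
MERGE DB_REP.dbo.OrderDenorm WITH (HOLDLOCK) AS tgt
USING (
    SELECT i.OrderID, i.Amount, c.CustomerName   -- join to denormalize
    FROM inserted AS i
    JOIN dbo.Customer AS c ON c.CustomerID = i.CustomerID
) AS src
    ON tgt.OrderID = src.OrderID
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount,
               tgt.CustomerName = src.CustomerName
WHEN NOT MATCHED THEN
    INSERT (OrderID, Amount, CustomerName)
    VALUES (src.OrderID, src.Amount, src.CustomerName);
```

Whether this actually reduces CPU depends on the execution plan; it is worth comparing the plans and CPU times of the two forms under your 100-updates-per-second workload before committing to the rewrite.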
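For the second option, an indexed view materializes the denormalized result automatically, with no trigger code to maintain, and an indexed view can be published as a table article in transactional replication. A sketch using the same hypothetical table names as above:

```sql
-- SCHEMABINDING is required for an indexed view; note also the
-- restrictions: inner joins only (no OUTER JOIN), no subqueries, etc.
CREATE VIEW dbo.vOrderDenorm
WITH SCHEMABINDING
AS
SELECT o.OrderID, o.Amount, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID;
GO
-- The unique clustered index is what makes the view "indexed"
-- (i.e. physically materialized and maintained by the engine).
CREATE UNIQUE CLUSTERED INDEX IX_vOrderDenorm
    ON dbo.vOrderDenorm (OrderID);
```

The trade-off is that the engine still maintains the view synchronously on every base-table write, so this moves the maintenance cost out of your trigger code rather than eliminating it; again, only measurement will tell whether it is cheaper than the triggers.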
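For the third option, transactional replication lets you point an article at custom insert/update/delete procedures via the `@ins_cmd`, `@upd_cmd`, and `@del_cmd` parameters of `sp_addarticle`. One important detail relevant to the question about querying other tables: these procedures execute on the subscriber, so any joins inside them run against subscriber tables, which must therefore also exist (or be replicated) there. A hedged sketch, with hypothetical publication and procedure names:

```sql
-- Hypothetical article definition routing changes through custom
-- replication procedures on the subscriber (names are illustrative).
EXEC sp_addarticle
    @publication   = N'PubApp',
    @article       = N'Orders',
    @source_object = N'Orders',
    @ins_cmd       = N'CALL sp_MSins_Orders_Custom',
    @upd_cmd       = N'SCALL sp_MSupd_Orders_Custom',
    @del_cmd       = N'CALL sp_MSdel_Orders_Custom';
```

This shifts the transformation CPU from SRV_1 to SRV_2, which may be exactly what you want. The jobs would need a similar treatment: either rewrite them to run against SRV_2 directly, or keep a slimmed-down staging schema for whatever they produce.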