We are observing abnormally large log writes whenever we update a table that is used as a parameterized row filter in a merge replication publication. There are a number of join filters as well.
When we execute a stored procedure that simply deletes some rows and inserts new rows into that table, it can take anywhere from a few seconds to a few minutes. During that time, 200-300 MB worth of data is written to the transaction log.
At first, I assumed it was because we had precomputed partitions enabled, so modifying data in the table would cause the partitions to be recalculated. As a test, I turned off precomputed partitions (while keeping the partition groups, AKA "optimize synchronization"). That didn't help. I then turned off the optimization altogether. That still didn't help.
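For reference, this is roughly how the settings were toggled. This is a hedged sketch, not the exact commands we ran; the publication name is a placeholder, and I believe `use_partition_groups` maps to precomputed partitions while `keep_partition_changes` maps to "optimize synchronization", but please correct me if I have those backwards:

```sql
-- Sketch: disable precomputed partitions but keep "optimize synchronization"
-- (publication name 'MyMergePub' is a placeholder)
EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'use_partition_groups',
    @value       = N'false';

EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'keep_partition_changes',
    @value       = N'true';
```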
So precomputed partitions don't seem to be the reason for the large volume of log writes whenever that table is updated. I'm stumped now. What else could be being written, and how can we avoid it?
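In case it helps narrow things down, one way I could try to see what is actually filling the log is to aggregate log records by allocation unit. This is a diagnostic sketch using the undocumented `sys.fn_dblog` function (run in a test environment, inside or right after the offending transaction); if most of the volume lands on merge system tables such as `MSmerge_contents` or `MSmerge_genhistory`, that would point at the merge change-tracking triggers rather than the base table itself:

```sql
-- Sketch: group active log records by the object they touch
-- to see which allocation units account for the 200-300 MB
SELECT
    AllocUnitName,
    Operation,
    COUNT(*)                  AS LogRecords,
    SUM([Log Record Length])  AS TotalBytes
FROM sys.fn_dblog(NULL, NULL)
GROUP BY AllocUnitName, Operation
ORDER BY TotalBytes DESC;
```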