According to this post:
In a single replica set, you cannot distribute writes; they all must go to the primary. You can already distribute reads to the secondaries via Read Preferences, as you deem appropriate. The driver keeps track of which node is the primary and which are secondaries, and routes queries appropriately.
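As I understand it, that read distribution is driven entirely by the connection configuration, not by any middlebox. A connection string along these lines (hostnames and replica set name are placeholders) would let the driver send reads to secondaries while still routing all writes to whichever node it discovers is the primary:

```
mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0&readPreference=secondaryPreferred
```

The three hosts are just a seed list; the driver contacts them, learns the current replica set topology, and keeps that topology up to date on its own.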
According to the Mongo docs:
You may also deploy a group of mongos instances and use a proxy/load balancer between the application and the mongos. In these deployments, you must configure the load balancer for client affinity so that every connection from a single client reaches the same mongos.
So basically, it seems that if you've got a single replica set of 3 nodes, you can't really use a proxy/load balancer: all writes must go to the primary, and client affinity would pin every connection to a single node anyway, so all reads would end up going to the primary too.
What I'm thinking, though, is that it might be possible to have applications connect to a load balancer. The load balancer would route all requests to the primary (not very balanced, but whatever) until the primary went down, at which point it would start routing requests to the newly elected primary.
I'm not sure whether this is possible, though: how would the load balancer know which mongod had been elected the new primary (and thus where it should route new requests)?
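One way I could imagine this working is a load balancer health check that asks each mongod whether it considers itself the primary, using the `hello` command (called `isMaster` before MongoDB 5.0), and only routes to the node that says yes. A minimal sketch of the decision logic, assuming the health check can obtain each node's `hello` response (the response shapes below are illustrative):

```python
# Sketch of the routing decision a load balancer health check could make
# for each mongod in the replica set. A node is a valid target only if
# its 'hello' response marks it as the writable primary.

def is_primary(hello_response: dict) -> bool:
    """Return True if a node's 'hello' response marks it as the primary."""
    return bool(hello_response.get("isWritablePrimary")
                or hello_response.get("ismaster"))  # pre-5.0 field name

# Illustrative responses, modeled on the 'hello' command output:
primary_reply = {"isWritablePrimary": True, "setName": "rs0"}
secondary_reply = {"isWritablePrimary": False, "secondary": True, "setName": "rs0"}

print(is_primary(primary_reply))    # True
print(is_primary(secondary_reply))  # False
```

After a failover, the old primary would start answering `isWritablePrimary: false` and the new one `true`, so a polling health check would eventually redirect traffic, subject to its polling interval.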
Assuming it were possible, this would achieve a degree of redundancy in case the primary ever goes down. I'm hoping it would also have the side effect of avoiding stale writes when a network partition occurs, since the load balancer (and thus every DB client) would only ever connect to a single primary.
Or is this a stupid question...