The "Left-Right" concurrency control algorithm is not tricky at all; in fact, the writer's algorithm (here a single writer, for simplicity) is almost trivial:
1. Update temporary copy with new value.
2. Redirect new readers to temporary copy.
3. Wait for readers of permanent copy to finish.
4. Update permanent copy with new value.
5. Redirect new readers to permanent copy.
6. Wait for readers of temporary copy to finish.
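The six steps above can be sketched in a few dozen lines. This is an illustrative toy, not the authors' reference implementation: the class name and methods are made up, and a plain lock plus integer counters stand in for the per-copy atomic read-indicators a real implementation would use (so the readers here are not truly wait-free).

```python
import threading

class LeftRight:
    """Toy single-writer Left-Right sketch (illustrative names, not the
    reference implementation)."""

    def __init__(self, value):
        self._copies = [value, value]  # the two copies of the data
        self._read_idx = 0             # which copy new readers are directed to
        self._readers = [0, 0]         # in-flight reader count per copy
        self._lock = threading.Lock()  # stands in for atomic counter updates

    def read(self):
        # Readers announce arrival on the current copy, read it, then depart.
        with self._lock:
            idx = self._read_idx
            self._readers[idx] += 1
        try:
            return self._copies[idx]
        finally:
            with self._lock:
                self._readers[idx] -= 1

    def write(self, value):
        cur = self._read_idx
        tmp = 1 - cur
        self._copies[tmp] = value        # 1. update temporary copy
        with self._lock:
            self._read_idx = tmp         # 2. redirect new readers to it
        self._wait_for_readers(cur)      # 3. wait out readers of permanent copy
        self._copies[cur] = value        # 4. update permanent copy
        with self._lock:
            self._read_idx = cur         # 5. redirect new readers back
        self._wait_for_readers(tmp)      # 6. wait out readers of temporary copy

    def _wait_for_readers(self, idx):
        # The writer blocks here; readers never block.
        while True:
            with self._lock:
                if self._readers[idx] == 0:
                    return
```

Note that after a write both copies hold the new value, which is what lets readers be redirected freely between them.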
It's clear from the above that writers are blocking (they must wait for readers to finish) and readers are wait-free (they can always read from one of the copies). (This is no surprise, since wait-free reads and writes with a single writer require R+1 copies, where R is the number of readers.) The linearization point for writes is clearly step 2 above. So this is no substitute for MVCC (writers must still wait for readers), but it's remarkable (to me) that you can get wait-free readers with just two copies and a very simple algorithm.
One issue besides blocking writers is that readers are wait-free but not "invisible": they must update a reference counter (or equivalent) when they start and finish reading. Unlike, say, OCC, this violates the "scalable concurrency commandment" that readers must not write (i.e., modify globally shared memory). An epoch-based solution might scale better, but wouldn't scale down to fine-grained data structures like registers (e.g., individual database records). In my experience, reference counts can cause major contention on even just a few cores, though reads of disjoint data cause no contention at all. I'm less worried about the single-writer limitation, since multiplexing single-writer approaches such as flat combining are often more effective than directly supporting multiple lock-free writers anyway.

