This!
Even with a perfectly stable framerate, this technique has to delay displaying the next real frame in order to fit compensation frames in between.
As an example, let's take a magic VR helmet running at 120Hz with instant processing (i.e. 0ms GTG time, which doesn't exist) and a video card capped at a perfectly stable 30 FPS (aka 30Hz).
We will split a second into ticks - we'll use the VR helmet's frequency of 120 Hz, so we have 120 ticks, numbered 1 to 120. (Just to annoy my fellow programmers!)
The video card therefore delivers a new real frame every 4th tick - the 1st, 5th, 9th, and so on.
Without motion compensation, we would simply display each real frame on the tick it arrives - 1st, 5th, 9th, etc. - and hold it for the three ticks in between.
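To make the tick model concrete, here's a minimal Python sketch of the uncompensated case (the names, like `DISPLAY_HZ` and `frame_on_screen`, are made up for illustration): it just prints which real frame is on screen at each tick.

```python
# Tick model: a 120 Hz display fed by a GPU locked to a stable 30 FPS.
# All names here are illustrative, not from any real API.
DISPLAY_HZ = 120
SOURCE_FPS = 30
TICKS_PER_FRAME = DISPLAY_HZ // SOURCE_FPS  # 4 ticks between real frames

def frame_on_screen(tick: int) -> int:
    """Without motion compensation: show the newest real frame that has arrived.
    Frame #n arrives on tick 1 + 4*(n-1), i.e. ticks 1, 5, 9, ..."""
    return (tick - 1) // TICKS_PER_FRAME + 1

for tick in range(1, 13):
    print(f"tick {tick:2d}: frame #{frame_on_screen(tick)}")
# -> frame #1 on ticks 1-4, frame #2 on ticks 5-8, frame #3 on ticks 9-12
```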
With ideal (instant) motion compensation, we can't compute a transition frame until we have the new frame to interpolate towards. So we could, theoretically, show real frame #1 on the 1st tick, a computed frame based on #1 and #2 on the 5th tick (the moment #2 arrives), real frame #2 on the 6th tick, a computed frame based on #2 and #3 on the 9th tick, and so on.
This would also be jerky - two new images on back-to-back ticks, then the same image held for three ticks. We could instead push each real frame back a full source-frame interval and fill the gap with three compensation frames, but then we increase the delay - which in the real world is always higher than in this example, and compounds with every other delay in the chain. So we'd have frame #1 at the 5th tick, computed frames at the 6th/7th/8th, frame #2 at the 9th tick, etc. You've now introduced a minimum 4-tick delay, which at 120Hz is 1/30 of a second, or about 33ms! In an otherwise impossibly-perfect system!
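And a rough sketch of the two compensated schedules described above, with the 4-tick delay worked out in milliseconds (again, names like `naive_interpolation_schedule` are made up for illustration, not any real compositor API):

```python
DISPLAY_HZ = 120
TICKS_PER_FRAME = 4          # real frame arrives every 4th tick (30 FPS source)
TICK_MS = 1000 / DISPLAY_HZ  # ~8.33 ms per tick

def naive_interpolation_schedule(frames: int):
    """Scheme 1: real frame #n arrives on tick 1 + 4*(n-1); from #2 onward we show
    an interpolated frame on the arrival tick and the real frame one tick later."""
    events = [(1, "real #1")]
    for n in range(2, frames + 1):
        arrival = 1 + TICKS_PER_FRAME * (n - 1)
        events.append((arrival, f"interp #{n-1}->#{n}"))
        events.append((arrival + 1, f"real #{n}"))
    return events

def delayed_interpolation_schedule(frames: int):
    """Scheme 2: hold every real frame back a full source-frame interval (4 ticks)
    and fill the gap with three interpolated frames."""
    events = []
    for n in range(1, frames + 1):
        display = 1 + TICKS_PER_FRAME * n  # shown 4 ticks after arrival
        events.append((display, f"real #{n}"))
        for k in range(1, TICKS_PER_FRAME):
            events.append((display + k, f"interp #{n}->#{n+1} ({k}/{TICKS_PER_FRAME})"))
    return sorted(events)

for tick, what in delayed_interpolation_schedule(2):
    print(f"tick {tick:2d}: {what}")
print(f"added latency: {TICKS_PER_FRAME} ticks = {TICKS_PER_FRAME * TICK_MS:.1f} ms")
# -> 4 ticks = 33.3 ms, before any real-world processing time is added on top
```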
What about using historical frames instead, to PREDICT the next compensation frame rather than interpolate it? Well, then, whenever something in-game (or, really, on screen) changes velocity, you get mis-compensation: overcompensating if the object slows, undercompensating if it speeds up, and compensating in the wrong direction entirely if the direction changes.
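A minimal sketch of why that prediction goes wrong, assuming the simplest possible predictor (keep the last observed velocity); the `extrapolate` helper and the 1-D positions are made up for illustration:

```python
def extrapolate(prev: float, curr: float) -> float:
    """Predict the next position by assuming the last observed velocity holds.
    (Hypothetical helper - real motion estimation works on per-block motion
    vectors, not a single scalar position.)"""
    velocity = curr - prev
    return curr + velocity

# Object moving at 10 units/frame, then braking to 2 units/frame.
positions = [0, 10, 20, 22]  # actual positions per real frame
for i in range(2, len(positions)):
    predicted = extrapolate(positions[i - 2], positions[i - 1])
    actual = positions[i]
    print(f"frame {i}: predicted {predicted}, actual {actual}, error {predicted - actual}")
# frame 2: predicted 20, actual 20, error 0   (constant velocity: fine)
# frame 3: predicted 30, actual 22, error 8   (object slowed: overcompensated)
```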
There are more problems, too:
- This doesn't help when the video card frameskips/dips.
- Instant GTG and instant motion frame computation do not exist. At best, they're sub-tick, but you'd still operate on the tick.
- Input delay already exists from game processing, etc., and this delay stacks on top of it.
- The perceived impact of input delay grows roughly exponentially with its actual length. For example, 1-2ms between keypress and onscreen action? Hardly noticeable. A 50ms delay just to start registering a motion on screen and course-correct? Maybe OK, maybe annoying. 150-200ms? Brutal.