As long as you've got enough parallax to work out the depth information in your scene, you can push the effect to recreate viewpoints that are wider than you have real data for.
You will end up with tiny slivers of image that you don't have pixel data for wherever a foreground element diverges more than it did before, but those are easy to recreate. All post-converted 3D films have this problem to an even greater extent; there are algorithms out there to clone the surrounding pixels, or even to borrow pixels from other frames if the object is moving through the scene.
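A minimal sketch of the idea in NumPy (names and numbers are my own; real converters do sub-pixel warps, occlusion-aware compositing, and far smarter inpainting): shift each pixel horizontally by a disparity inversely proportional to its depth, then fill the uncovered slivers by cloning the nearest surviving pixel in the same row.

```python
import numpy as np

def synthesize_view(image, depth, baseline=4.0):
    """Toy novel-view synthesis from a grayscale image plus a depth map.
    image: (H, W) array; depth: (H, W) positive depths (larger = farther).
    baseline controls how far the virtual camera moves."""
    h, w = image.shape
    out = np.zeros_like(image)
    known = np.zeros((h, w), dtype=bool)
    for y in range(h):
        # Paint farthest pixels first so nearer ones overwrite them.
        for x in np.argsort(depth[y])[::-1]:
            d = int(round(baseline / depth[y, x]))  # disparity ~ 1/depth
            nx = x + d
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                known[y, nx] = True
        # Disocclusion fill: clone the nearest known pixel in the row.
        xs = np.flatnonzero(known[y])
        if xs.size:
            for x in np.flatnonzero(~known[y]):
                out[y, x] = out[y, xs[np.argmin(np.abs(xs - x))]]
    return out
```

With a flat depth of 2.0 and baseline 4.0, every pixel shifts right by 2 and the two-pixel sliver on the left edge gets filled by cloning its nearest neighbour; a per-frame version of the same fill step is what lets converters steal those pixels from adjacent frames instead.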
There are lightfield cameras out there that, instead of using a single chip, use an array of small cameras (think cell phone cameras). The Adobe one is 500 megapixels.
See the research by Todor Georgiev: http://tgeorgiev.net/ The Lytro camera is a nice cheap toy, but there are some stunning results from researchers.