It smells fishy to me..
It would be easier to take pinhole photos (perfect focus through the entire range) and blur in software. In short, I think that's sort of what's going on. The lens is supposed to be made up of many individual elements or "micro lenses" that each represent one PIXEL, so I'm presuming 18 million micro lenses for an 18-megapixel resolution, though it probably won't be that high-res to start with..
Anyway, the underlying idea SEEMS to be that if a light ray (all the photons that pass through one micro lens) spreads out on the sensor more than a perfectly focused ray would, then you CAN run some math and derive how far out of focus it is, and figure out what would be under the lens IF it were properly focused. Of course you lose data, which means the farther out of focus, the less detail, sort of explaining why it seems a bit soft in the examples.. ok, that's what I came up with anyway..
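For what it's worth, the "run some math" step can be sketched as shift-and-sum refocusing: if each micro lens gives you a low-res view from a slightly different aperture position, an out-of-focus point shows up shifted between views, and shifting the views back by the right amount re-aligns it. This is just my toy illustration of that idea (the `refocus` function, the `alpha` parameter, and the fake data are all my own assumptions, not anything confirmed about the actual camera):

```python
import numpy as np

def refocus(views, alpha):
    """views: dict mapping (u, v) aperture offsets -> 2D grayscale images.
    alpha: refocus parameter; each view is shifted by alpha * (u, v)
    before averaging, which pulls a chosen depth plane into focus."""
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy light field: a single bright dot that drifts with aperture
# position, the way an out-of-focus point source would.
views = {}
for u in (-1, 0, 1):
    for v in (-1, 0, 1):
        img = np.zeros((9, 9))
        img[4 + u, 4 + v] = 1.0  # dot position depends on (u, v): defocused
        views[(u, v)] = img

blurred = refocus(views, alpha=0)   # no correction: energy stays spread out
sharp = refocus(views, alpha=-1)    # shift views back: dot re-aligns
print(blurred.max(), sharp.max())   # peak is 1/9 vs 1.0 after refocusing
```

With `alpha=-1` all nine copies of the dot land on the same pixel, so the refocused peak is 9x brighter than the uncorrected average. It also shows the data-loss point above: the refocused image is only as big as one micro-lens view, which fits with why the real examples look a bit soft.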