Maybe stereoscopic video is not completely dead

I didn't even realize that there are now working models for glasses-free 3D. The current options seem very limited in terms of resolution and possible audience size, but it's just a matter of time before 3D could once again become "the thing," not just at the theaters but also at home, and this time without the awkwardness of wearing the glasses.

Good article on the state of glasses-free 3D

My only worry is that the technology people are also talking about AI 2D-to-3D conversion software that will make any movie into a 3D movie. Imagine trying to watch a fast-paced action movie with a bunch of quick cuts and handheld camera work in 3D. It's the kind of thing that kills the potential of a purposely designed and executed 3D movie before it has a chance to be properly explored. Why do I think that? Because that is exactly what happened when all those 2D movies tried to ape the success of Avatar by converting to 3D simply to cash in, with no thought at all for what 3D can mean to storytelling.

I for one look forward to the day when I can watch a good stereoscopic movie at home on a 100-inch monitor without having to wear 3D glasses.
 
I've always been really into 3D, and was a super early adopter. It started in 1998, when you could play a game called Magic Carpet with the red and blue glasses on a 486. I got the I/O glasses when they came out, and played games in 640x480 stereo for years. Then years later, I got the very first Oculus prototype, the pre-release one.

It's always been an underdeveloped sector, and the tech is still in its infancy, but I personally have had some amazing and memorable experiences inside virtual reality, and I just wish there were more interested people out there to share them with. I've played chess sitting at the same table as a person in Japan. I've watched game pieces come to life and walk around like the game in Star Wars. I've stuck my head out the window of a plane as I flew over Hokkaido at sunrise. I've driven a Ferrari 360 Modena along the coastal highways of California. It's actually all been really cool.

As far as passive 3D viewing surfaces go, they've been working on them for a while now. I think a few phones may even have it already. That's definitely the point where 3D can finally become mainstream. I never really liked the glasses, red/blue or shutter variations, and no one else did either. As far as 3D content, AI, and conversion go, it's not the greatest situation, and I think it does more to hurt the public's interest in stereoscopy than to help it. I get that when you buy a 3D monitor you want more 3D content to watch, but the depth on those artificial conversions feels very shallow and unimpressive.

However... if you see the real thing, 3D as it's meant to be, where it looks like the monitor is as deep as it is wide, or in VR with native content, it's just a whole different thing. It's amazing, and I would love to see it come back around in a format that works.

The main issue with 3D movies specifically, the thing that keeps them from being really great, is that people have to develop a focusing tolerance before it starts working well. If the filmmakers make it deep and a person with no tolerance watches it, they get a terrible headache. That's why most theatrical 3D movies appear very shallow in depth.

Anyway, really cool that you brought that up. I'm super into it, and have thought many times about making this Labyrinth project stereoscopic 3D. I could actually do it, and maybe it would be a good idea. What do you think?
 
You read my mind. Your project, or any project that has polygonal models, is a natural fit for stereoscopy.

I've practically written an entire book on stereoscopy from a technical and artistic point of view. You're right, depth budgeting needs to be considered and put into practice to keep people from experiencing eye fatigue. You can always spot 3D movies practicing depth budgeting by watching them without the glasses. You will discover that not every single shot is stereoscopic. Some are regular planar images. Your brain bridges the gap between the two and, amazingly, the 2D planar images look like they too are in 3D.
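To put a rough number on the depth budget idea, here's a little Python sketch. None of this is from a specific production; the 1-2% figures are just common rules of thumb (every stereographer uses slightly different numbers), and the function names are made up for illustration:

```python
# Rough illustration of a stereoscopic "depth budget" check.
# The percentage limits below are rule-of-thumb values, not fixed standards.

def parallax_px(frame_width_px: int, percent_of_width: float) -> float:
    """Convert a parallax budget expressed as % of frame width into pixels."""
    return frame_width_px * percent_of_width / 100.0

def check_shot(max_positive_px: float, max_negative_px: float,
               frame_width_px: int = 3840,
               positive_budget_pct: float = 2.0,    # depth "into" the screen
               negative_budget_pct: float = 1.0):   # depth "out of" the screen
    """Warn when a shot's measured disparities exceed the budget."""
    pos_limit = parallax_px(frame_width_px, positive_budget_pct)
    neg_limit = parallax_px(frame_width_px, negative_budget_pct)
    if max_positive_px > pos_limit:
        print(f"positive parallax {max_positive_px:.0f}px exceeds budget of {pos_limit:.0f}px")
    if max_negative_px > neg_limit:
        print(f"negative parallax {max_negative_px:.0f}px exceeds budget of {neg_limit:.0f}px")

# Example: a 4K-wide frame where the deepest background point is 95px apart.
check_shot(max_positive_px=95, max_negative_px=20)
```

The idea is simply that every shot's deepest and nearest points are measured against the budget before it's approved, which is exactly why the shallow-looking theatrical releases stay comfortable for untrained eyes.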

Just for fun, and in the spirit of 3D, here is an example of cross-eye 3D that I made last month. This is an ED-209 from RoboCop. I printed it on my 3D printer. It stands around 7 inches tall.

ed.jpg


Look at the picture and cross your eyes. When you do this, you will see 3 pictures. The one in the middle will be in 3D. Give your eyes some time to relax and get comfortable. Once they do, looking at the picture will feel natural.

Once you can see the middle picture in 3D, you can use your hands to block the side images so you only see the middle picture. Place your hands up in front of your face, then slowly move them together to block the left and right images so you don't see them. You will still be able to see the middle picture in 3D!
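If anyone wants to assemble their own pair, here's a minimal Python sketch using Pillow (the left.png and right.png filenames are hypothetical renders, not files from this thread). The only trick to cross-eye format is that the right-eye image goes on the left half and the left-eye image goes on the right half:

```python
# Minimal cross-eye pair assembly (hypothetical filenames).
# Cross-eye viewing swaps the images: the right-eye view goes on the LEFT side.
from PIL import Image

left_eye = Image.open("left.png").convert("RGB")    # assumed left-eye render
right_eye = Image.open("right.png").convert("RGB")  # assumed right-eye render

w, h = left_eye.size
pair = Image.new("RGB", (w * 2, h))
pair.paste(right_eye, (0, 0))   # right-eye image on the left half
pair.paste(left_eye, (w, 0))    # left-eye image on the right half
pair.save("crosseye_pair.png")
```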
 
What about a Magic Eye Movie? Jurassic Park, the Magic Eye edition:
 

I'll at least do a test cell at some point, maybe very soon. No real reason not to. It will double my render time for everything if I go down this path, and that's definitely a negative, but adding an extra layer of niche interest, such as stereoscopy, could be beneficial in several ways. I personally would love it. 2x render time is a tough pill to swallow though.
 
Yeah, but if the Unreal Engine is rendering in near real time already, it shouldn't be much of a hit, time-wise. A little off subject, but I saw a video that was claiming some incredible things about Unreal Engine 5 and that video card you sometimes mention: rendering millions and millions of polygons in real time while still using ray tracing (no light maps or baked lighting). One of the clips shown had a model taken directly from ZBrush, a full multi-million-polygon mesh, plus the other geometry in the scene plus hundreds of textures. Unless I'm mistaken, they claimed the scene they presented was rendered in real time. I thought to myself, "my god!" It almost looked like a fully rendered scene using V-Ray or Arnold.
 
Unfortunately, what I'm doing isn't in real time. We slow it way down so we can accomplish things like temporal AA and higher light attenuation ranges. It varies by scene, of course, but it's probably closer to one frame every few seconds. Still, that's 4K ray-traced CGI with post-processing at an average of about 2 seconds a frame. I used to get frame times of 90 minutes. The V-Ray frames looked a little better, maybe 20% better, but this is so fast by comparison that it enables a completely different scope and workflow. In Max I remember meticulously planning every single frame, then waiting days for a small scene to complete. This system is so fast that I shoot more like I would with a physical camera, shooting fast as action happens and then just cutting it down in post. I'd say I throw away upwards of 85% of the footage shot.

Stereo would be really awesome though. I looked into it a bit today to see if there were any prefab rigs I could just import into the main engine, but I haven't seen anything so far. I could do it manually, but that's not a good solution for rebuilding a hundred existing project sequences with a new camera. Honestly, I'd probably have trouble doing this first run in stereo, just because it needs to be so large, and I can't afford a second 3090 card for parallel processing. I'd like to have several hundred more cells this year, and launch early next year, and slowing down the render would no doubt be an issue. For example, I just finished today's shoot and sent it to render. It says 18 hours of render time, and that's at a reduced AA setting, mainly because I threw 120 humans into this one. So if I doubled that for stereo, and produced it at my normal settings, it would take almost two weeks to produce what will probably become 5 cells.

About UE5 in general, it's capable of amazing rendering of several different types. It will actually path trace now, but that's far slower; the point being, you can get those V-Ray-quality results if you're willing to wait 20x as long. Archviz setups in UE5 look nearly photoreal in real time, as you mentioned.

Houdini, V-Ray, and many others have now created native plugins that bring their specific abilities into UE, so we're at the very brink of a spectacular new age of graphics capabilities that simply hasn't been around long enough for even a tiny fraction of its potential to be realized.
 
Unreal Engine does have a 3D camera. You have to turn it on and then set the parallax. There might be more to it, but that's when I stopped reading. You might have to re-render, but I don't think you'd have to set up and animate a new camera. You might even be able to use your existing camera as either the left or right 3D camera. That would save time re-rendering. I didn't read whether the two cameras are parallel or converged. Maybe either.
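For what it's worth, this isn't Unreal's actual parallax setting, just the generic math behind a parallel rig. A common stereography rule of thumb is the "1/30 rule": the camera separation (interaxial) is roughly 1/30 of the distance to the nearest subject. A rough Python sketch, with purely illustrative numbers:

```python
# Generic parallel-rig stereography math, NOT Unreal Engine API calls.
# The 1/30 rule is a common starting point, not a hard standard.

def interaxial_1_over_30(nearest_subject_distance: float) -> float:
    """Suggested camera separation (same units as the input distance)."""
    return nearest_subject_distance / 30.0

def parallel_rig_offsets(interaxial: float) -> tuple[float, float]:
    """Horizontal offsets for the left and right cameras around the original camera."""
    half = interaxial / 2.0
    return -half, half

sep = interaxial_1_over_30(300.0)           # nearest subject ~300 cm away (made up)
left_x, right_x = parallel_rig_offsets(sep)
print(f"interaxial ~{sep:.1f} cm, offsets {left_x:+.1f} / {right_x:+.1f} cm")
```

With a parallel rig the convergence point is then set in post by shifting the two images horizontally, which is one reason reusing an existing camera as one eye can work.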
 
I did know about the internal functionality, I just always look for third-party plugins first in the research cycle. That may sound wrong, but here's my reasoning. There are a lot of niche features in UE, and they do an amazing job of making hundreds of systems work together. However, there's a limit to how deep they can go into control and testing for certain niche features, and often that gets addressed in these plugins, or "mods." For example, you can get a good range of bokeh effects from the default camera, but you can't get anamorphic bokeh that looks right, so someone built another camera where it does look right, or close at least. If no mods or patches exist for a feature, then the internal functionality is probably pretty robust. Anyway, I do mostly use internal stuff in the end, but I try to eliminate the possibility of having to switch systems twice by working backwards from the third party.

3D is a super niche usage, so I was thinking it might be underdeveloped in the vanilla version. I did find a plugin while I was looking that extends the 3D camera functionality to 360 panoramic VR shots. I could make some minor side products very quickly with single-frame 360 VR snapshots.

Thanks for finding that info though. Since I didn't turn up anything better than the internal camera yesterday, I'll try that today. I'm locked out by a render for a while though, so it might be a couple of days before I have a demo of this up and running. What I have in mind is to do an entire cell as a test, after verifying output with a 5-second clip. I want to get a feel for how hard or easy it is to actually distribute the footage. YouTube will take an SBS, and that should bridge to everything, but you know how it is: always test, never assume.
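For reference, one common way to pack two per-eye renders into an SBS file is ffmpeg's hstack filter. A hedged sketch via Python, assuming ffmpeg is installed and using hypothetical left.mp4/right.mp4 clips of matching resolution and length; the 3D flag on YouTube's side may still need to be set at upload time, so treat this only as the packing step:

```python
# Hedged sketch: pack two hypothetical per-eye renders into a full-width
# side-by-side (SBS) file with ffmpeg's hstack filter.
# Assumes ffmpeg is on the PATH and both clips match in size and duration.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "left.mp4",                       # assumed left-eye render
    "-i", "right.mp4",                      # assumed right-eye render
    "-filter_complex", "hstack=inputs=2",   # left | right, doubled width
    "-c:v", "libx264", "-crf", "18",
    "sbs_full_width.mp4",
], check=True)
```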

What kind of 3D viewing gear do you have? If I publish a 3D file here, is there anyone who could even watch it if they wanted to?

I'd also like to master in HDR, but for some reason that seems to be taking off slowly as well.
 
I used to have a RealD 3D setup with active shutter glasses and a stereo multiplexer, plus a 55-inch DLP projection TV with a stereo image sync transmitter. All of that was hooked up to my computer. I would render over/under stereo pairs. The multiplexer would convert them to checkerboard without compressing the images and send it to the TV, where it was separated and shown: left, right, left, right... After that I switched to side-by-side stereo pairs that were converted to interlaced multiplexed stereo pairs I could view with RealD 3D passive polarized glasses. Now, since 3D is mostly dead, I run research files that use the tried-and-true anaglyph color separation. I like red and cyan.
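For anyone curious how the red/cyan separation is usually assembled: the red channel comes from the left-eye image and the green and blue channels from the right-eye image. A minimal Python sketch with Pillow and NumPy (the filenames are hypothetical, and both images are assumed to be the same size):

```python
# Classic red/cyan anaglyph assembly from a hypothetical left/right pair.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]    # red channel from the left eye
anaglyph[..., 1] = right[..., 1]   # green channel from the right eye
anaglyph[..., 2] = right[..., 2]   # blue channel from the right eye

Image.fromarray(anaglyph).save("anaglyph_red_cyan.png")
```

It works on any display, which is why it survives as the fallback format even though the color reproduction suffers.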

Here are a few of my anaglyph (red/cyan) images from years past.
house3d.jpg


3dwebalien.jpg


skullCGI3D.jpg
 
Now I wish I still had my 3D glasses. They always get lost over the years.

If you're a stereoscopy enthusiast, you really need to try out the Oculus Quest 2. The thing is fully self-contained at 300 bucks, and if you haven't tried one out yet, it will blow your mind. Plus you can watch all the 3D content from past years on it as well. It won't decode anaglyph, but O/U and SBS work great; I used to watch movies inside a giant virtual theater in 3D on the CV1.

It's actually pretty great for exercise too. Boxing in VR is particularly good, because the time really flies by and you're in an aerobic state IRL the whole time with all the ducking and dodging. I used to spend a lot of time in a phenomenal climbing simulator, just hanging from a cliff by two fingers, trying to grab another rock ledge, and occasionally plummeting to my death.

I don't really think 3D is dead; I just think Sean is about right, and that it's coming back in a big way a few years from now as VR stabilizes and improves.

This was probably my favorite game to play in 2016. In the commercial she just climbs a few feet, but inside the VR world you're sometimes climbing for 30 minutes straight through some really difficult areas where you are constantly physically stretching your arms to reach the next ledge. This game works best with the hand trackers that are now included with all VR headsets.

 
HA! That looks like a lot of fun.
If anything will jump-start 3D again, it will be Avatar 2. We can only hope.
Get yourself some anaglyph glasses from Amazon. Don't waste your money on the more expensive plastic models. Their lenses are too thick. The simple paper models are the best.

amazon.jpg
 