

Seizure warning for people sensitive to flashing lights.

About 3 days ago, around midnight, I was finally able to successfully connect the two sides of the Save Point pipeline: the UE5 side and the AI side. I'm making a video showing that process working for the first time, but this isn't it. This one just shows a single concept of the process, where camera angles and FOV are perfectly matched across an unlimited number of possible scenes. Think of a physical stage with preprogrammed camera tracks, like a live broadcast, allowing modularity between pre-calibrated sets.
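To make the matched-camera idea concrete, here's a minimal sketch of the concept: if every scene variant is rendered against the same pre-programmed camera track, the outputs share identical framing and can be swapped freely. All names here (`CameraKeyframe`, `TRACK_CLIFF_PAN`, `render_layers`) are hypothetical illustrations, not the actual Save Point pipeline.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraKeyframe:
    time: float        # seconds into the shot
    position: tuple    # (x, y, z) world position
    rotation: tuple    # (pitch, yaw, roll) in degrees
    fov: float         # horizontal field of view, degrees

# One calibrated camera move, defined once per "stage"
TRACK_CLIFF_PAN = [
    CameraKeyframe(0.0, (0, 0, 180), (0, 90, 0), 35.0),
    CameraKeyframe(4.0, (0, 250, 180), (0, 90, 0), 35.0),
]

def render_layers(scene_name, track):
    """Stand-in for a render: one layer descriptor per keyframe.
    Any two scenes rendered with the same track share identical
    framing, so their output layers composite without drift."""
    return [{"scene": scene_name, "time": kf.time, "fov": kf.fov}
            for kf in track]

# Two different valley sets, one calibrated camera move:
valley_a = render_layers("valley_v1", TRACK_CLIFF_PAN)
valley_b = render_layers("valley_v2", TRACK_CLIFF_PAN)
```

Because the time/FOV signature of both renders is identical, either valley layer drops into the composite interchangeably.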
It took a really long time and a lot of work to get this running. You'd be surprised at how complex the process really was. Now that it's working, though, I can constantly improve it and gradually commit every step and protocol to memory. A month from now, I'll be able to do 30 scenes in a day.

This moment is probably the turning point in the entire project, where production speed takes an exponential leap, and screen characters and art become much easier to connect with. It's always been the downfall of CGI animation that viewers can't emotionally attach to CG characters the way they can to 2D animation. 3D animation is vastly easier to produce, but has always fallen flat in the market. Bridging 3D to 2D and automating that bridge is one of the single most important aspects of the whole Save Point project, so this is a major milestone.
What tech are you going to use to speed up the facial expressions and movements?
I'm not sure I understand the question exactly. Right now I'm using a separate AI that runs as a subset of the main one, and it does facial animation and movement pretty well from the neck up. Body animations use the technique seen above, but it's hard to explain, because you're not actually looking at one technique; it's a dozen subengines working in series and parallel.

Basically, the tech I'm demonstrating here has nothing to do with animation, or at least very little (it's complicated). Primary animation is handled in UE5, which is the part of the pipeline you've been seeing for a year; the AI side takes finished animations and converts them to a new visual format one frame at a time. The movement is actually an illusion, as it has always been in film, and the AI layer contributes almost no movement information to the frames. Motion blur, despill, deflicker, NVIDIA optical flow, composited noise overlays, and the like are used in the final stage to create a believable motion composite from the B-engine output frames.
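As one small illustration of that final compositing stage, here's a minimal sketch of temporal deflicker, one of the techniques named above: blending each frame toward a running average to smooth out per-frame brightness jitter. This is a generic textbook approach, not the post's actual implementation; frames are simplified to flat lists of pixel values.

```python
def deflicker(frames, alpha=0.7):
    """Temporal deflicker: blend each frame toward a running average so
    per-frame brightness jitter from the AI stage is smoothed out.
    frames: list of equal-length pixel lists, values in [0, 1].
    alpha: weight of the current frame (lower = stronger smoothing)."""
    out = []
    running = list(frames[0])          # seed the average with frame 0
    for frame in frames:
        running = [alpha * p + (1 - alpha) * r
                   for p, r in zip(frame, running)]
        out.append(running)
    return out

# Three flat "frames" whose brightness jitters 0.5 -> 0.7 -> 0.5:
frames = [[v] * 16 for v in (0.50, 0.70, 0.50)]
smoothed = deflicker(frames)
```

The smoothed sequence varies less frame to frame than the raw one, which is exactly what makes a stream of independently generated frames read as continuous motion.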

Though I don't understand exactly what you're asking, I'll try to answer. Facial animations in the pipeline originate from video face capture. That data is passed to UE5, where it is normalized onto a static head model, fed up into the AI layers, and then reconstituted as an animated face. I could absolutely go straight from film to the AI layer, but the lack of absolute continuity and the need to retrain a new AI for each clip make that a very unreliable option compared to the way I'm doing it. I can still get away with a cheap shot here and there, but for actual pro-caliber results, I need a level of stability that can't be achieved with one-off tactics.

The aspect of the whole tech chain that speeds things up so fast is not one specific technology. It's the process I created from the chain of technologies.

I'll try to explain very briefly. It has always been the plan to make all assets modular, as drop-in alpha layers. SP final assets are organized into pools of sets, characters, effects, skies, weather, etc., and grouped by camera angle, FOV, movement (such as "pan left slowly"), and a few other properties. This means that if I have a scene where a character stands on a cliff overlooking a valley, I can film 5 versions of what's in that valley using an exact copy of the original camera track. Beyond that point, I can drag and drop valleys into the scene, or characters, or cliffs, etc., and they will all mesh perfectly in motion on the screen. This isn't really a technology, but rather a system built on the availability of new technologies.

It should become clearer as people see more. The big takeaway is that once a difficult and time-consuming process is done, it is permanently stored and does not need to be repeated. Combined with embedded scene-compatibility information, this means that once one man walks across one forest trail with path shape x, that work never has to be repeated, and any character can walk down that path, or any compatible path, forever with no additional labor. The same goes for many, many aspects. So this is a process designed to snowball in speed, starting right now, at the connection milestone.

If you go back 2 years, you can actually find me talking about this exact stage that I just arrived at 4 days ago.

This is going to be a big deal when people understand the advantages of this method. It's crazy fast compared to the legacy method, hundreds of times faster for a result that's very close. Right now I need to buy a 5090 card to get this up to full resolution (it's running at 1080p because a 3090 can't handle this pipeline). But after the system is applied, you could get the same results on a 150-dollar card, and that's when this will open up to other contributors in a big way. Right now you need 7 grand and a PhD to work on this; after I'm done, you'll need a mouse and a laptop, and that's it.

I'd also note that there's a very simple example of this concept working in the second video above. See the dozens of cats? That's just one cat, made into one tiny SP-style asset (Protagonist cat, semi-isometric walk toward camera left), which I then copy-pasted 30 times and layered over a plate. So it looks like I animated 30 cats, but it's just one animation of one cat, done one time. Remember Scooby-Doo walking down the hallway of the haunted castle? Same thing: they made one animation of that hallway and used it in 100 episodes. You can see how useful this becomes once you can make a new hallway with a click or two and keep all the other animations intact.
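The cat trick above is plain layer duplication, and a toy version of it is easy to sketch: stack many offset copies of one alpha asset over a background plate. Asset names and the layer-dict shape here are illustrative, not the actual format.

```python
def tile_asset(asset_name, plate, count, spacing):
    """Build a composite by layering `count` copies of one alpha asset
    over a background plate at staggered x-offsets. The result looks
    like `count` separately animated characters but reuses a single
    animation, exactly one unit of work."""
    layers = [{"asset": plate, "x": 0}]            # background first
    for i in range(count):
        layers.append({"asset": asset_name, "x": i * spacing})
    return layers

# One cat walk cycle, pasted 30 times over a hallway plate:
scene = tile_asset("cat_iso_walk_left", "hallway_plate", 30, spacing=40)
```

Thirty "cats" for the cost of one: the composite has 31 layers, but only one animation was ever produced.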

At the beginning of the idea, I did some research to see if anyone had used my method before, since it seemed so logical to me. It turns out it had been used before, and was actually a foundational technique of the Hanna-Barbera studio, which used it to great effect for decades, milling out huge amounts of inexpensive content that went into syndication across dozens of countries.