
Deep Fake

I downloaded some software called DeepFaceLab (DFL). It's supposed to be a very good deepfake program for swapping faces or full heads in videos. The program is huge and doesn't have the usual GUI that most people have become accustomed to; it's actually a set of batch files, each of which performs a specific task. Right now I'm learning by reading the manual and watching the YouTube videos. I'm thinking about running a test to see if I can map my wife's face over mine. The thing is, it can take days for the program to learn the two faces before it can perform the magic. Also, it seems you have to let the program learn for every single video clip you give it, even if you're using the same two faces... Anyway, that's all I know so far. Some of the results I've seen are eerie. So realistic. Is anyone else here playing with deepfake technology?
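From what I've pieced together from the manual so far, those batch files walk you through a fixed pipeline, and the folder is less intimidating once you see the shape of it. Here's a rough sketch in Python (the stage names are mine for illustration, not DFL's exact file names):

```python
# The rough shape of the pipeline DFL's numbered batch files walk you
# through. Stage names are mine for illustration, not DFL's file names.

PIPELINE = [
    ("extract frames",   "pull every frame out of the source and destination videos"),
    ("extract facesets", "detect and align the face in each frame"),
    ("train model",      "the autoencoder learns both faces; this is the days-long part"),
    ("merge",            "paste the swapped face back over each destination frame"),
    ("encode video",     "reassemble the merged frames into a finished clip"),
]

for number, (stage, what) in enumerate(PIPELINE, start=1):
    print(f"{number}. {stage}: {what}")
```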
 
Obviously I have. Back in the day (last year) you used to be able to offload the deep-learning computations, like model training, to Google Colab, which made things easier: you could just assign a task and let it run in the background on a remote server. However, so many people abused the technology that Google banned running the code on their systems, and now you're stuck running it on your local computer.

Anyway, it would be great if people would quit trying to overthrow governments in third-world countries with this software. A lot of us have legitimate uses for it, and it would be nice if it weren't banned globally.

Organized batches of scripts are the default format for these models, which are developed and shared for academic purposes. Something to look into is gaining an understanding of forks (code branches), since it's now very common for people to tweak and modify the core code for various kinds of user tasks.

Anyway, it's a very interesting area to get into, and I hope you're having fun with it.

Here's a great starting point to introduce you to the cutting-edge, alpha-state viz tech that hasn't made it into the mainstream yet.

 
Cool video. It seems so basic, but everything starts somewhere. Perhaps The Matrix is more of a cautionary tale than we might have thought.
I think I probably should have explained that link a bit. I wasn't linking to the video on the front page; I was linking you to the page itself. Two Minute Papers is basically a news feed where they publish updates on significant advances in AI and graphics technology. You can use this channel to stay on top of developments in cutting-edge code as they happen.
 
I have a very specific use for this program, and although I do find this sort of thing fun, it is not my goal to have fun.
I jump into these 5- and 10-thousand-hour tasks, and while I always have a serious goal, I've found it's very hard to maintain pace day to day if I don't try to make it fun and interesting. You can work on something for 100 hours without any dopamine, but 3,000 hours? I think I need that element of enjoying the journey to keep pace over such marathons. Typically, interest and enjoyment will run out before job x is complete, but if I can stay excited enough to get a long way in, I can make it to the end via sunk-cost logic (the point of no return).
 
I discovered that shortly after posting about the cool video. It's so exciting to see what is now possible in real, or near-real, time. Volumetric effects take forever to render; same with caustics. Yet there are now examples that render in milliseconds on the latest GPUs. Wow!
 
"what a time to be alive!"
..Two minute papers could not have put it better :yes:

I was watching the video on the Two Minute Papers page: the red characters trying to get the blue characters. The blue characters would run away. They would run into a room, then block the entrance with large cubes, which kept the red guys out. Nothing left to do. After so many iterations of the same result, the red guys figured out that they could use a ramp conveniently placed off to the side to go over the wall and into the room the blue guys were in.

Think about that. The red guys didn't just move the ramp and then use it. They needed to know that the ramp was movable. They needed some understanding of elevation and gravity, and possibly friction. Either they were programmed with some basic knowledge of engineering, or at least some life experience such as climbing a hill or steps or standing on a block, or they would not have been able to figure out that using the ramp would get them into the room where the blue guys are. They would have had to give up, go away, and move on to another adventure. Maybe in one of those adventures they would learn the concept of going up and over instead of through a doorway; then they could come back and use the ramp to go over the wall and get the blue guys.

AI must start with some basic information and skills. The red guys already knew how to walk, but they didn't know how to make use of the ramp. That took time. How did they figure it out? Even if they just decided to give it a try with no expectation of success, why didn't they try it on the first iteration? What did they learn that suddenly made them try? I guess this answer, plus so many more, is what makes AI the new technology that it is, rather than simply computer programming with more predictive instructions...

I love this shit.
 
Yes, what you're describing is the really interesting part of AI research. It's called emergent behavior, and it's one of the things I'm working on incorporating into filmmaking. I can put some deer in the forest for the background of a shot, and that looks good, but with a simple AI I can have them grazing around, looking for patches of fresh grass, etc. With a more complex AI, though, the deer can get in fights, avoid certain other deer, hide behind trees, and so on. Once the brain has goals, methods, and experimentation capabilities, that same background can take on a life of its own, and suddenly I don't know what's going to happen with my background extras; you get all these interesting behaviors "emerging".
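To take a stab at your "why didn't they try it on the first iteration" question: these agents act mostly on whatever has paid off before, plus a small dose of random experimentation, so a trick like the ramp only gets stumbled onto by luck, and once it pays off it gets reinforced into deliberate behavior. Here's a minimal sketch of that loop, a toy tabular Q-learning world in Python; nothing here is from the actual hide-and-seek code, and all the names are made up:

```python
import random

# A toy version of the setup: the seeker walks toward the room, but the
# doorway (position 3) is walled off, and the only way in is a ramp that
# works from position 2. Tabular Q-learning with epsilon-greedy exploration.

ACTIONS = ["walk", "use_ramp"]
GOAL, DOORWAY, RAMP_SPOT = 4, 3, 2
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def step(state, action):
    """One move in the toy world; returns (next_state, reward)."""
    if action == "use_ramp" and state == RAMP_SPOT:
        return GOAL, 1.0                      # over the wall!
    if action == "walk" and state + 1 != DOORWAY and state < GOAL:
        return state + 1, 0.0
    return state, 0.0                         # bumped the wall, no progress

def run_episode(epsilon=0.2, alpha=0.5, gamma=0.9, max_steps=6):
    state = 0
    for _ in range(max_steps):
        # Mostly do what has paid off before; occasionally experiment.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == GOAL:
            return True                       # got into the room
    return False

wins = [run_episode() for _ in range(500)]
print("first success on episode:", wins.index(True) + 1 if True in wins else "none")
print("success rate, last 100 episodes:", sum(wins[-100:]) / 100)
```

Run it and the first success lands on some random early episode, purely by exploration; by the last hundred episodes the ramp move is near-automatic. Scale that same idea up to deep networks and a physics engine and you get the behavior in the video.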

It's interesting to note that if you watch the behind-the-scenes documentaries about the making of HBO's Oz, they were directing the same way, with humans. All the actors in the prison were told to stay in character, going about the normal day-to-day of prison life whether the camera was on them or not, and it increased the sense of realism a great deal: the camera would be following two people talking, and in the background you would always see people eating lunch or fighting or praying or playing cards, naturally. With CG, every action of every character is typically choreographed, so it gets labor-intensive to have a crowd in the background performing hundreds of animations. This is essentially what the much-lauded monster-army system Weta Digital pioneered for the LOTR films was for. This type of thing has been in video games for years, but the time is now right to bridge these concepts into film use. Soon we will see amazing worlds teeming with life that surprises even its creators.

There is a lot of this technique visible in this trailer, with all these animals and fish imbued with slivers of AI and then filmed. There is no pose-by-pose animation being done here; this was all filmed inside an engine, where the behavior of the characters and fauna is the output of an AI controlling a blueprint that coordinates the animations automatically.


Right now, all I've implemented are crowds of people stopping at intersections, reading newspapers, taking selfies, etc. Some birds and fish are also up and running, but the really advanced stuff is yet to come. At some point in development, if I need a SWAT team to breach a house, I can just automate that by giving them the goal and watching the scenario play out 20 different ways until I have the version I want to shoot for the film. End users won't be able to see the quality difference, because they don't have the other versions to compare against, but overall it should be very noticeable in terms of the scope and quality that's possible on budget x.
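For the curious, here's roughly the shape I have in mind for the "play it out 20 ways" part. This is a hypothetical Python harness, not any real engine's API: seed each run differently, summarize what happened, and keep the seed of the take you like.

```python
import random
from dataclasses import dataclass

# Hypothetical "give them the goal, review 20 takes" harness. None of
# these names come from a real engine; it's just the shape of the idea.

@dataclass
class Take:
    seed: int
    events: list          # what happened, in order
    duration_s: float     # how long the breach took

def simulate_breach(seed: int) -> Take:
    """Stand-in for one run of the goal-driven SWAT scenario."""
    rng = random.Random(seed)
    entry = rng.choice(["front door", "back door", "side window"])
    events = [f"stack up at {entry}", "breach", "clear rooms"]
    if rng.random() < 0.3:
        events.insert(1, "flashbang")         # some takes play out differently
    return Take(seed, events, rng.uniform(8.0, 25.0))

takes = [simulate_breach(seed) for seed in range(20)]    # 20 different ways
for t in sorted(takes, key=lambda t: t.duration_s)[:5]:  # shortlist to review
    print(f"seed {t.seed:2d} {t.duration_s:5.1f}s  " + " -> ".join(t.events))

# Because each take is driven only by its seed, re-running
# simulate_breach(chosen_seed) reproduces it exactly for the shoot.
```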
 
I've played around a little bit with Massive. I'm sure you already know of that program; it's been around a while. Crowd scenes, battle scenes, riots... I didn't consider at the time that it may have been AI. I assumed the program was simply using routines to handle collisions and other situations. Whatever it was using looked pretty good to me.

Keep everyone posted when you hear news of the latest breakthroughs. For me, I'm content to sit back and simply be amazed by most of it: modeling, rigging, painting, lighting, keyframe animation, mocap, fluid effects, dynamics, and compositing. I don't think my brain has room for much more :scared: I do have some use for DeepFaceLab, though, so I'll toss out some memories from my childhood to make room for it.
 
Lol, it's been a while; I had actually forgotten the name of it. Right now SP uses Anima 4 for this. It's decent, but I'll probably swap it out for something I can normalize better. On the other hand, I need to see how far I can go with DALL-E 2, since Anima might gel just fine once normalized into a unified style by the AI.

Here are some of my early tests with the Anima system. There's no AI here really, just timers and avoidance systems, but it does a decent job of populating scenes with just a few hours' work per scene.
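If anyone's wondering what "timers and avoidance" actually means under the hood, the gist fits in a few lines. This is not Anima's API, just the classic separation-steering idea plus a timer that swaps idle actions, sketched in Python:

```python
import math, random

# A "timers + avoidance" crowd agent: walk at constant pace, steer away
# from anyone too close, and periodically switch to a canned idle action.

class Walker:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)
        self.action = "walk"
        self.timer = random.uniform(2.0, 6.0)   # seconds until next action swap

    def update(self, others, dt=0.1, radius=2.0):
        # Timer: every few seconds, pick a new action at random.
        self.timer -= dt
        if self.timer <= 0:
            self.action = random.choice(["walk", "take_selfie", "read_paper"])
            self.timer = random.uniform(2.0, 6.0)
        if self.action != "walk":
            return  # idle animations don't move the agent
        # Avoidance: push away from anyone inside the personal-space radius,
        # harder the closer they are.
        for o in others:
            dx, dy = self.x - o.x, self.y - o.y
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                self.vx += (dx / d) / d
                self.vy += (dy / d) / d
        speed = math.hypot(self.vx, self.vy) or 1.0
        self.vx, self.vy = self.vx / speed, self.vy / speed  # constant pace
        self.x += self.vx * dt
        self.y += self.vy * dt

crowd = [Walker(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(50)]
for _ in range(100):                            # ten simulated seconds
    for w in crowd:
        w.update([o for o in crowd if o is not w])
```

That's really all it takes to make a plaza look inhabited; the believability comes from the variety of idle clips, not from any intelligence.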



 