
Genetic


Just a little clip show from this minor side project I'm doing to develop some techniques.

This might help people understand the concept of the visual genetic. This entire reel, and in fact the whole micro project it's clipped from, is derived from one genetic I created. In this case I named it "Neon Jungle" because of the bioluminescent plants it produces. Then, to make a story, I mix in other genetics, such as "guy rowing boat" or "pyramid," and in this way I can grow entire worlds, biomes, etc.

It's simple to mutate genetics on the fly. This means that if I add some fire to a jungle, for example, I can take any genetic created from that mixture and make it the source of an entire dynasty of genetics. It's in this way that Save Point can realistically accomplish design at a universal scale.
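The mix-and-mutate idea above can be sketched in code. This is purely an illustration, not the actual Save Point system: I'm assuming a "genetic" can be modeled as a named bundle of style traits, with mixing as a union of traits and mutation as folding new traits into an existing genetic. The names (`Genetic`, `mix`, `mutate`) and the trait strings are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Genetic:
    """A named bundle of style traits (toy stand-in for a visual genetic)."""
    name: str
    traits: frozenset

def mix(a: Genetic, b: Genetic, name: str) -> Genetic:
    """Combine two genetics into a child carrying both trait sets."""
    return Genetic(name, a.traits | b.traits)

def mutate(g: Genetic, added: set, name: str) -> Genetic:
    """Derive a new genetic by adding traits, starting a new 'dynasty'."""
    return Genetic(name, g.traits | frozenset(added))

# "Neon Jungle" mixed with a story element, then mutated with fire:
neon_jungle = Genetic("Neon Jungle", frozenset({"bioluminescent plants", "jungle"}))
boat = Genetic("guy rowing boat", frozenset({"rowboat", "river"}))
scene = mix(neon_jungle, boat, "Jungle River")
dynasty_root = mutate(scene, {"fire"}, "Burning Jungle River")
```

Any genetic produced this way carries its full ancestry of traits, which is the sense in which one validated genetic can seed an entire family of derived ones.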
 
Impressive! Why was the tilt up at 1:02 so jumpy?
 
It's just a very loosely edited reel that I threw together in 10 minutes because I was trying to explain the visual genetics aspect to Spike, so that clip simply shouldn't have been used. Also, this is produced by the inferior temp animation brain, which doesn't work very well and makes mistakes like this all the time. On the reels where I actually took time to edit, I just cherry-pick the shots it gets right, which is barely even 1/4 of them. That's one of the bad shots, and it just shouldn't be in there.

Here's a reel from the same tech branch that I did edit a bit.


I have another animation system that I built that does the animation perfectly (no jumpy camera moves or melting buildings), but right now it's super slow compared to the one shown above, so I have to allocate 40x as much work time to make reels for that branch, which is the production-level branch.

And here's what is, for some reason, my most popular video by about 6 to 1: 1,500 shots from the current pipeline, minus the "not really working yet" animation stages. You can pause the video at any moment and end up looking at a perfectly composed shot from somewhere in the SP fictional universe. Color balance, composition, theme, everything on brand. Some people asked me why this is significant; it's because it happened in 12 hours.


Once I can carry the midstage quality through the final animation without quality loss, it's game on. Not long now.
 
I kinda just wondered from a technical standpoint, since it's not a real camera move: why would a fake tilt be jittery like a real one, when done badly?
 
I kinda just wondered from a technical standpoint, since it's not a real camera move: why would a fake tilt be jittery like a real one, when done badly?
I do have an answer, but... it's complicated. When the camera moves in this animation solution, it has to artificially refabricate the world of the shot as it moves. So it starts from a perfect state and tends to degenerate in quality as it diverges from the source. This can cause a number of issues, and I mean a large number, so basically it's not a good long-term solution, as any shot tends to disintegrate between 3 and 7 seconds from the origin data. If it was working conventionally, doing a digital pan like you've seen a million times before, it would be flawless, but it would not feel "alive." That's what you saw in reels like this below: rock-solid camera moves, but static and lifeless. What I'm after is free 3D camera movement in a 100% fabricated world.
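That "degenerates as it diverges from the source" behavior can be pictured with a toy model: if each refabricated frame loses some fraction of coherence relative to the last, the shot's quality decays geometrically from the origin frame. The decay rates and the usability threshold below are illustrative assumptions, not measurements from the actual pipeline, but they show how a small per-frame error compounds into a breakdown a few seconds in.

```python
def seconds_until_breakdown(decay_per_frame: float,
                            threshold: float = 0.5,
                            fps: int = 24) -> float:
    """Toy model of per-frame refabrication error.

    Each frame keeps (1 - decay_per_frame) of the previous frame's
    coherence; return how many seconds pass before coherence drops
    below the usable threshold.
    """
    quality = 1.0
    frames = 0
    while quality >= threshold:
        quality *= 1.0 - decay_per_frame
        frames += 1
    return frames / fps

# A ~0.5-1% coherence loss per frame at 24 fps puts the breakdown
# a few seconds from the origin data:
print(seconds_until_breakdown(0.007))  # roughly 4 seconds
print(seconds_until_breakdown(0.005))  # roughly 6 seconds
```

The point is that the failure isn't a camera problem at all; it's accumulated regeneration error, which is why a conventional digital pan (no refabrication) stays rock solid.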


See, perfectly smooth camera movement, but lifeless and flat compared to even the weaker animation stage I'm using now.

I don't know if I explained that very well. It's jumping because it's struggling to imagine what's on the next resolution line as the camera pans up, kind of stumbling.
 
I get it, but isn't AI art at the point where it can take care of it? I thought it was so advanced now that it could do all this.
 
I get it, but isn't AI art at the point where it can take care of it? I thought it was so advanced now that it could do all this.
Nope. Working on it. You'll occasionally see very limited clips that make it seem so, but those techniques break down fast when you try to do the real thing. Right now, the broken animation stage I'm using to fast-track minor demos is sourced from the RG2 engine, which has 50 million dollars in funding. That 50-million-dollar solution is what you see skipping the camera in the OP video. People I talk to seem to think that AI-automated animation is solved, I think because a lot of people have tried to claim the distinction of pioneering it via some touched-up 2-second clip or limited-bounds demo, but the reality at this moment is that no one in the world can provide smooth and consistent AI animation. Lots of people are working on it, though, including Microsoft, Adobe, Spielberg, and OpenAI.

This is my solution that I built, and it does work, but I don't have the investment available to automate it, and it wouldn't be for the mass market anyway, because it's not simple enough to just sell as a website toy, which is where most of these research projects make their money. Right now it takes like 4 hours to make a shot like this work, and it requires maybe 40 manual interventions per clip, or more, so it's not good for "press the button and get the results" companies such as Runway and OpenAI.

 
I get that AI is work; people need to stop complaining like it's some robot that will take over everything. It is machine learning, yes, but you also need to feed it when it's hungry.
 
In other words, it's not always

1695432020402.png


😆
 
I get that AI is work; people need to stop complaining like it's some robot that will take over everything. It is machine learning, yes, but you also need to feed it when it's hungry.
Dude, you have no idea. This is literally the most complicated and work-intensive thing I've ever done in my life. I'm averaging about 70 hours a week, and loving it.

I'm afraid that once I take this thing public in the next few months, I'll end up involved in a lot of those debates, and it's going to take a lot of patience to endure what I'm sure will be tens of thousands of uneducated opinions about how easy all of this is. I'm about 28 months into full-time work now (though I did basically start from zero when it came to full scene and character animation), and while I understand why people think I just "pushed a button," I'd challenge anyone to try to replicate the quality and consistency of that "pipeline timelapse" video. Easy to throw shade; rather difficult to build a consistent universe that feels coherent and on brand.

Anyway, I'm glad you get it. A lot goes into making this stuff actually work for more than a TikTok.
 
In other words, it's not always

1695432020402.png


😆
Some isolated bits are, and of course I automate everything I can, every time I can, but... I'll give you an example.

I got tired of manually fishing around for countries I could reference that would produce distinctive environments that I felt worked for the brand. So I went over to ChatGPT 4 and asked it to list all the countries in the world. Then I copied that list into a txt file, built a referencer into the input feed of the midstage, and told it to make a bunch of pictures of a road in each country, so I could see what looked cool.

I leave it running all night while I'm working in another program on another design aspect, then wake up the next day and check the folder.

There are now 1,837 files in there: my readout from one night of automated research. I start at 8 am and spend 6 hours straight looking at each individual output, grading them, filing them as success, failure, highlight, etc. That's just steps one and two of something as basic as "let's find some photogenic locations." Then there could be days of experimenting in similar ways with the top 10% of results, triangulating in on the winning genetic for final product use.

The OP video is a "winning genetic" that, after a week of development, consistently produces amazing visual locations under stress testing. Once one is validated, it's forward-portable as the overall tech improves across the pipeline, which now consists of many different AI brains working in series and in parallel.
 
But speaking of the "easy push-button parts," there are some, and I had a fun idea last week, if you're interested.

A contest for all of the forum members, concluding around Dec 1.

Anyone can write a story, and we vote on them, and whoever wins, I'll turn their story into a finished 22 page comic book by Christmas.

1695433755567.png


And I almost forgot: did you ever get the notification for those album covers I made for you a couple of months back? I never heard any reply and just forgot about it, but with the recent web issues, it occurred to me that maybe you just didn't see it. The thread was called "Heavy Metal Album Covers" and has a handful of cloud-shouting-themed metal CD covers.
 
A contest for all of the forum members, concluding around Dec 1.

Anyone can write a story, and we vote on them, and whoever wins, I'll turn their story into a finished 22 page comic book by Christmas.

OK, I'll bite. (And, by the way, the comic panels are great!)

I thought of this bit I wrote one night, some time ago, just kind of free-writing, because it was about a recent topic: UFO encounters. I found it buried in some digital folder and cleaned it up a little. It was an exercise in first-person voice, comic in intention, like Salinger in Catcher in the Rye or Philip Roth in Portnoy's Complaint. For a comic book it might need a little adaptation, although the narration might actually work, in a box or something, and there are scenes that might be funny when illustrated.

As a piece of writing, it needs to read fast and easy. With, like, Catcher in the Rye or Portnoy's Complaint, I get immediately drawn in; I hear Holden Caulfield, Alexander Portnoy, and they are vastly entertaining and delightfully, literally, lol funny. It's a high bar, and this thing was just a spontaneous little jump at it.

Anyway, trigger warning: This narrator is a little blasphemous, so anyone uneasy with their prophet being disrespected probably wouldn't like him (or me) much.

Blue Wagon

(It's a little long, 2,700 words, 7 pages, so if nobody wants to bother, I understand.)


(However, if anyone did bother, my question would be: were you amused? Did you, at any point, laugh?)
 