Stability AI

I'll keep working with the program to see if it can be tamed at all. For the sake of fun, it's a great program, but if you want something specific, it's pretty frustrating.

Reading through the examples on the Lexica website is the best way to learn how to talk to it:
https://lexica.art/

My newest: jet skiing in the ocean with massive waves. Looks like so much fun :)
 
Damn, this Stable Diffusion stuff is getting cool.
You have to do some programmer stuff to set this up right now, but soon enough it will be available in the different GUIs.

The ability now to upload an original image and then alter it -- this is AMAZING if you're thinking about getting a tattoo, for example.
You could upload a pic of yourself, naked or shirtless or whatever, then tell it to give you whatever kind of tattoos, generate like 1,000 different images, scroll through them all, and find out which tattoo looks best on you.

Just one of the applications that springs to mind.
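For anyone who wants to try that, here's a rough sketch of what the img2img workflow looks like with Hugging Face's diffusers library -- the checkpoint name, strength value, and filenames are just placeholders, and parameter names can differ between versions, so treat it as a starting point rather than the exact recipe:

# Hypothetical img2img sketch: start from your own photo and ask for tattoos on top of it.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("me_shirtless.jpg").convert("RGB").resize((512, 512))

# Generate a batch of variations to scroll through later.
for i in range(100):                        # bump this up if your GPU can take it
    result = pipe(
        prompt="full back japanese dragon tattoo, detailed linework, photograph",
        image=init_image,
        strength=0.6,                       # how far it may drift from the original photo
        guidance_scale=7.5,
    ).images[0]
    result.save(f"tattoo_variant_{i:04d}.png")

Lower strength keeps more of your photo; higher strength gives the model more freedom with the tattoo design.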
 
I found this cool list!!
Use it to help guide your prompts -- there's a little mix-and-match sketch after the lists.

Formats

3D Render, Blueprint, Book Cover, Cartoon, Cave Painting, Collage, Diagram, Double Exposure, Glass, Illustration, Line Drawing, Macro 35Mm Photograph, Marble Statue, Oil Paint, Oil Painting, Painting, Paper, Pencil Sketch, Photograph, Pixel Art, Polaroid, Portrait, Selfie, Sketch, Tattoo, Watercolor, Watercolor Painting

Styles

Abstract, Airbrush, Anime, Art Deco, Art Nouveau, Banksy, Baroque, Bauhaus, Biopunk, Classicism, Cubism, Cybernetic, Cyberpunk, Dadaism, Dieselpunk, Fractal, Futurism, Game Of Thrones, Glitchcore, Gothic, Hieroglyphics, Impressionism, Impressionist, Low Poly, Minimalist, Op-Art, Photorealism, Pop Art, Post-Apocalyptic, Realism, Renaissance, Rick And Morty, Rococo, Steampunk, Stranger Things, Surrealism, Synthwave, The Simpsons, Ukiyo-E, Vaporwave

Perspectives

85Mm, Aerial View, Bokeh, Close Up, Dark Background, Drone, Fisheye Lens, Framed, From Above, From Behind, From Below, Full Shot, Hard Lighting, In The Distance, Isometric, Knolling, Landscape, Lens Flare, Long Shot, Low Angle, Motion Blur, On Canvas, Overhead View, Panoramic, Shallow Depth Of Field, Telephoto, White Background, Wide Angle
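If you script your generations, one easy way to use a list like this is to mix and match one entry from each category and batch out the combinations. A tiny sketch of that idea in Python -- the category picks are abbreviated and the prompt template is just my own guess at a sensible format:

import itertools
import random

# Abbreviated picks from the lists above -- extend with whatever you like.
formats = ["Oil Painting", "Pencil Sketch", "Polaroid", "Pixel Art"]
styles = ["Art Nouveau", "Cyberpunk", "Ukiyo-E", "Vaporwave"]
perspectives = ["Wide Angle", "Isometric", "Close Up", "Aerial View"]

subject = "jet skiing in the ocean with massive waves"

# Build every combination, then sample a handful to keep the queue short.
prompts = [
    f"{subject}, {fmt}, {style}, {view}"
    for fmt, style, view in itertools.product(formats, styles, perspectives)
]
for p in random.sample(prompts, 10):
    print(p)   # paste these into whatever UI or script you generate with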
 
Nice, thanks for posting it!

I'm deep into building a fast fluid sim variant this week, but I'll soon be back to AI, though the next research jaunt will be more focused on stylizing input frames, and preserving fluidity of motion in frame sequences. I suspect that any insights I can get into how to better communicate with it will be helpful, even in the stylization use case.

Would love to see more output examples from you guys, this is interesting stuff!
 
I've been waiting for them to publicly release the 1.5 version before I go back into AI... right now it's only on the official website and none of the apps have it, so I feel like I'm a 2nd-class artist right now lol. They fixed all the weird hand and face issues, so it's kinda lame knowing I shouldn't be getting these artifacts anymore and soon enough they'll be gone.
 
Yeah, I know that feeling. Why bother making a bunch of stuff that's going to be obsolete in a minute anyway? I waited for years for them to release UE5, just because I knew it was pointless to spend a lot of time doing stuff that wasn't going to matter once it released.
 
It was fun to play with but too random to be used as a pre-vis tool.
I felt exactly the same way about it. Mostly. DALL-E could come up with some good visual inspiration concepts, but it's not to the point where you could storyboard with it effectively. They've been making really steady progress in recent years though, so I expect it's just a matter of time.
 
It's crazy and also a little sad what will become of the arts...

Think about choose-your-own-adventure books... this thing, once it's better at writing books too, jesus... it could have an unlimited number of chapters that are literally made up on the spot and customized EXACTLY to your marketing profile.

How can a random author trying to break into the industry compete with that?

And for movies...
It will get to the point where people can make entire films -- soundtrack, voice actors, etc. -- just by typing it all in with AI and then selecting their favorite iterations and refining... a single person will be able to do everything by themselves...

It won't be AS GOOD as the real thing... but when it's 180 million dollars cheaper?? It's gonna make sense to do it that way.
I have a feeling a lot of what we know about the film industry will dramatically change in this century.
 
OMG, I think you're starting to understand what Save Point is, lol.
 
No, I'm talking about something far more expansive that knows as much about each individual as Facebook or TikTok.

Like, for example: I don't want to be a cat. I hate cats and I'm allergic to them; I would never read a book about a cat. Just wouldn't happen.

Now if you're talking dragons, I'm in -- a book with a dragon as the first-person perspective. Shit, I don't think I've ever read something like that. Sounds interesting to ME.
I'm talking about something that is SPECIFICALLY customized just to that individual.
 
Well, there's a LOT of complexity to implementing either approach effectively. What you're talking about is a vast, reactive AI that's wearing 50 hats and communicating effectively between them. Really, what you're describing is closer to what you would just see in a traditional CRPG.

There's just a lot of overlap between the goals you're mentioning and what I'm designing here. For example, being able to custom-market different plots tuned to the user was a design goal from the beginning, but it's not done with scraping, data aggregation, or anything similar. We just do it in the choices themselves, or that's the plan for the actual main story. Choices branch all the time, and via the map we have a record of the choices a user has made at any point in the timeline. So let's say every time a viewer is faced with a choice where they can show interest in muscle cars, they take that choice. A few instances in, we can infer that this viewer is interested in muscle cars and send them off onto an adventure where they must race, trade, or develop muscle cars. During that branch only, we can show ads relevant to automotive enthusiasts.
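To make that concrete, the bookkeeping itself can be pretty simple -- something like a per-viewer tally of interest-tagged choices that a branch or ad slot can query later. This is just a toy illustration of the idea, not how Save Point is actually implemented; the names and threshold are made up:

from collections import Counter

# Toy sketch: tag choices with interests, tally them per viewer,
# and let a branch or ad slot ask whether an interest has shown up often enough.
class ViewerProfile:
    def __init__(self):
        self.interest_counts = Counter()

    def record_choice(self, interest_tags):
        # Call whenever the viewer picks a choice tagged with interests.
        self.interest_counts.update(interest_tags)

    def is_interested(self, interest, threshold=3):
        # A few instances in, we infer the interest and can branch or target ads.
        return self.interest_counts[interest] >= threshold

viewer = ViewerProfile()
viewer.record_choice(["muscle_cars"])
viewer.record_choice(["muscle_cars", "racing"])
viewer.record_choice(["muscle_cars"])

if viewer.is_interested("muscle_cars"):
    print("route viewer onto the muscle-car adventure branch")
    print("show automotive ads during this branch only")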

I doubt more expansive is the right description. Algorithms running solo could certainly spit out more volume than SP ever would, but then that's already been done many times. You could look at AI Dungeon, available now. RimWorld, Dwarf Fortress, and similar games have pioneered infinite emergent storytelling at a low level. D&D-ruleset games often allow incredible variety in one's avatar, and emergent plots. The issue is always the quality of the product. I've studied procedural generation for about 20 years now, and I know how these things turn out better than most. It will be some time before we see purely reactive AI generations that eclipse even mid-grade television. I'd say it'll be 25 years before a robot can custom-make a CW-quality show.

My design is a hybrid, where human creatives work with AIs, mainly as a leverage system, to allow output speeds so high that the branching story paths become feasible. We can't spend a million dollars and a month developing a scene that only a small percentage of viewers will watch. The other side of it, which I think my Silicon Valley friends would miss, is that people like making movies. Completely bypassing creators in favor of automation looks good on paper, but it leaves a huge group of people unserved and would degenerate into predictable templates faster than you would imagine. AI is very promising, but I think writing people completely out of the equation is a mistake; when we build a tool, we should treat it as a tool, not as a replacement for the worker. That's an idea we've seen tried many times, and it has consistently made a small number of people rich while making the world worse for everyone else. I'd cite the war dialer, now illegal under US law. Same thought process: automate, advertise, no work, huge income. In some sense, it's the cancer that sociopaths introduce into society.

Something I do see happening in the future though, much sooner, is AI filters that add product placement items into film and television. What if a rerun of the Sopranos just had Christopher wearing a different jacket? Something that some fashion company wanted to push that year. What if you could yell at the TV, "buy Chris jacket" and it would pause the show and take you to a website where you could order it. Maybe even more subtle, a guitar leaning against a wall in the room while he talks to Tony on the phone. Point the remote at the guitar, and say "buy guitar", same thing. Has your cable box seen you looking at acoustic guitars, but never electric? People watching the same episode could see different background objects, so that client could see an acoustic propped against the wall where another viewer of the same episode could see an electric, if that's what they had been purchasing, or showing interest in.

Also, the cat is only for this pre-demo; in the main game the player is a person. The way it's built, it's actually quite possible to recreate the entire thing with different characters, so that's a long-term goal after vastly more significant goals are achieved. Lastly, there are dragons in both the Labyrinth demo and Save Point, though there are no current plans to let you drive one. That would be cool though. You do get to ride a dragon in both.

Lastly, the AI custom-tailoring itself to the individual -- that's what my first project, the Mobius Engine, was. It only had to deal with sound, which was 1000x easier, though. The engine would tailor itself to real-time feedback, such as a biofeedback monitor or events inside a game world, and could gravitate toward music composition that was having the desired effect on the client. E.g., a person says "I want to be calm," then music starts playing. The engine would remember which chords, rhythms, melodies, and sounds seemed to hit closer to the target, and begin triangulating in on a composition and mix that achieved those results in the client. It wasn't super fast, but over time it could learn what music you liked and compose new tracks in real time that were closer to that.
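The learning loop there can be thought of as a feedback-weighted sampler: score each musical element by how much the listener moved toward the target state while it was playing, then bias future picks toward the higher-scoring elements. A very rough sketch of that triangulation idea -- not the actual Mobius Engine code, and the biofeedback signal here is faked with random numbers:

import random
from collections import defaultdict

# Toy loop: elements that coincide with movement toward the target state
# gain weight, so later picks gravitate toward what seems to work.
class AdaptiveComposer:
    def __init__(self, elements):
        self.weights = defaultdict(lambda: 1.0, {e: 1.0 for e in elements})

    def pick(self):
        elems = list(self.weights)
        return random.choices(elems, weights=[self.weights[e] for e in elems])[0]

    def feedback(self, element, delta_toward_target):
        # Positive delta means the listener moved closer to the goal,
        # e.g. heart rate dropped while the target is "calm".
        self.weights[element] = max(0.1, self.weights[element] + delta_toward_target)

composer = AdaptiveComposer(["Am chord", "C chord", "slow arpeggio", "fast arpeggio"])
for _ in range(200):
    element = composer.pick()
    delta = random.uniform(-0.2, 0.3)   # stand-in for a real biofeedback reading
    composer.feedback(element, delta)

print(sorted(composer.weights.items(), key=lambda kv: -kv[1]))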
 
Yeah, what I'm talking about isn't even possible with current technology, but it's inevitable... it sucks to think about, if authors are basically entirely replaced... and the only people that read human authors are retro hippies that listen to vinyl.

Good call on the advertisements... now you could pay to have it in for a year, etc., then you'd have to renew the contract.
Makes lots of sense. Also, I think it would switch to targeted advertising like I was mentioning before: it would target whatever is specific to me, the viewer, instead of having the same product for everyone.
 
and the only people that read human authors are retro hippies that listen to vinyl.
I'll take that as a compliment, sir.
roots.jpg
 
Made some new ones today; this time I just told it to paint cats in mazes like H.R. Giger would, and it did OK, even though none of the images are really good enough to use as-is. Nothing of significance, I just thought people might find some of the images interesting.

[12 attached images: Stable Diffusion outputs of cats in mazes in the style of H.R. Giger]
 