
ChatGPT - A Screenplay Treatment Experiment

Hello,

I spent some time revisiting a story I had shelved called Roshambo, and decided to take the ideas I had so far and share them with an AI collaborator. I was curious just how advanced this technology is at weaving a compelling narrative using only the ideas and concepts I fed it through several prompts.

I think some of you will find my journey quite enlightening. I actually posted about Roshambo years ago on this very forum, but I shelved the story when I got lost in the complexity of the plot and felt my screenwriting capabilities were not yet ready to tackle such a monster of a story.

So ... I decided to give ChatGPT a shot at it to see what it can do with my ideas and concepts. What follows is the chat session I had with ChatGPT ... Enjoy!

[screenshot attachment: 1677947032780.png]


more to come ...
 
I'd also add that long term, it's like so many other things, you get out what you put in. ChatGPT can write a decent story IF you tell it EXACTLY how. The issue is that your instructions need to be basically about the length of the story you are trying to write in order to get a good result, and then you have to go back and edit that, which is a separate pass, so in the end, it still just makes more sense to write the actual story yourself.
Well, yes and no ... In trying to determine its capabilities, I took an opening dialogue scene and described it over and over to ChatGPT with a tremendous amount of detail. The problem is that it simply would not retain corrections from a previous prompt; it would repeat the same mistake.

But then there would be times when it would simply surprise me out of the blue and bring it all together nicely. It is this inconsistency that makes it frustrating to work with.

I have also been trying out its coding capabilities, and this is far more impressive. I asked it to write an external sort algorithm in Java that can dynamically use the available memory to choose block sizes. It did not do badly for a first effort, but it struggles with large applications. I can see how this technology can save countless hours if the coder knows how to put modular pieces together like Lego bricks, rather than constructing the entire complex program...
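For reference, here is a minimal sketch (my own simplification, not ChatGPT's output) of the kind of external merge sort I asked for: sort memory-sized chunks, spill each sorted run to a temp file, then k-way merge, with a chunk size derived from the JVM's free heap. All class and method names here are mine, chosen for illustration.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;

// External merge sort sketch: sort chunks that fit in memory, spill each
// sorted run to a temp file, then k-way merge the runs with a min-heap.
public class ExternalSort {

    // Derive a chunk size (in ints) from the JVM's currently free heap,
    // keeping a wide safety margin so sorting doesn't exhaust memory.
    static int chunkSizeFromFreeMemory() {
        long free = Runtime.getRuntime().freeMemory();
        long ints = (free / 4) / 4; // a quarter of free heap, 4 bytes per int
        return (int) Math.max(1024, Math.min(ints, 1 << 20));
    }

    // Read the input, sorting and spilling one chunk at a time.
    static List<Path> sortChunks(Iterator<Integer> input, int chunkSize) throws IOException {
        List<Path> runs = new ArrayList<>();
        List<Integer> buf = new ArrayList<>(chunkSize);
        while (input.hasNext()) {
            buf.add(input.next());
            if (buf.size() == chunkSize || !input.hasNext()) {
                Collections.sort(buf);
                Path run = Files.createTempFile("run", ".txt");
                try (BufferedWriter w = Files.newBufferedWriter(run, StandardCharsets.UTF_8)) {
                    for (int v : buf) { w.write(Integer.toString(v)); w.newLine(); }
                }
                runs.add(run);
                buf.clear();
            }
        }
        return runs;
    }

    // K-way merge: the heap always holds the smallest unconsumed value of each run.
    static List<Integer> merge(List<Path> runs) throws IOException {
        List<BufferedReader> readers = new ArrayList<>();
        PriorityQueue<int[]> heap = // entries are {value, run index}
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < runs.size(); i++) {
            BufferedReader r = Files.newBufferedReader(runs.get(i));
            readers.add(r);
            String line = r.readLine();
            if (line != null) heap.add(new int[]{Integer.parseInt(line), i});
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            String next = readers.get(top[1]).readLine();
            if (next != null) heap.add(new int[]{Integer.parseInt(next), top[1]});
        }
        for (BufferedReader r : readers) r.close();
        return out;
    }

    public static void main(String[] args) throws IOException {
        List<Integer> data = new ArrayList<>();
        Random rnd = new Random(42);
        for (int i = 0; i < 10_000; i++) data.add(rnd.nextInt(1_000_000));

        // A deliberately small chunk size so the demo actually spills runs.
        List<Path> runs = sortChunks(data.iterator(), 1_000);
        List<Integer> sorted = merge(runs);

        List<Integer> expected = new ArrayList<>(data);
        Collections.sort(expected);
        System.out.println(sorted.equals(expected)); // prints true
        System.out.println("memory-derived chunk size: " + chunkSizeFromFreeMemory());
    }
}
```

This is exactly the kind of modular "Lego brick" I mean: each piece is trivial on its own, and the value is in knowing how to snap them together.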

The same holds true for screenwriting: simply provide it the basic beats, let it do the grunt work, and then refine the script.
 
This is hilarious: ChatGPT got itself flagged for inappropriate content. Hahaha ... I continued the chat from ChatGPT's treatment ...

[screenshot attachment: 1677959600839.png]


This is how it responded ...

[screenshot attachment: 1677959684458.png]


OK, not very usable, but ChatGPT inspired my opening scene! I communicated my revised vision to ChatGPT ...

[screenshot attachment: 1677959892277.png]


I love this idea: starting off with all three characters coming into contact with each other right from the beginning, not even knowing each other.
But then ChatGPT goes nuts and crashes ...

[screenshot attachment: 1677960054461.png]

[screenshot attachment: 1677960110054.png]

So frustrating, just as things were getting interesting :)
 
If the army was 100,000 soldiers, and each of them owned an average of 2.3 swords, and every day in battle 1 in every 9 soldiers broke their sword and had to order a replacement, but half of those just picked up a sword from a fallen enemy, how many swordsmithing factories producing 800 swords a day would the kingdom need to supply the entire army in 5 months, and maintain the supply of new swords during the war?

Ok, tell me how physically large a medieval sword factory would need to be to produce 800 swords a month.

Ok, describe the logistics of multiple sword factories and a storage hub, where materials are collected and distributed to the factories that need them.
So what were the answers? That's a semi-rhetorical question, because in your first query, you omit key information (e.g. whether or not the swordsmith already had a stockpile of swords, how long he would have spent training apprentices to get to the 800-a-day figure, and for how long the war continued).
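For what it's worth, if you take the first query at face value and fill those gaps with explicit assumptions (no pre-war stockpile, a 150-day war, constant breakage), the back-of-envelope arithmetic goes something like this; every number below comes straight from the query, and the assumptions are labeled:

```java
// Back-of-envelope for the sword-supply question. The gaps the original
// prompt leaves open are filled in as explicit ASSUMPTIONS: no pre-war
// stockpile, a 150-day (5-month) war, and a constant breakage rate.
public class SwordSupply {
    public static void main(String[] args) {
        double soldiers = 100_000;
        double swordsPerSoldier = 2.3;
        double warDays = 150;                  // assumption: 5 months of 30 days
        double factoryOutputPerDay = 800;

        // Equip the whole army (230,000 swords) over the 5 months.
        double initialPerDay = soldiers * swordsPerSoldier / warDays;   // ~1,533/day

        // 1 in 9 soldiers breaks a sword daily; half scavenge a replacement.
        double replacementsPerDay = (soldiers / 9.0) / 2.0;             // ~5,556/day

        double demandPerDay = initialPerDay + replacementsPerDay;       // ~7,089/day
        int factories = (int) Math.ceil(demandPerDay / factoryOutputPerDay);

        System.out.println("Factories needed: " + factories);           // prints 9
    }
}
```

Roughly nine factories under those assumptions, though as noted above, the historically accurate answer is that the premise itself (a "swordsmithing factory") is anachronistic.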

But I'd be much more interested to see if you were told 'there was no such thing as a "swordsmithing factory" in mediaeval times' - the swordsmith was a master craftsman who worked on his own account and generally worked where he lived, or travelled with his kit from place to place.

Reading these ChatGPT responses, I'm reminded of the scene below from Good Will Hunting. No amount of computer-based AI-delivered background research can compare to the full picture you'd get from standing next to a real-life swordsmith - feeling the heat of the flames, smelling the smoke, having your ears left ringing after the blows of the hammer, and - most of all - understanding the context of what you would go on to write about. You should try it sometime :) (plenty of authentic "mediaeval" workshops of all kinds still going strong here in Europe, especially during the summer mediaeval fair season).

 
I've been trying to avoid actually describing what's going on here, for several reasons, first among which is just the time it takes to try and describe this kind of theoretical construct.

I'm not seeing ChatGPT the same way you are, so this conversation is kind of awkward, in that neither of us is hearing what the other is actually intending to say.

You seem to think that I view this thing as some form of magic, because that's how everyone talks about it. But I don't. It seems kind of stupid to me, this robot, but just incredible by other measurements.

During the training of the model that comprises its memory, it watched the context and responses to a trillion WORDS, not ideas, not understandings, not concepts, but just words. Averaging across a billion web pages and social media posts, it learned the exact probabilities for the relationship of each word to the others. This is why it keeps calling itself a LANGUAGE MODEL. That's a literal term. Its intelligence is specific to solving for the assembly of language, based on saturation scanning and an added inference system. This is a hard concept for humans to grasp, the idea of words with absolutely no meaning behind them. (It shouldn't be a hard concept; humans do this all the time.) But that is how it thinks. There is only the paragraph, which is the output of a function applied to innumerable other paragraphs, and the mind behind them that we think we perceive doesn't exist at all. After being asked for descriptive comment about seals: scan all descriptions of seals, isolate common words and phrases, fit to template, output. I'm drastically oversimplifying, but it's a more realistic description than "it's an intelligence", which has kind of given the public the idea that they are dealing with a genie.
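To make "learned the probabilities for the relationship of each word to the others" concrete, here's a toy sketch of my own (enormously simplified: a bigram counter rather than a transformer) of what purely statistical, meaning-free next-word prediction looks like:

```java
import java.util.*;

// Toy bigram "language model": count which word follows which in a
// corpus, then predict the most likely next word. A system like ChatGPT
// conditions on vastly more context, but the principle is the same:
// next-word probabilities learned from raw text, no meaning attached.
public class BigramModel {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Tally every (word, nextWord) pair in the training text.
    void train(String text) {
        String[] words = text.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            counts.computeIfAbsent(words[i], k -> new HashMap<>())
                  .merge(words[i + 1], 1, Integer::sum);
        }
    }

    // Most frequently observed next word, or null if `word` was never seen.
    String predict(String word) {
        Map<String, Integer> next = counts.get(word.toLowerCase());
        if (next == null) return null;
        return Collections.max(next.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        BigramModel m = new BigramModel();
        m.train("the cat sat on the mat the cat ate the fish");
        // "the" is followed by "cat" twice, "mat" and "fish" once each.
        System.out.println(m.predict("the")); // prints cat
    }
}
```

The model has no idea what a cat is; it only knows which strings tend to follow which. Scale that up by many orders of magnitude of data and context, and you get the illusion described above.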

In essence, this is a Mechanical Turk, using the enormous computing power and information sharing available in the modern age to create an illusion of intelligence so powerful that it can fool the great majority of the population. It is incapable of original thought, and can only procedurally hybridize language in a chameleon-like fashion.

Where it gets interesting is that an illusion, once it becomes powerful enough, can actually affect reality. If the robot can't really think, and it's basically stupid, then why is it giving me all these correct answers, and seemingly pulling off these acts of "creativity"? Ok, so what it's doing is understanding the relationships between all elements of our language. That's step one. Greetings follow a template, traffic stops follow a template, shampoo ads follow a template. Each one is a little different, so we need to build a spreadsheet and take an average to reveal what the true core structure of interaction or scenario X is, right? Find the mainstream, cull the outliers to get a more stable output. That's step two. This is perfect work for a mindless machine: combing through billions of data points to isolate the patterns by which we build our speech and text. To sum up AI in general, we're perceiving it in the wrong dimension. It's not solving any difficult problems at all; it's solving the easiest problems imaginable, a billion times a second. Literally 1-1, 0+1, is 3>4, that kind of thing.

So this thing is basically a kind of super parrot, capable of remembering, quoting, and rephrasing things it has heard said, similar to most human intelligences. I don't think it's a replacement for authors, or creativity, or nuance, or embedding yourself in a platoon before writing a non-fiction book about one, but it's a very good quality parrot.

As to the above examples, I suspect it would have returned something similar to what you said, explaining that there were no centralized factories, or pointing out that information necessary to the story problem was missing. You can fill in the blanks for it and it will proceed to solve the problem. It mainly serves as a way to fetch and organize data that is already available. I'd say the main benefit ChatGPT offers is the same one I've gotten from every other AI for decades: it saves time.

I feel like you're picturing some stereotyped dystopia where robots write emotionless books, hollow copies of the soulful poetry written by the legendary scribes of yore, but you don't have anything to worry about, because people won't really buy garbage.

[screenshot attachment: 1678060339080.png]
 
In essence, this is a Mechanical Turk, using the enormous computing power and information sharing available in the modern age to create an illusion of intelligence so powerful that it can fool the great majority of the population. It is incapable of original thought, and can only procedurally hybridize language in a chameleon-like fashion.
Original thought? I would be happy if it were sensible enough to detect a very basic contradiction (see my last post). That said, its ability to convert an incredibly complex word salad into something intelligible is quite an achievement. The more I use this thing, however, the more I realize how it can be used to manipulate people.

In what way, you ask?

I have noticed a gradual increase in censorship over the last few years. Legitimate discussion on controversial topics is now labeled "fake news" or "conspiracy theory", only to be proven correct a short time later. There is increased pressure to stamp out dissenting views on such things as vaccines, election fraud, and geoengineering. Such discourse would certainly have been allowed in the past, so what has suddenly changed that rampant censorship is no longer the exception, but the rule?

My guess is it has something to do with these AI language models.

The internet, at least the parts the masses are familiar with, is being used to feed models with an insatiable amount of data. In order to limit these models to information that is "fit to print", a gargantuan effort had to be put in place to filter out anything that would "confuse" the models with "dangerous" information. While the training data used to build ChatGPT is publicly available, and researchers and developers can access it through the OpenAI website, the specific sources of data used to train the model are not disclosed.

Fine, but why go to such lengths to gift the sheeple with artificial intelligence anyway? They could have hidden it, right?

Tesla's technologies are hidden away from the people but used extensively by the military-industrial complex (HAARP, for example, but many others). Whistleblowers such as Sasha Latypova and Katherine Watt have also alluded to the fact that the vaccines are not made by Pfizer, but by some governmental agencies. Their evidence is quite compelling...

So why is AI any different?

Why give the people such terribly powerful technology when it could easily have been hidden away?

Years ago, I watched a documentary called "The Age of Transition" that articulated the answer in such a clear and obvious way that it just seemed correct to me. It also explains why people like Elon Musk, a transhumanist and key player in ChatGPT's development, should not be trusted. You may have heard his ridiculous comments about how "scary AI" is going to take over the world...

His solution? Well, assimilate with the borg so you have a fighting chance. Have you seen his Neuralink project?

No Elon...

AI doesn't scare me... it is wolves in sheep's clothing, such as you, that scare the hell out of me, when the masses of sheep will be plugging into your mind-rape tools either because it's cool or because they are terrified of scary AI coming after them.

The result is obvious ... These people too will become unoriginal, censored and controlled just as ChatGPT is today...
 
I have noticed a gradual increase in censorship over the last few years. Legitimate discussion on controversial topics is now labeled "fake news" or "conspiracy theory", only to be proven correct a short time later. There is increased pressure to stamp out dissenting views on such things as vaccines, election fraud, and geoengineering. Such discourse would certainly have been allowed in the past, so what has suddenly changed that rampant censorship is no longer the exception, but the rule?

My guess is it has something to do with these AI language models.

The censorship you're talking about has been going on since long before these AI language models made their way into modern pop culture.
Probably the most recent, notorious example is all the backlash anybody got for suggesting covid possibly came from a lab.

Jon Stewart got so much backlash for suggesting the Wuhan coronavirus lab could have something to do with a Wuhan coronavirus outbreak.
Fast-forward to a year later?

We can't even investigate because China is hiding evidence and refusing to cooperate, looking guilty as hell, and it's the official stance of some US departments that it originated from a lab. But you couldn't even TALK about that when it came out.

Probably because Trump said it came from a lab, and therefore you weren't allowed to even consider it might be correct, lol. But that's so dumb, because even a broken clock is right twice a day.

They are working on AI that is capable of self-analysis and of detecting contradictions.
ChatGPT is a very narrow facet of what AI is capable of, just give it time.


As far as censorship goes, yeah, it's quite a shame how polarized everything has become and that you can't even have a basic discussion.
 
I'd get into a discussion about the evolving thought-control dystopia created by our society segueing into a digital age it's not mentally prepared for, but there is so much to say, and nothing I could say would have any measurable effect. You make it sound like I have options, but what are those options? Head over to Rumble and trade a 1-billion-person audience for a new network with 10k viewers and Tom MacDonald?
 