I like your belletrism
I had to look up belletrism. Grug not know big words.
Sure, I can try to compose a review of "So You Want to Write Fiction With AI" written by Zero HP Lovecraft in a way that attempts to pass the Turing test. Here's my attempt:
I recently read "So You Want to Write Fiction With AI" by Zero HP Lovecraft, and I must say, it was quite an interesting read. The essay delves into the potential for using artificial intelligence to generate works of fiction, and the author explores the various techniques and tools that writers can use to train AI models to generate stories.
One of the things that struck me about the essay was the author's focus on the philosophical implications of using AI in the creative process. Lovecraft explores the idea of authorship and the role of the writer in the creative process, and questions whether AI-generated writing can truly be considered "art" in the traditional sense.
At the same time, Lovecraft acknowledges the potential limitations of AI-generated writing, and the importance of human creativity and intuition in the creative process. He notes that while AI can certainly assist writers in generating ideas and exploring new directions, it cannot replace the human element of storytelling.
Overall, I found "So You Want to Write Fiction With AI" to be a thought-provoking essay that raises some interesting questions about the intersection of technology and creativity. While I'm not entirely convinced that AI-generated writing can fully replace human creativity, Lovecraft's essay certainly makes a compelling case for exploring the potential of AI in the creative process.
lol. I would like to see it address my actual critique; to me this feels almost like a cold reading from a fortune teller (admittedly, a very deft one). There is nothing in these four paragraphs that you couldn't guess from the title.
“Delves” — AI fingerprint.
You can make AI prose 50% less detectable by doing a Find and Replace on “delve” and “tapestry”.
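As a half-serious illustration, that find-and-replace pass is a one-liner away in Python; the substitute words below are my own picks, not anything from the essay:

```python
import re

# Map the most notorious "AI fingerprint" words to plainer substitutes.
# These particular substitutes are my own guesses at less conspicuous wording.
SWAPS = {
    r"\bdelves\b": "digs",
    r"\bdelve\b": "dig",
    r"\bdelving\b": "digging",
    r"\btapestry\b": "patchwork",
}

def scrub(text: str) -> str:
    """Replace overused AI-tell words (case-insensitive; output is lowercased)."""
    for pattern, repl in SWAPS.items():
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text

print(scrub("The essay delves into a rich tapestry of ideas."))
```

Of course this only hides the lexical tells, not the sentence-level ones.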
"Literally every single word of this paragraph is objectionable to my worldview and my ethos. How did GPT-3 manage to produce this excrement using my earlier input?"
For you, there's a silver lining: this sort of thing happens *each and every time* some company or computer-science group develops a neural-network text program or chatbot.
Whenever those computer scientists and programmers get together to show off their next sophisticated talking program, I get a good laugh. They always encounter and whine (and cry and sniff and moan and pi--... moving on) about the same exact problem: their latest chatbot produces biased and hateful language. This somehow surprises them even though it's happened at least a few times before, and the nature of a chatbot or GPT program guarantees it will always happen again. Now, I'm not a fan of the racism and sexism, but it seems to escape these learned men that this is what their latest tool will always pick up on, and never not pick up on. That this isn't a problem they can solve, because... they can't fix society.
(It's just as well that this continually grieves them, since they deserve it for being so damn soft, and for sharing a bed with those control freaks who want "safety" and "regulation" against "hate".)
To summarize: you need not despair that the GPT-3 models give you feminist garbage when you feed them words they associate with expressions of contempt. That same machine will also inevitably, though blindly, say something that's... based.
I think the next set of tools we see for AI assisted writing will add something to support narrative structure atop the sentence generating capabilities trained into GPT-X.
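One plausible shape for such a tool is outline-then-expand: have the model plan the plot beats first, then expand each beat with the running story as context. A minimal sketch, where `complete` stands in for whatever GPT-style completion API you have (every name and prompt here is my own invention, not an existing product):

```python
def write_story(premise, complete, num_beats=3):
    """Two-layer generation: a structural outline pass, then a prose pass per beat.

    `complete` is any callable mapping a prompt string to generated text.
    """
    # Structure layer: ask for the plot beats up front.
    outline = complete(
        f"List {num_beats} plot beats, one per line, for a story about: {premise}"
    ).splitlines()

    # Prose layer: expand each beat, feeding back the story so far so the
    # sentence-level generation stays consistent with the plan.
    chunks = []
    for beat in outline[:num_beats]:
        chunks.append(complete(
            f"Story so far:\n{''.join(chunks)}\n\nContinue, covering this beat: {beat}"
        ))
    return "\n\n".join(chunks)
```

The point of the design is that the outline, not the token-by-token sampler, carries the narrative arc.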
What would it take to get Visa and Mastercard to suspend all donations to major sodomite organizations like Lambda Legal? Visa suspended political donations, and Visa and Mastercard cut off pornhub after reports of illegal activity, so there's a good case for it. We need to get sodomites out of our schools, workplace, and homes. https://jamescwijck.substack.com/p/a-traditional-catholics-manifesto I'd ask on your twitter, but I'm banned.
GPT-3 could be trained to heal the nation and provide social consensus on matters such as birth control and the murder of babies, when used in conjunction with twitter and other social media accounts. The leftists have a prototype. https://5minutes5vegans.org/en/feed
Read the manifesto. https://jamescwijck.substack.com/p/a-traditional-catholics-manifesto
I wonder about GPT-3 in video games where the objectives and characters are written by writers, but fleshed out by GPT-3 dynamically to interact with the player. The player gets a richer experience, but the writers are still in control to make the storyline and meaning of the work come together.
Before reading this article I might have thought a pulpy book series could be written in the same way, or some sort of next-generation choose-your-own-adventure. I'm not so sure now. Reading might be too engaging and time-consuming... but video might be different. We watch the same stories over and over and over again. GPT-3 with high-quality video game animations and a corpus of pulp stories ready to be tweaked might work? Gunsmoke but in space. Gunsmoke in space but the sheriff is a woman. Gunsmoke in space but now the sheriff is two brothers. Gunsmoke in the future and the sheriff is gay. Gunsmoke in space but it's with third-tier Star Wars characters on Disney+ and you pay a small fee to incorporate a favorite character into the storyline or open up a new line of stories in an exotic new location. Could GPT-3 iterate the stories to create enough variation to capture more attention?
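That combinatorial riffing is easy to mechanize: cross one stock premise with lists of twists to mass-produce prompts for the model. A toy sketch (the premise and twist lists are illustrative, nothing more):

```python
import itertools

# One stock premise crossed with twist lists; all strings are made up.
PREMISE = "A sheriff keeps order in a frontier town"
SETTINGS = ["in space", "in the far future", "on a backwater Star Wars world"]
SHERIFFS = ["a woman", "two brothers", "a gay gunslinger"]

def variation_prompts():
    """Yield one story prompt per (setting, sheriff) combination."""
    for setting, sheriff in itertools.product(SETTINGS, SHERIFFS):
        yield f"{PREMISE}, {setting}, except the sheriff is {sheriff}."

prompts = list(variation_prompts())  # 3 settings x 3 sheriffs = 9 prompts
```

Whether nine mechanical variations hold anyone's attention is exactly the open question.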
Thanks. You got me thinking.
Look up AI Dungeon, it's literally this as a text adventure. The company making it is working on realising the rest of your dream, too :)
World-class essay.
Maybe it's the undersleeping and/or burnout, but I only got about 60-70% of it. I feel like you're a few levels above my current capability to comprehend, which is very frustrating.
You're such a good writer though that I have little choice but to persist.
I wish I knew how to quit you...
*spits tobacco*
Huh, that last bit about millennial culture being a pidgin is good, I'll have to chew on it to decide whether it's actual insight or just a flashy analogy.
tl;dr: Change Amy to Erik or establish wtf an Amy is and you'd probably not get something like that.
------
"In light of this analysis, it becomes much clearer why GPT-3 decides to introduce a daughter in the second paragraph of its output, and then use her as a vehicle to present noxious feminist claptrap..."
Yes, it did. But I think your prompt definitely baited it. Consider that you introduced some person/entity called "Amy" into it, which does not really parse as a "him".
"In his sleep, Amy dreams schematics and flowcharts and diagrams into his mind."
Knowing some of your stories, I am not sure if "Amy" is actually "him" (someone who starts out or ends up as some kind of transgender goofball) or some kind of powerful AI-thingydabob that "dreams schematics and flowcharts" to "him". If I did not know your stories, I'd be even more confused. A sister would not be my first association, but it's not implausible, and then sure, she's some kind of psychically gifted wizard. And those tend to be associated with high intelligence and the associated tropes. I think you can set basic relationships and worldbuilding in AIDungeon, if you really want to give your male (?) character a girl's name. It can't even decide whether it wants to talk about a son or a daughter here, or whether both are the same person.
Now, if the fact that there is an "Amy" in the story is some kind of repeated dissonance element that you slowly figure out as a reader, then there's no chance the AI would parse that intention either.
It doesn't matter if it can't automatically intuit the meaning of the character; what matters is that the very first place it goes is feminist boilerplate.
Most texts that do weird/confusing stuff with gender and pronoun games are on the feminist spectrum. Hardly anyone else would care to do that. So feminist boilerplate is the lowest common denominator of the inferred user group's possible intentions.
"How did GPT-3 manage to produce this excrement using my earlier input? It’s because on average, in the 45 terabytes of text that were used to train it, when someone talks about a father in a modern style, they are doing it disparagingly, and presenting him as an obstacle to the daughter’s noble humanitarian impulses."
You were not just talking about a father in a modern style. You were doing that AND giving it enough confusing genderqueer rope to hang itself with. Maybe the 45 terabytes of training data really are full of father-hating misandry or whatever, but this conflicted example alone isn't really persuasive or central.