AI-generated South Park episode may be a hoax


There are doubts about the authenticity of the project. Indications include the non-existent address “500 Baudrillard Drive, San Francisco, CA 94127” and the team listed by Fable Simulation, whose profile pictures are AI-generated.

Also, the name of the alleged CEO, “Julian B. Adler”, can be read as an anagram of Baudrillard. Jean Baudrillard was a French sociologist, philosopher, cultural theorist, political commentator, and photographer known for his analyses of hyperreality and simulation.

The other alleged team members also appear to reference well-known historical figures. Although Fable Studio lists a phone number on its website, the company cannot be reached at that number; calls go straight to voicemail.

Fable Studio exists and was launched in 2018 when Facebook shut down its VR film studio. The people mentioned in the article also exist, including co-founder Edward Saatchi, who recently gave an interview to VentureBeat about the South Park episode.

It’s Show-1-Time: AI creates a new South Park episode

The creative capabilities of Stable Diffusion or GPT-4 are well known. However, they lack the consistency for complex stories. SHOW-1 aims to change that.

The AI company Fable Studio has combined several models into a new system called SHOW-1, which is capable of generating multiple coherent episodes of a series.

They demonstrate that their concept works with a 22-minute episode of “South Park” that, fittingly, is about the impact of AI on the entertainment industry.

To get started, the model only needs a title, synopsis, and main events

Creating a complete South Park episode is a complex process. The storytelling system is seeded with a high-level idea, usually in the form of a title, synopsis, and major events that should take place within a simulated week (about three hours of play). Generating a single scene can take a “significant amount of time” – up to a minute.


  • The system automatically generates up to 14 scenes based on simulation data.
  • A showrunner system organizes the cast of characters and shapes the plot according to a predetermined pattern.
  • Each scene is assigned a plot letter (A, B, or C) that is used to switch between different groups of characters.
  • Each scene defines location, characters, and dialogue.
  • After the initial setup of the staging and AI camera system, the scene plays according to the plot pattern.
  • The characters’ voices are pre-trained, and voice clips are generated in real time for each new line.
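The showrunner flow described above can be sketched roughly as follows. This is an illustrative reconstruction, not Fable Studio’s actual code: all names (`Scene`, `make_episode`, `PLOT_LETTERS`, `MAX_SCENES`) are assumptions, and an LLM would fill in the locations, cast, and dialogue that are stubbed here.

```python
from dataclasses import dataclass, field

PLOT_LETTERS = ["A", "B", "C"]  # plot lines the showrunner cycles between
MAX_SCENES = 14                 # the system generates up to 14 scenes

@dataclass
class Scene:
    plot: str                   # which plot line (A/B/C) this scene advances
    location: str
    characters: list
    dialogue: list = field(default_factory=list)

def make_episode(title: str, synopsis: str, major_events: list) -> list:
    """Lay out scenes by cycling plot lines, one scene per major event."""
    scenes = []
    for i, event in enumerate(major_events[:MAX_SCENES]):
        scenes.append(Scene(
            plot=PLOT_LETTERS[i % len(PLOT_LETTERS)],
            location=f"location for: {event}",       # placeholder; LLM-generated in practice
            characters=["<cast assigned by showrunner>"],
        ))
    return scenes

episode = make_episode(
    "AI Comes to South Park",
    "The town reacts to AI taking over entertainment.",
    ["AI writes the school play", "parents protest", "resolution at town hall"],
)
```

The alternating plot letters are what let the system cut between character groups, mirroring the A/B/C structure of a typical sitcom script.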
Picture: Fable Studio

Fable Studio’s work builds on another research paper, “Generative Agents,” published in April by Stanford and Google scientists. In it, they simulated a virtual town and observed what presets the so-called agents – the inhabitants – needed in order to follow realistic daily routines and interact with one another.
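In the spirit of that paper, an agent loop can be reduced to a minimal sketch: each agent follows a preset routine and logs its activities hour by hour. This is a toy illustration under invented names (`Agent`, `ROUTINE`), not the architecture from the paper, which additionally gives agents memory, reflection, and planning.

```python
ROUTINE = ["wake up", "work", "socialize", "sleep"]  # preset daily defaults

class Agent:
    """A toy inhabitant that steps through a fixed daily routine."""

    def __init__(self, name: str):
        self.name = name
        self.log = []

    def step(self, hour: int) -> str:
        activity = ROUTINE[hour % len(ROUTINE)]
        self.log.append(activity)
        return activity

# Two agents live through one simulated "day" of four hours.
agents = [Agent("resident_1"), Agent("resident_2")]
for hour in range(4):
    for agent in agents:
        agent.step(hour)
```

The research question was essentially how few such defaults are needed before LLM-driven agents produce believable behavior on their own.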

GPT-4, custom diffusion models, and cloned voices

Among other things, SHOW-1 uses OpenAI’s GPT-4 to influence the agents in the simulation and to generate the scenes for the South Park episodes.

Because transcripts of most South Park episodes are part of GPT-4’s training data set, it already has a good understanding of the show’s character personalities, speaking styles, and general humor, according to Fable Studio. This dramatic fingerprint is important for the consistency of a show, the team says.

Prompt chaining – linking multiple prompts so that the output of one becomes the input of the next – provides another foundation. DeepMind’s Dramatron, which writes scripts for film and television, also uses this technique.
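Prompt chaining can be sketched in a few lines. The `llm` function below is a stub standing in for a real model call (such as GPT-4 via an API); the chain itself – title to synopsis to outline to dialogue – mirrors the hierarchical decomposition the article describes, though the exact prompts are assumptions.

```python
def llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"<completion for: {prompt}>"

def chain(title: str) -> str:
    """Each step feeds the previous step's output into the next prompt."""
    synopsis = llm(f"Write a synopsis for an episode titled '{title}'.")
    outline = llm(f"Break this synopsis into scenes: {synopsis}")
    script = llm(f"Write dialogue for this outline: {outline}")
    return script
```

Decomposing the task this way keeps each individual prompt well within the model’s context and competence, which is what gives the final script its consistency.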

In the case of SHOW-1, GPT-4 also acts as its own discriminator, evaluating its own answers – similar to the concept behind Auto-GPT. But generating a story is a “highly discontinuous task” that requires some “eureka” thinking, according to the team.
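The self-discrimination idea reduces to a best-of-n pattern: sample several candidates, have the same model score each, and keep the winner. The sketch below uses stubs (`generate`, `score`, `best_of_n` are all invented names); in practice both functions would be GPT-4 calls, with the scoring prompt asking the model to judge its own drafts.

```python
def generate(prompt: str, n: int = 3) -> list:
    """Stub: produce n candidate drafts of varying detail."""
    return [prompt + " ..." * i for i in range(n)]

def score(candidate: str) -> float:
    """Stub scorer; a real one would prompt the model to rate the draft."""
    return float(len(candidate))

def best_of_n(prompt: str, n: int = 3) -> str:
    """Generate n candidates and keep the one the discriminator rates highest."""
    candidates = generate(prompt, n)
    return max(candidates, key=score)
```

The "discontinuous task" caveat is exactly where this pattern struggles: scoring can rank drafts, but it cannot force the creative leap a good story beat requires.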

