Morris Chapdelaine always has a daunting stack of scripts on his desk. As an indie producer, he reads about three a week and farms out the rest to interns and film students, who send back detailed coverage reports. But he struggles to get through them all.
At a film festival, some friends suggested he investigate artificial intelligence to help with his workload. “I was a little arm’s length with anything AI-related,” he says. “Some of it scares me.”
But Chapdelaine did some research and eventually signed up for Greenlight Coverage, which uses large language models to summarize scripts and grade elements like plot, character arcs, pacing and dialogue on a scale of 1 to 10. It even gives a verdict: Pass, consider or recommend.
He found the AI's feedback more honest than any human's, including his own, and it doubled his reading pace.
“It’s such a time saver,” he says. “And it’s getting better and better.”
If AI does anything well, it’s summarizing written material. So of all the jobs in the development pipeline, the most vulnerable may be the very first: the script reader. The industry’s initial gatekeeper could someday be a software program.
In fact, machines are already playing a role. At WME, agents and assistants use ScriptSense, another AI platform, to sort through submissions and keep track of clients’ work. Aspiring screenwriters are also turning to AI tools like ScreenplayIQ and Greenlight to provide feedback (sometimes too flattering) on their drafts.
At the major studios, human story analysts still dig through piles of submissions much as they’ve done for 100 years. But as AI creeps into everyone’s workflow, they worry about their jobs.
Jason Hallock, a story analyst at Paramount, recalls his first unsettling experiments with ChatGPT, the bot that launched the current AI frenzy. “How quickly am I going to be replaced?” he wondered. “Is it six weeks? Or six months?”
Working with the Editors Guild, which represents about 100 unionized story analysts, he decided to find out. Earlier this year, he set up an experiment. He would ask AI tools to cover some scripts, then stack up their reports against coverage generated by humans. It was a test to see if he and his colleagues could compete.
Since the dawn of Hollywood, story analysts have been its threshing machines, separating the wheat from the chaff. AI proponents argue that algorithms can make that process more efficient, more objective and thus more fair, allowing new voices to be heard instead of relying on readers who bring their own subjective tastes to their job.
But something could also be lost. A human reader is the first to sense whether a script has potential, whether the characters are engaging and whether the story sweeps you up and has something new to say. Can AI do that?
“The most important thing I’m looking for is ‘Do I care?’” says Holly Sklar, a longtime story analyst at Warner Bros. “An LLM can’t care.”
Yet AI seems to be coming regardless. So rather than ignore it, some are trying to understand it.
“Nobody wants to lose their job,” says Alegre Rodriquez, an Editors Guild analyst who participated in Hallock’s study. “We’re not sticking our head in the ground pretending it doesn’t exist, and we’re not cowering waiting for them to give us a pink slip. I think people are dusting themselves off and saying, ‘How do I stay in this game?’”
Kartik Hosanagar is a Wharton business professor and an internet marketing entrepreneur. He's also a film enthusiast with a couple of scripts in his drawer: a drama about a startup and a thriller about a murdered Indian diplomat. As a Hollywood outsider, he struggled to sell his screenplays. That led him to develop an algorithm meant to level the playing field by assessing talent objectively. That venture didn't pan out, but the next one did: Hosanagar developed ScriptSense, now one of the buzzier AI script platforms. The pitch: "Evaluate 100x the screenplays."
“There’s a huge unread pile,” Hosanagar says. “This is a great way to clear the pile and figure out where to focus your attention.”
In March, Hosanagar sold his company to Cinelytic, a service provider that is integrating ScriptSense into a suite of management tools. “It’s about saving time,” says Tobias Queisser, the company’s CEO. “Opportunities get left aside because there’s not enough capacity to look at all the stuff. Unknown writers never get a chance because their script is not submitted by a top agency.”
ScriptSense provides summaries, character breakdowns, comps and casting suggestions. The tone is relatively neutral. It doesn’t offer praise or criticism.
“Our design philosophy was that we’re not going to make the decision for you,” Hosanagar says. “You will never see a statement where it says ‘Amazing!’ or ‘Reject it.’”
The platforms geared toward screenwriters have a different philosophy. Jack Zhang, the founder of Greenlight, believes in the power of AI to make critical judgments. “What AI really does well is being the average of things,” he says. “In terms of feedback, you are trying to reach a wide audience. You want the average person to like your work. That’s where AI really shines.”
ScreenplayIQ offers qualitative assessments but not numerical scores. The program summarizes plots and evaluates characters’ “growth” and “depth,” helping writers see their work from an outside perspective. “Our objective is to support writers where they feel they’re struggling and want support,” says developer Guy Goldstein. “It’s holding a mirror up to your script. You wrote it with an intention; it’s seeing if that intention came through.”
To test the AI platforms, Hallock needed scripts. Screenwriters can be sensitive about feeding their material into AI models, as they assume it will be used for training. But a close friend was willing to provide some old screenplays for the cause. One was an unproduced script for the Syfy channel about a killer insect. Another was pitched as “‘Heart of Darkness’ in outer space.” The author didn’t mind if AI trained on that.
“He said he hoped it would make the AI dumber,” Hallock says.
He gathered a few others and gave them all to human analysts. He then compared their coverage with the loglines, synopses and notes produced by six AI platforms. The results were both encouraging and unnerving.
The AI-generated loglines were indistinguishable from the human ones — maybe even a little better. The differences began to show with the AI-generated synopses. “They tend to have 11th-grade-essay quality,” Hallock says. “It uses the same kinds of constructions, like ‘Our story begins with…’”
The more complicated the script, the more likely AI was to get things wrong — to misattribute the action of one character to another and to hallucinate plot points.
The humans won hands down when it came to notes, which require actual analysis rather than just distillation. The AI programs were “an almost total fail across the board,” Hallock says.
The "'Heart of Darkness' in space" script got a "recommend," even though it had made the rounds in Hollywood 20 years ago and never sold. That was a consistent issue. Instead of offering unvarnished criticism, Rodriquez says, the models were "biased towards the writer."
“They would definitely tell you everything that was positive and working well,” she says. “But when you had to get down to problems, they couldn’t necessarily identify them.”
In some cases, AI programs weren’t evaluating; they were cheerleading.
“It’s got that puppy-dog quality,” Hallock says. “It wants to please you.”
One romantic comedy was praised by AI as “a compelling, well-crafted coming-of-age story balancing humor, heartbreak, and bittersweet realities of navigating one’s thirties. Strong character development makes this a standout work.”
The human reader, meanwhile, was underwhelmed: “Familiar template of female friends in Las Vegas. Potential as light streaming content, especially with Sydney Sweeney. Bawdy language, but jokes don’t land hard; lacks bite of ‘Girls Trip’ or ‘Bridesmaids.’”
Zhang defends Greenlight’s taste, saying that only 5% of the scripts submitted to the platform get a “recommend.” “That’s very few,” he says. “I wouldn’t say there’s huge inflation.”
Hosanagar says ScriptSense doesn’t make recommendations in part because AI can be too sycophantic. “Can AI get to a point where it can be truly critical?” he asks. “I think it can get there. We’re not there yet.”
Many of the analysts were heartened by the study, Rodriquez says. AI might be faster, but it can’t pluck something original and brilliant out of the pile.
“It’s still going to require a human being to look at those reports and review material,” she says. “It doesn’t save as much time as they think it does.”
And those who over-rely on it might miss out on something great. Still, the study was not entirely reassuring. It concluded, "Studios may be tempted to forgo quality and accuracy in favor of cheap and fast."
The makers of the AI models say those fears are misplaced. “It’s not about taking away jobs,” Queisser says. “We see it as an enhancement for humans.”
Chris Giliberti, CEO of Avail, says story analysts are already using his AI platform to do routine tasks, which frees up time to undertake more challenging analytical work. “It’s unstoppable,” he says. “The cat’s out of the bag. This is making people’s jobs and lives easier.”
Sklar, however, worries about where this is headed. Today’s executives value human input. But a younger generation may be coming up that is more comfortable with AI summaries. She fears that some in Hollywood — “the cost-slashing folks who don’t understand all of what we do” — will come to view her role as superfluous.
“That’s what keeps me up at night,” she says.