October 20, 2023

Cinema Ex Machina


The rapid development of artificial intelligence looks set to transform every aspect of film production, from screenwriting and visual effects to the ability to cast dead actors in new roles. But each of these advances brings with it a host of ethical and practical challenges


DOMINIC LEES

Artificial intelligence has captured
our imaginations in film ever since
HAL 9000 appeared in 2001: A
Space Odyssey (1968). Stanley Kubrick
continued his exploration of the
technology when he optioned a Brian
Aldiss short story that was eventually
developed into the film A.I. Artificial
Intelligence (2001), directed by Steven
Spielberg after Kubrick’s death.
For these filmmakers, AI meant an
imaginative exploration of a sci-fi
concept, a robot boy who could feel love.
But today’s development of ‘generative
AI’, the term for systems of code that
can learn autonomously and create novel
texts, images and videos, is impacting
on filmmaking itself. This approach to
computer coding deliberately mimics the
brain, building artificial neural networks
that unleash the potential of algorithms
to produce ‘synthetic media’ in a way that
is remarkably similar to human creativity.
It is a development both inspiring
and contentious: already arguments
have ignited about the rights and
wrongs of using algorithms to replace
human imagination and labour in
scriptwriting, performance and the
creation of moving images. This article
looks at how AI is opening up new
possibilities in film production processes
and explores the remarkable and highly
challenging ideas that confront film
culture in the artificial intelligence era.

SCREENWRITING

In 2023, the public accessibility of
ChatGPT has raised awareness of AI’s
potential to generate new storylines
and screenplays from simple prompts.
Automated production is a pathway
to increased profits in any industry,
and screenwriters are acutely aware
of their vulnerability in the face of AI.
When the Writers Guild of America
began its strike on 2 May, control of the
status of AI-generated scripts was a key
demand. The union does not want to
ban the use of ChatGPT, but insists
that any text created by AI cannot be
classed as either “literary material” or
“source material”. So if a studio exec
asks a screenwriter to develop a script
from an idea generated by ChatGPT,
the resulting work must be classified as
the writer’s own original screenplay.
But how good is a script written
by ChatGPT? The weakness in this
technology is that it is only designed
to generate plausible text – the chatbot’s
designers at OpenAI emphasise that
it understands nothing. This means it
is very useful for writing a genre movie
based around clear formulae, but has
no autonomous capacity for originality.
My experience of the system illustrates
how its lack of understanding leads to
pitiful outputs. I asked it to write a script
sequence in which a desperate young
woman knocks on my door, on the run
after taking part in a botched robbery.
ChatGPT wrote the part of the criminal
quite well, but my character was a mess:
at first, I was eager to hide her from the
approaching police, but half a page later
I was moralising to her and saying she
should hand herself in. This algorithm
has no concept of character consistency.
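
For readers who want to reproduce this kind of experiment, a minimal sketch using OpenAI's Python client is shown below; the model name and the prompt wording are my illustrative assumptions, not a record of the exact session described above.

```python
# A minimal sketch of prompting a chat model for a script scene.
# Model name and prompt are illustrative assumptions, not the
# author's exact experiment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a short screenplay scene. A desperate young woman, on the "
    "run after a botched robbery, knocks on a stranger's door. Keep "
    "each character's motivation consistent throughout the scene."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Even with an explicit instruction about consistency, nothing in the system guarantees it: the model is still only predicting plausible next words.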
Specialist algorithms can be ‘trained’
to learn the style and characteristics of a
data set – for instance, the screenplays of
a great writer. The implication is that we
could input all the work of Billy Wilder,
and then write a prompt to the AI to
generate a new script, imitating exactly
the style of the late, great screenwriter-director.
The result would be a little like
discovering the pages of an unproduced
masterwork in his attic – ‘Some Like
It Hotter’? This experiment would
require the consent of the Wilder
estate, but the potential for abuse of
the AI system is obvious and there is a
dearth of clear regulations governing
it. Dr Mathilde Pavis, an expert in
intellectual property law and ethics,
says that judges in the US and the UK
will soon be asked to respond to a key
question: “At what point does using AI
models to imitate the look, feel or style
of source materials become infringement
of copyright?” She emphasises that the
rights of writers are relatively secure,
but actors are often unprotected from
having their image used to train AI
systems, especially after their deaths.
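
To make the idea of 'training' concrete, the sketch below fine-tunes a small open-source language model on a plain-text file of screenplays using the Hugging Face libraries. The file name, base model and hyperparameters are assumptions for illustration; no production system is being described.

```python
# Sketch: fine-tuning a small language model on a corpus of
# screenplays so that it imitates their style. File name, base
# model and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Assumes the screenplays have been gathered into one text file.
dataset = load_dataset("text", data_files={"train": "screenplays.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-model", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```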

DEEPFAKES AND THE CAMERA

The ‘deepfake’, a digital video in which
faces have been swapped, is a form of
AI that has been part of screen culture
for the past six years. Originating with
very low-resolution clips created by
amateur users, primarily to swap faces
in pornographic videos, the technology
has advanced rapidly in the last year,
progressing from the world of online and
mobile-phone content into mainstream
television and now the film industry.
In March 2021, I reported online for
Sight and Sound on the extraordinary
achievements of a Belgian deepfaker,
Chris Ume, whose utterly convincing
deepfake videos of Tom Cruise became
a TikTok sensation. The following year,
Ume deployed his skills on the television
show America’s Got Talent. On to the shiny
stage floor was wheeled a studio pedestal
camera, which was set up in front of a singer
for the live broadcast. When he broke into
song, the face of the show’s judge, Simon
Cowell, was deepfaked simultaneously
on to the singer’s image on the huge
screen above. It was a jaw-dropping
moment of magic made possible by
television’s special quality of immediacy.
Ume had integrated the studio camera
with a computer that could create the
deepfakes. The unique advance was to
achieve deepfakes instantly, in real time:
until this moment, the production of
high-resolution deepfakes had required
long periods of time to train the AI,
inputting thousands of source images.
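
Conceptually, the pipeline is a loop that grabs each frame from the camera, runs it through a pre-trained face-swap model and displays the result. A sketch of that loop, using the open-source OpenCV library, is below; swap_faces is a hypothetical placeholder, since Metaphysic's actual system is proprietary.

```python
# Conceptual sketch of a real-time face-swap loop. swap_faces is a
# hypothetical stand-in for a pre-trained deepfake model; this is
# not Metaphysic's system.
import cv2

def swap_faces(frame):
    # Hypothetical: a real pipeline would detect the face, align it,
    # run the swap network and blend the result back into the frame.
    return frame

capture = cv2.VideoCapture(0)  # the live camera feed
while True:
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow("live output", swap_faces(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break
capture.release()
cv2.destroyAllWindows()
```

The hard part, as described above, is making the middle step run fast enough to keep up with a live broadcast.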
Ume’s company is aptly named
Metaphysic and, together with
co-founder Tom Graham, he is now
bringing this technology to Hollywood.
Their success on America’s Got Talent
led to venture capital investment and
an agreement this year with one of
Hollywood’s largest talent agencies,
Creative Artists Agency. CAA is
convinced “artificial intelligence will
have a transformative impact on content
creation and intellectual property”.
The first outing for Metaphysic’s
AI camera technology is with director
Robert Zemeckis, who is in production
on Here, starring Tom Hanks and Robin
Wright. The project, developed from
a graphic novel by Richard McGuire,
is set in a single room across a huge
range of time periods. Zemeckis is using
Metaphysic’s technology to age and
de-age his stars in real time while they
perform in the studio. The advantage
for the director is that he can see the
digital transformation immediately.
Earlier productions that used VFX to
age or de-age actors, such as Martin
Scorsese’s The Irishman (2019), required
weeks of work in post-production
before the results could be seen.
This use of AI challenges our
understanding of the film camera.
Throughout the history of cinema, we
have assumed a link between recorded
image and reality: what we see on
screen is what happened in front of the
movie camera during filming. This has
been termed the ‘indexicality’ of the
image in both photography and film
studies. Cinematographers may have
nuanced the image using filters, lenses
and matte paintings, but live-action
filming conveys to the audience what
was there, linking us to the actors
who performed before the camera.
Metaphysic now gives us a film camera
that represents the actors as they are
not – it is a technology that transforms
reality instead of recording it. No
longer are digital visual effects applied
in post-production to the recorded
images of an actor, as with Brad Pitt
in The Curious Case of Benjamin Button
(2008); the originating image itself is
an unreal version of the actor, quite
different from how they appeared
when performing in front of the lens.

AI AND VISUAL EFFECTS

One area of filmmaking that will be profoundly affected by AI is visual
effects. Previous advances in high-end
VFX have made possible the
development of studio franchises such as
the Marvel Cinematic Universe, James
Cameron’s planned five-film Avatar series and
Michael Bay’s five Transformers movies.
However, a feature of these effects is that
they are very slow and expensive,
requiring a huge amount of labour by
large teams of VFX artists and
technicians, meaning that this form of
digital image-making has been limited
mostly to big-budget productions.
The VFX giant Framestore has
recently worked on Marvel Studios’
Guardians of the Galaxy Vol. 3, written
and directed by James Gunn. Its chief
technology officer, Michael Stein, spoke
to me about the ways in which AI
will impact VFX processes. ‘Machine
learning’ can be embedded within
existing technologies used to create
VFX, accelerating workflows: “The
majority of where we think AI will
impact the business in the near term is
to make everything that we do slightly
faster... What we’re really trying to do
is to get to the better, more quickly.”
Speed and automation imply a
reduction in costs, which will not only
support the bottom line of businesses
such as Framestore but could, Stein
argues, have a democratising effect
on film production: “The tools will
open up possibilities to a wider range
of creatives. I think a lot of the new
AI-based technologies in visual effects
are actually targeted at a whole new
generation of content creators.”
AI means that advanced visual
effects will become part of the palette
of creative options for all film directors,
not just those working in the major
studios, meaning we may see shifts in the
style of cinema across screen cultures.
But AI technologies threaten
employment in this sector of the
film industry. VFX workforces will
shrink: the work of previs artists –
‘previsualisation’ artists who help plan
out how a film will look – in the early
stages of VFX conceptualisation will
be replaced by accessible systems such
as Stable Diffusion, which generates
instant synthetic still images; the work
of roto artists, who manually trace areas
of the live-action frame where the CGI
will be inserted, will be automated.
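
As an illustration of how accessible such systems now are, a previs-style concept image can be generated in a few lines with the open-source diffusers library; the model ID and prompt below are illustrative assumptions.

```python
# Sketch: generating a previs-style concept still with Stable
# Diffusion. Model ID and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes an NVIDIA GPU is available

prompt = (
    "wide shot, rain-soaked alley at night, neon signage, "
    "cinematic lighting, concept art for a thriller"
)
image = pipe(prompt).images[0]
image.save("previs_concept.png")
```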

FULLY SYNTHETIC FILM

Can, or should, complete movies be
made by artificial intelligence? The
technology offers this as a reality, with
the capacity to generate synthetic
actors, images, voices and music. You
may well have enjoyed playing with
ChatGPT, writing short prompts and
receiving back fully formed texts from
the algorithm; now, the idea of a parallel
system that delivers videos based on
your prompts is quite conceivable.
Earlier this year the company Runway
released ‘Gen-2’, an AI that provides
this service. A filmmaker can prompt
Gen-2 using text and by uploading a
still image. The AI then provides the
director with an original, synthetically
generated shot, which it has created
by searching the internet and merging
the visual data it finds into a video clip.
Many early adopters are producing highly
stylised content, short clips in action or
sci-fi genres. Los Angeles-based director
Paul Trillo takes a different approach.
He told me how he is developing a style
of synthetic filmmaking that responds
to the nature of artificial intelligence.
His starting point is a theory about how
an AI engine searches the web before
building its videos. Trillo describes
how he conceives the AI as conducting
a ‘hall of mirrors’ search across the
internet, rather like a person’s mind
trawling through its memories: “It’s
trying to reconstruct reality – the AI is
amalgamating reality based on memories,
the computer is trying to dream.” Trillo
believes synthetic film should embrace
a specific cinematic aesthetic that
responds to these qualities of AI.
Last month, he released a short film,
Thank You for Not Answering, with every
shot generated in Gen-2 and a voiceover
created with tools from the AI speech-cloning
company ElevenLabs. The story is about an elderly
man making a phone call to a lover from
long ago, leaving a voicemail that revisits
the relationship through memories. Trillo
describes the rushes the Gen-2 algorithm
delivered: “There’s a murkiness at first
glance. It looks like reality, but when you
look a little harder the details of life are
kind of missing. And I thought that was
kind of what memories are like, where
you try to remember something and can
only feel the person, the place, the mood.”
The moving images created by Gen-2
have strong similarities to animation,
but have a special characteristic:
“I think AI video has this inherent
haunting, almost grotesque quality.”
Trillo’s comment reflects my own
experience experimenting with deepfakes
in film drama. When I attempted to
replace the face of an actress with that of
Margaret Thatcher, the first image that
the algorithm delivered was a distorted
face of the prime minister, with two
mouths, like a Francis Bacon portrait.
The image was horrifying yet had
a quality that was strangely resonant,
for some of us, of the political realities
of the 1980s. Yet the subtexts, tone
and emotions we read in AI-generated
images don’t originate from filmmaker
intention; the style is generated by a
machine. The AI does not know an
image is horrific: it understands nothing,
merely obeying prompts and delivering
images that the algorithm decides will
best reflect the commands. Its work
is a form of automated randomness.
Trillo believes that directors using AI
should not just embrace its particular
aesthetic, but also welcome the “oddities”
it generates. He singles out one shot in
his film – a woman in a dimly lit room or
hallway, her face almost featureless under
a wash of green light – that he regards as
a wholly original creation by the AI. “It
wasn’t something I asked for, it wasn’t
even close to what I had asked for, but it
was a really haunting and striking image.”
The AI had behaved like a wayward
director of photography, delivering
rushes the director did not want. But
when Trillo saw the shot, he found it
fitted beautifully into the language of the
film, and he incorporated it into his edit.
This leads to an interesting question
about the role and status of the director
in AI film. The creative practice
involved can be entirely solitary, with the
filmmaker working alone at a computer,
uploading reference images and crafting
prompts to the AI to generate the
rushes for the movie. This implies a
form of pure auteurism, replacing our
normal understanding of collaborative
authorship in filmmaking. But how
much credit should we give to the AI?
“It’s hard for me to take full credit for
some of the decisions it’s making because
there’s things that I didn’t direct,” Trillo
observes. “I find it closer to curation in
some ways than directing, because of that
randomness.” For this reason, in the film’s
credits he puts the director attribution in
inverted commas: “Written and ‘Directed’
by Paul Trillo.” The question links back to
the issues involved in the screenwriters’
dispute, but whereas the Writers Guild
of America demands that AI-generated
text should not be allowed creative
status, Trillo suggests we should give
AI more credit for its originating role.

THE FUTURE OF AI AND FILM

Joe Russo, co-director of Avengers:
Endgame (2019) and an enthusiast for
new technology, recently speculated that
AI will ultimately lead to the creation
of on-demand personalised movies. He
suggests that in the future he will return
home and say to his voice recognition
system, “Hey, I want a movie starring my
photoreal avatar and Marilyn Monroe’s
photoreal avatar. I want it to be a romcom
because I’ve had a rough day,” and the
AI will deliver a 90-minute feature. It
is certain that the technology will take
us down extraordinary paths and we
will argue over the desirability of the
alternative forms of cinema culture that
it will present. AI enthusiasts disagree
among themselves. Framestore’s Michael
Stein argues against the personalisation of
film described by Russo: “As humans, we
like to share in stories, right? If everyone
is living their own unique stories, how
do we share them?” This brings the
discussion to the vital issue of how film
audiences will respond to new uses of AI.
Will they engage with synthetic actors?
Will they embrace a new cinematic
style that is particular to AI-generated
moving images? There is huge risk for
filmmakers both in finding ways to use
the new technology and in negotiating
new relationships with their audience.

AI SYSTEMS IN FILMS

THE INVISIBLE BOY
Arrival of the malicious
supercomputer.
ALPHAVILLE
Howard Vernon’s sentient
computer Alpha 60.
2001: A SPACE ODYSSEY
HAL 9000.
WESTWORLD
Faulty AI leads to androids
running amok.
ALIEN
AI master computer
MU-TH-UR (or ‘Mother’).
TRON
A power-hungry Master
Control Program.
WARGAMES
An AI researcher automates
the nuclear arsenal.
THE TERMINATOR
Resistance against Skynet, the
AI reaching self-awareness.
THE MATRIX
Intelligent machines have
created a simulated reality.
A.I. ARTIFICIAL
INTELLIGENCE
An AI boy feels love.
I, ROBOT
An AI wants to save humanity
from self-destruction.
MOON
An AI controls clone workers
on the moon.
RA.ONE
Anubhav Sinha’s video-game villain
goes rogue.
HER
A lonely male writer falls in
love with AI ‘Samantha’.
EX MACHINA
Ava, the humanoid who has
passed the Turing Test.
CAPTAIN MARVEL
The ‘Supreme Intelligence’
rules an alien race, the Kree.
BIGBUG
Jean-Pierre Jeunet’s comedy
about domestic AI.

SIGHT & SOUND
