Shuffling the Deck: Gary Hustwit on ENO

Gary Hustwit’s ENO traces the musical career of Brian Eno, from Roxy Music to his solo rock and ambient records to producing David Bowie, John Cale, U2, David Byrne, and more. The affable Englishman airs his mind-expanding insights on creativity and the perplexities of life. But multiple other versions of ENO exist, thanks to the generative software used to assemble the movie, which varies the order and selection of scenes and archival footage. (Hustwit estimates he’s seen 32 versions of ENO with audiences.) The open-endedness calls back to the creative technique that Eno invented with painter Peter Schmidt: “Oblique Strategies,” a deck of cards with prompts like “Emphasize repetitions” or “Try faking it!”

Hustwit has made a number of documentaries about design, his most recent being RAMS (2018), about Dieter Rams. Ahead of the release of ENO, I spoke with him about the film’s generative approach, its dizzying possibilities, and how these affected the documentary filmmaking process.

First things first: do you have a favorite Brian Eno track?

[laughs] There are a lot that I like, but I can’t say that I have a favorite. I like a lot of the stuff on ANOTHER GREEN WORLD. Obviously the first three solo records are amazing and still hold up. I like a lot of the ambient stuff too. And I love some of what Brian’s been doing recently, like the collaboration with Fred again... And I love all the songs that he made for the soundtrack of my previous film, RAMS. He's insanely prolific. He's in the studio every day making music.

When did you decide upon a generative approach for the movie?

From the get-go: before shooting, before I'd even approached Brian about it. Five years ago, I was questioning why films have to be the same every time. Mostly for very selfish reasons, because I was going on screening tours for RAMS in 40 or 50 cities, and I couldn't watch the film anymore; I'd already spent years working on it and hundreds of hours just watching it over and over again. My background was in music before I got involved in film, and music doesn't have that problem: even if musicians have to play the same hit song every single night, it's still different every single night. I had some problems with film being so static and was trying to think of a way that film could be more performative.

And we had the technology. When everything went digital, both filmmaking and exhibition, this constraint of a film having to be the same every time or having to be a fixed piece of art was gone. So I reached out to my friend Brendan Dawes, this amazing digital artist and creator who I'd known for 15 years. And he was game to try [a generative film]. First, we started experimenting using all the raw footage from RAMS, including Brian's music. We both realized that Brian would be the perfect subject for a generative documentary and ended up showing Brian a demo using the RAMS footage. He was excited to get involved. I don't think he was excited about having a documentary about himself, but I think he was excited about the possibilities around the generative film system.


Still from ENO

So you shot the film, and then did you use custom generative software or tailor a preexisting program?

Oh, it's a custom piece of software that Brendan and I have spent almost five years developing. It's a proprietary thing. We recently launched a software startup called Anamorph, which is going to be developing that software and its capabilities, and collaborating with other filmmakers, studios, and streamers to push this idea forward. It’s a bespoke system that we developed to do a very specific thing: create this film and have it be different every time, but still have an arc to it and be an engaging documentary viewing experience.

I wasn't trying to make an experimental mash-up of random Eno footage. We did do something like that at the Venice Biennale last October, where we took all the rules off the generative software and just chucked all the footage and all Brian's music into it and let it make a film that went on for a week. It was a 168-hour-long film. But I wanted ENO the film to be just like any other documentary that I've made, just different every time. We had incredible documentary editors who were challenged to think, well, how do I create a story arc here if I don't know if the scene I'm editing is going to appear in the film, and if it does appear, what's going to be before or after it?

What’s an example of a rule that the generative software follows to assemble the scenes?

A lot of it is about the type of footage it is, whether it's an archival music performance, or Brian talking about creativity, or a big idea that has nothing to do with music, and establishing a rhythm of those types of scenes. We expect there to be a rhythm of information and story pieces in a documentary. And we give it a three-act structure, even though you maybe don't realize that when you're watching it; there's some thematic grouping happening throughout. One simple rule is that there are a dozen different Oblique Strategies cards that may come up in the film, and if one does come up, that unlocks certain scenes or pivots the film's direction for a little while.
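To make that kind of rule concrete, here is a minimal, hypothetical sketch of how a rule-based sequencer along these lines could work. Everything in it, the scene titles, the type tags, the card-unlock mapping, and the rhythm rule, is an illustrative assumption for the sake of the example, not Anamorph's actual software.

```python
# Toy rule-based sequencer in the spirit of what Hustwit describes: tagged
# scenes, a rhythm constraint on scene types, a rough three-act grouping, and
# Oblique Strategies card draws that unlock extra scenes. All names and rules
# here are illustrative guesses.
import random
from dataclasses import dataclass

@dataclass
class Scene:
    title: str
    kind: str                        # e.g. "performance", "creativity", "big_idea"
    act: int                         # 1, 2, or 3 -- loose thematic grouping
    unlocked_by: str | None = None   # only eligible if this card is drawn

# Hypothetical pools; a real cut would draw on hundreds of scenes.
SCENES = [
    Scene("Roxy Music archival", "performance", 1),
    Scene("On tie-dye and doing things badly", "creativity", 1),
    Scene("The studio as instrument", "big_idea", 2),
    Scene("Bowie sessions", "performance", 2),
    Scene("Laurie Anderson draws a card", "creativity", 2),
    Scene("Garden-grown music", "big_idea", 3, unlocked_by="Gardening, not architecture"),
    Scene("Ambient installations", "performance", 3),
]
CARDS = ["Gardening, not architecture", "Take a break", "Honor thy error as a hidden intention"]

def generate_cut(seed: int | None = None) -> list[str]:
    rng = random.Random(seed)
    drawn = set(rng.sample(CARDS, k=2))          # which cards "come up" this time
    cut, last_kind = [], None
    for act in (1, 2, 3):                        # keep a loose three-act arc
        pool = [s for s in SCENES
                if s.act == act and (s.unlocked_by is None or s.unlocked_by in drawn)]
        rng.shuffle(pool)
        for scene in pool:
            if scene.kind == last_kind:          # rhythm rule: vary scene types
                continue
            cut.append(scene.title)
            last_kind = scene.kind
    return cut

if __name__ == "__main__":
    for version in range(3):                     # three different "screenings"
        print(f"Version {version + 1}:", " -> ".join(generate_cut(seed=version)))
```

Run with different seeds, the sketch produces different orderings that still follow the same three-act arc and rhythm constraints, which is the basic property Hustwit is describing.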

In the version I watched, we see Laurie Anderson draw an Oblique Strategies card and read it out: “Gardening, not architecture.”

Yeah. So, that unlocks certain scenes that you wouldn’t be able to see if David Byrne had pulled a card that said: “Take a break.” But I try not to demystify the software part of this, because in some ways, I just want the focus to be on the story and what you're learning about Brian, and for you to sit back and relax and watch it. Pay no attention to the software behind the curtain.

There’s this intriguing notion of the unpredictable starting points that our creativity can have. In one clip, Eno talks about tie-dye and the idea that doing something “badly” can be creatively interesting. That comes right after he talks about his musical inspirations like Little Richard.

Yeah, that's a great example of what I'm talking about. I think the fact that it's all about one person also lends itself to this approach. You can learn about him and Bowie at the 20-minute mark or at the 60-minute mark, and in some ways, it doesn't really matter. By the end of the film, you'll have gotten that information and put together this composite portrait of Brian in your head.

What I'm super interested in is how you take that approach and do a fiction film, a narrative story. We can also adapt this approach to existing films. How many alternate takes and how much cutting-room-floor material does any new film generate now? What if there's a way to use all of that material in a generative platform? I want to see the generative MULHOLLAND DRIVE, because that film kind of plays like a generative film anyway.


Gary Hustwit, photo by Ebru Yildiz

Eno’s Oblique Strategies can function as a way of releasing unconscious connections. The strategy “Honor thy error as a hidden intention” feels like another way of saying “follow your Freudian slips.”

Yeah, definitely. I think part of that unconsciousness is a little bit about our brains making connections that aren't necessarily there and bringing out things in the footage, or in this case, bringing out things in Brian and his thinking. We're doing that as the audience—I'm not doing that as the creator of the film. It is a lot about how we want to try to find patterns and solve puzzles and figure out what the connection is between this scene and the next scene.

What version of the movie did Eno watch?

Brian saw the Sundance premiere version, and then he saw the London premiere. And I would send him pieces of things to watch during the making of the film. So he’s seen two very different iterations, and he remarked on it in the conversation after the U.K. premiere at the Barbican Centre. He was like, “That version was very wordy and poppy.” It was less music and more of the intellectual conversation. And sometimes you get much more music and less talking. Both times he saw Laurie Anderson. In the Sundance generation, it was all Laurie and then Byrne came in later, and there's even someone else that we're getting ready to film.

Knowing that you were going to use this approach, did that affect how you did interviews or gathered material?

I don't think it did. Other than the fact that I talked to Brian about generative filmmaking because I knew it would be interesting to hear his ideas about using generative software in this process, I just approached it like any other film that I make. I wanted to focus on Brian's ideas about creativity and how he enables it in other artists. I figured that if we just got great stuff, it would work.

So the individual sequences that go into the algorithm are edited beforehand?

Some are edited and some are being created on the fly, so it's a combination of the two. How long should the scene be? Can you have a 10-minute scene in this film, or several 10-minute scenes back-to-back? Is that too long? Again, there's a rhythm. For the Film Forum run, I'm making dozens of different versions. Or I can create it live in the theater in real time.

How would you distinguish between generative software and what AI does?

There are so many different flavors of generative and AI software. You can have a generative software program that is run by an algorithm programmed by humans, or artists in this case. Or you can have something where the decision-making is based on a model that is trained on other people's data that's found on the web or whatever, ChatGPT, for instance. Both those things are generative. One is using actual intelligence to program the algorithm, and one is using artificial intelligence to make those choices. So in our case, we programmed the algorithm with our knowledge as filmmakers of how to tell documentary stories. We didn't train the system by feeding it 10,000 documentaries and letting it figure it out.

And the data set of ENO is kind of a closed system. We're using this software that we created on our own material. We're not using other people's footage here. It's all our stuff, from Brian's archive or things we shot or things we've licensed or whatever. So it is different from something like a large language model or a text-to-video generator. Those things have amazing potential but also raise real ethical questions. It always comes down to what your motivations are and how you're using the technology. It's not “all technology is bad.” In this case, we were trying to create a capability that didn't exist before. It wasn't about making films quicker or easier, or cutting a bunch of people out of the process by using technology.

How do you know when the movie’s done?

I don't know. I'm sure at some point I'll want to stop, but there's still so much footage: so much of Brian’s archive, new things coming out from European television archives or whatever, people approaching us with new material too. And we can also continue doing new filming. Brian's involved in a lot of interesting projects now with this Hard Art group that he co-founded in England. So we'll see. It’s part of the experiment. Does it need to be finished?

