Ed Ulbrich is fairly self-effacing about his calling as a builder of bridges between entertainment and technology. “I’m trying to help save the world from boredom,” he explains.
That semi-facetious description sells him short, of course. Prior to landing last month at Moonvalley as head of strategic growth and partnerships, Ulbrich spent more than three decades shepherding visual effects for films like Top Gun: Maverick, Black Panther, Avengers: Infinity War, The Curious Case of Benjamin Button, and Titanic. He also produced the sci-fi film Ender’s Game and helped pioneer live digital human performances with the fabled “Tupac Shakur hologram” at Coachella.
Ulbrich was previously head of content at Metaphysic, whose technology gained attention for enabling a series of puckish Tom Cruise deepfake videos. He also held senior roles at Deluxe and spent two decades at Digital Domain (the effects house co-founded by James Cameron, Scott Ross, and Stan Winston), where he also served as CEO.
After joining Moonvalley, a firm known for its AI tool Marey, which relies only on fully licensed materials, Ulbrich spoke with Deadline about the state of AI in the industry. He also described what attracted him to Moonvalley, which recently acquired Bryn Mooser’s filmmaker-focused Asteria Film Co.
This conversation has been edited and condensed for clarity and length.
DEADLINE: Can you tell me about your views about and experience with AI as this technological revolution has been unfolding over the past few years?
ED ULBRICH: I’ve been through this weird journey of creating visual effects, and it was really during the pandemic that I had this big pivot. I’m all in now. But what was interesting about the pivot to AI was, a strike happened, right? And the sentiments around generative AI shifted quite significantly. At the same time, some other things were happening around it that were quite exciting and interesting. The key thing for me, the game-changer and the reason I was so compelled, is the idea at the core of [Moonvalley’s] clean model: an ethically sourced, ethically trained, legit, proper thing, you know, no stolen pixels, no scraping of the internet. It’s done in a great way. And it is so important that it happened.
DEADLINE: What are you able to do in this new role that you didn’t have a chance to do before?
ULBRICH: There are a couple of layers to generative AI. There’s the foundational model, which is the thing you hear about, right? It’s an enormous amount of data that we’ve used to train models. That’s what enables it to create things that look mind-boggling, amazing and incredible. And there’s this new thing, which is this model Marey. This is all I ever wanted. At Metaphysic, we were doing faces, but we hadn’t gotten to the whole frame. I saw the professional filmmaking industry hitting a fork in the road, where we’re seeing all this investment capital going into these amazing tools. I won’t name them, but you know who they are. People ask, ‘Aren’t you worried about those tools?’ Not really. I think in many ways they validate what I’m saying. See, it works. See, doesn’t that look amazing? The problem is you can’t sell that to a major motion picture studio. You can’t sell that to a brand or an advertising agency. What are they going to do with it? They’re exposing themselves; we know there are all kinds of lawsuits being filed, all that stuff, because we’re peddling stolen pixels. I would say, go to any filmmaker, go to any studio. If you have a choice between this scraped thing that delivers amazing things and this thing that’s built on a license agreement, it’s not even a choice. This is where it’s all going. These large video models are where it’s at. And the fact that this is being done ethically, to me, this is a game changer.
DEADLINE: Does this time remind you of other inflection points?
ULBRICH: I go back to the beginning of my career, the early dawn of computer graphics in the late ’80s and early ’90s, when it wasn’t a thing. And, you know, I remember when it was computer scientists who had to sit and make this technology work, and then it wouldn’t look very good, so we had to hire art directors to sit next to them at their computers. Eventually, we started creating interfaces that sat on top of that software, so you don’t need to be a computer scientist or a mathematician. You just need to know how it should look. And now these are accessible. This is for professional filmmakers. There’s a big difference between that and what I call ‘slot-machine’ creative. ‘Whee! You get some amazing stuff.’ When you go to the David Finchers of the world, there is an exactness of vision there, where ‘close enough’ doesn’t work. You have a filmmaker tool where directors can give specific, precise notes that we can address as part of an iterative creative process. That’s the difference.
DEADLINE: I’m glad you brought up Fincher because Benjamin Button is one of your signature films. For a director and a project like that being conceived of today, what are the benefits exactly? Is it mainly cost?
ULBRICH: Benjamin Button was the first use of machine learning and computer vision to create a character in a movie. And largely that’s still the same kind of basic methodology and tech-stack approach with CGI, right? I’m going to quote a visual effects friend of mine, Rob Legato. He said that doing CGI for that kind of human performance is sort of like composing music one note a week. It is not a fluid, real-time, spontaneous thing. Each step is a prerequisite for the next and you can’t cut to the end. It has to go through each of these things, and final approvals need to be made at each stage. Now, we’ve crossed from CGI rendering into real-time. You’ve heard all the stuff about the LED walls and the game engines, things like that; that’s all fine. I’m now talking about real-time people. And also being able to create the entire frame. So, this is where you start to see filmmakers saving enormous amounts of money, because a lot of things are being automated. On Benjamin Button, I think we did about 51 minutes of screen time of Brad Pitt at various ages when he was the little guy. We had four different looks of Benjamin, and Brad would give us a live performance that we would then apply to his CG self. So, really exciting, really interesting, but a slow process. There were also about 200 people involved. The post-production was almost two years, and we had two years of R&D going into the movie. So you’re talking about hundreds of people, two-and-a-half, three years of production and post-production time, just for Brad’s one character and the four different looks. It was a significant amount of money.
DEADLINE: It sounds like you’re gaining not only flexibility, but efficiency.
ULBRICH: Then let’s look at the [2024] movie Here, directed by Robert Zemeckis. You have four cast members. And Tom Hanks, his character is shown from 17 to 85 years old. So, there are way more looks. But now, it’s all generative AI. With four cast members, it’s like 35 or 40 times more work. But it was done with, I think, 50 people on it.
DEADLINE: And the budget’s been reported to be $50 million.
ULBRICH: Exactly. So AI automates things and makes them near-real-time. If there are 20 steps, it automates 18 of them. So, from a filmmaking perspective, in post, in dailies, it’s very high-speed, but it’s all very iterative and very, very quick. You’re seeing final results immediately. Here was Metaphysic. Now, extrapolate from the human face to the whole frame. Moonvalley is all for professional filmmakers. They have built in controls: camera controls, lighting, all the things that you need as a filmmaker. And the wonderful part of this is, while lots of things can be automated, we can still put hands on the tool; the DP can still light something on the day. We still need photography at the source to create things. For department heads on films, it gives them incredible amounts of power. It’s fewer people, and it lets us do more in a concentrated way. It also brings budgets down. And, you know, we hear about all this as, there’s a dark side. ‘Oh, it’s going to cause all these problems. We’re going to the dogs.’ I saw all of that in the late ’80s and early ’90s, with all the model makers and motion-control programmers and scenic painters and all the traditional visual effects artists, the stop-motion people. They were outraged.
DEADLINE: Don’t forget animators.
ULBRICH: Oh yeah. I remember at Pixar, if you mentioned motion capture it was heresy. If you look historically, and zoom out, some of those people actually transitioned into the digital realm, into CGI, and went on to win Oscars. Many didn’t transition over, but from a historical perspective, it created an enormous number of jobs, hundreds of thousands of jobs that didn’t exist prior to the new world order. So, here we are at another one of these times. I compare this with electricity. It’s kind of hard to fight against electricity. It’s pretty powerful. Here we are.
DEADLINE: After all of the storms of the last couple of years, particularly with the strikes in 2023, do you get a sense that the creative community is getting a bit more pragmatic about AI?
ULBRICH: That’s a big topic. During the strike, I never thought this would happen, but I became like the boogeyman, because I was right out in front, you know, like, ‘Hey, there’s all this amazing stuff happening.’ And our movie [Here] was supposed to come out, and then we couldn’t even show it. The strike ensued. You’ve got the studios, you know, the AMPTP, fighting with SAG, and you’ve got AI. No one was talking to the AI companies. So as that happened, as we began talking to SAG, we realized we were aligned. Like, we’re here, we’re with you. We need these issues to be resolved so that everyone can carry on. And everyone’s very much aligned on this issue of consent. That sort of makes sense, right? What’s wrong with that? Like, oh, licensed data. You don’t steal anybody’s stuff. We already don’t do that. So this is just a new framework.
DEADLINE: Zemeckis and Fincher have long been in the vanguard, but is a wider circle starting to embrace AI?
ULBRICH: During that whole strike period, I met with some of the most important directors in film today. All secretly coming in on their own, wanting to see this stuff, which is just miraculous. It’s not something that happens in the visual effects space. It is undeniable. Among premium filmmakers and directors, independent filmmakers of substance and their financiers, particularly at the high end, there is genuine interest. You’re going to see some things coming out in 2026 because of the strikes; they’re being moved into production now. But you know, AI is here. It is at the studios. Not all of them are talking about it, but some of them are. They’re just beginning to talk about it. And that’s why, to me, this is such an exciting time, because eventually the stigma is going to be gone, because we’re doing it. It’s happening. Studios are doing it. Filmmakers are leaning into these tools to tell their stories. And this idea of democratizing creativity, I don’t know about that, because it doesn’t make me a better creative. It just gives me better tools. But in the right hands, that’s what I think is exciting. You demystify this stuff, you get away from just engineering shots, and you put these tools in filmmakers’ hands. That’s super interesting. That’s super exciting. And that’s what we’re seeing happening. The response has been significantly positive.
DEADLINE: It does seem a lot more feasible to make something imaginative given the availability of these tools.
ULBRICH: That’s what I think you’re going to look out for in the next 12 to 18 months. You’re not going to suddenly see an entirely AI movie. My personal view is that a lot of this is going to disrupt the visual effects space first, because those are the things you cannot shoot in camera. So, those are going to be, in many cases, much less expensive. And much less precious and much faster. And that saves money. People say, ‘Argh, the jobs …!’ Well, look, the flip side of this is that I think more films get made. A lot of things that wouldn’t have been made before because, ‘Well, it’s a little risky. It doesn’t fit our formula and it isn’t existing IP and it isn’t a franchise piece.’ You know, it’s tough. Right now, being a producer is really hard. It is really, really hard to get a movie or a show greenlit. And it comes with, you know, compromise after compromise. You’ve got to compromise to back them into a budget. That’s always been the case, but it’s particularly pointed right now. So, in that context, you start to look at a lot of films, like Here. Without this technology, it wouldn’t have been made.
DEADLINE: Areas like pre-visualization, location scouting, all of these things are also seeing benefits. Or period scripts that might have stayed in a drawer, you can think about bringing them to life.
ULBRICH: I see a day soon, and I’ve already seen it happen, where you’ve got multiple department heads using AI. And I think this is really significant: in the movie Here and other films, in the makeup trailer, you’ve got something we call an “AI mirror.” It’s just a monitor with a camera off to the side. But when they’re doing hair pieces and fittings, they’re actually seeing the right face on the character in the trailer. So they can tell if it actually works; they’re dialing in the look. It’s analog meets digital. Live AI in the makeup trailer, because you can do it on a laptop now. That makes that character look better. That actually advances this because, by the way, for the DP lighting on set, the light that falls on that subject is the light that shows up on the AI. Now, imagine you can blow that out to other parts of the frame: the environments, the backgrounds, locations and things like that. That’s where this is all going, very fast. And that’s why you need a clean model.