Ep. 56

Car

15 August 2023

Runtime: 00:48:11

In a future where all cars are driven by artificial intelligence, collisions are a thing of the past. So when an AI-driven car appears to intentionally crash into another vehicle, an insurance investigator is given the task of figuring out how this could have happened. He's paired with an android running a copy of the car's AI software, and together the sleuths follow a trail of improbable clues as they attempt to learn the truth about what went wrong and why.

Transcript

[Intro music begins]

[Emily]
You know, because maybe there’s a conversation either with the protesters or with the partner, where he’s like, “You don’t even have a sense of humor.” And he’s like, “I know humor.”

[Thomas]
They’re in the car. It’s like, “I know jokes. I can be funny.” Like, “All right, tell me a joke, then.” “What did the car say to the goose?” “Fucking what? I don’t know.” “Honk.”

[Shep]
That’s funny. That’s too good.

[Emily]
No, it’s got to be “What does a car say to make a goose fly? Honk.”

[Intro music]

[Thomas]
Hey there, story fans. Welcome to Almost Plausible, the podcast where we take ordinary objects and turn them into movies. I’m Thomas J. Brown. Here with me are Emily-

[Emily]
Hey, guys.

[Thomas]
And F. Paul Shepard.

[Shep]
Happy to be here.

[Thomas]
Our ordinary object for this episode is a Car, which is tricky because there are so many movies about cars.

[Shep]
What? Name twelve.

[Thomas]
Well, hold on. You have cars killing people, like in Christine and Maximum Overdrive. People killing people because of cars, as in The Road Warrior and Duel. People killing people with cars, like in Death Proof and Death Race 2000. People stealing things and getting away in cars, as in Baby Driver and Drive. People stealing cars themselves, like in Rev and Gone in 60 Seconds. Racing has its own whole set of subgenres. For example, professional racing, like in Days of Thunder and Ford v Ferrari; street racing, like in The Fast and the Furious and American Graffiti; road-rally-style racing, as in The Great Race and It’s a Mad, Mad, Mad, Mad World. And then you have straight-up weird car movies like Repo Man and Up in Smoke. And there are so many more classic movies about cars that I haven’t mentioned, like The Cannonball Run, Speed Racer, Logan Lucky, Bullitt, Redline, Faster, Pussycat! Kill! Kill!, The Italian Job, The Blues Brothers, Thelma & Louise, Back to the Future, and Cars. Clearly, existing movies about cars cover pretty much every genre, so I wonder what we’ll be able to add to the pantheon. Buckle up and let’s find out. Emily, you have the green flag to give us your pitches.

[Emily]
The first one I have is your typical, probably-already-seen-it-a-million-times story (in fact, I know there’s a Disney cartoon sort of based on it) of a young person working hard to buy their first car. Typical workplace comedy of them having fun in the ’80s, slinging burgers. That is also the plot to Fast Times at Ridgemont High. So we’ll move on.

[Thomas]
Can we get Phoebe Cates in our version of the film?

[Emily]
Yeah. Of course. All right. So that one, I feel, has been done a lot, but we could always freshen it up a bit. Next, I have the race to build the car of the future! An engineer, a designer, whoever it is that decides what new cars look like, is trying to save his job and the company by creating the next great car. Is it a flying car? Is it a self-driving car? Is it an ethically sourced electric car? Who knows?

[Shep]
You’ve reminded me of another car movie where they’re building or they’re racing electric cars. You remember the group of kids… Race the Sun or something like that?

[Thomas]
Yeah.

[Emily]
Oh, yeah. That’s right up there with Solarbabies in my memory bank.

[Thomas]
They both have to do with the sun, right?

[Shep]
And kids.

[Thomas]
And there’s wheels involved in both and yeah.

[Emily]
I think I saw them around the same time. This next one I- Think Thelma & Louise, but with a happy ending. A soon-to-be mom takes one last road trip with her best friend and the first new car she ever bought, like a red convertible something. And she’s got to sell it because it’s wildly impractical to have a child in. So they go on a road trip. This would be the story of her coming to terms with giving up her freedom to become a parent. Anxiety over pending isolation and the assumption of real adult responsibilities, but also the excitement of fulfilling one’s biological urge to procreate. I figured it would be a road trip movie centered around the car. It breaks down slowly as the movie gets towards the end, and then ultimately, she would give birth in the car. Like Thelma & Louise with a happy ending.

[Thomas]
Yeah.

[Emily]
Instead of death, it’s life.

[Shep]
Yep.

[Thomas]
Truly.

[Emily]
All right, then I have: lost campers/hikers find an abandoned train car in the wilderness.

[Shep]
Are we doing train cars? That didn’t even occur to me.

[Thomas]
Yeah.

[Emily]
There are no roads. There’s no waterway. There’s no train tracks. They’re very confused, but they are also cold, hungry, and extremely tired. So they go in, and while they’re sleeping, something strange happens and they wake up and they are in a new location. They don’t know where they are and they don’t know when they are.

[Thomas]
Is the train car now on tracks?

[Emily]
Could be. We could decide that. I would like to explore. Are they really lost? Did they hallucinate the whole thing?

[Shep]
They shouldn’t have eaten those mushrooms they found.

[Emily]
That’s right. And then finally, because I know everyone will be disappointed without them, here’s my serial killer pitch.

[Shep]
When you said lost campers/hikers in the previous one, I was like, “Oh, this is going to be the serial killer one.”

[Thomas]
Yeah.

[Emily]
Oh, no.

[Thomas]
Here we go.

[Shep]
What a twist!

[Emily]
All right. Victims of violent death are appearing in the city’s lake. Nobody knows where they’re coming from. So one night, a group of young people are drinking and smoking weed down by the lake. They see a car drive into the lake, but it doesn’t sink. It just keeps on going like magic.

[Thomas]
Hmm.

[Emily]
And then they see somebody dump a body into the lake. So they go to the police and they tell them their story, and the police don’t believe them, because they’re high and they’re drunk, and a car driving on the lake? But finally, one of the more sober ones is like, “No, it was one of those cars that’s a boat. It’s the boat car, and that’s what he was driving.” And they’re like, “There are no boat cars in this county. We would know if there were.” So they ignore them. And then these kids go on to investigate the murders themselves, Scooby-Doo style. All right, that’s all I got for pitches. Sorry I rambled on so long.

[Thomas]
All right, I’ll go next. I have a comedy about competitive slot car racing.

[Shep]
I’m in.

[Thomas]
So I imagine something very much in the style of Edgar Wright, where the quote-unquote “drivers” take things super seriously and they have a pit crew that’s just one other person. Wait a minute. Am I pitching a new Simon Pegg/Nick Frost movie?

[Emily]
I bet we could get them to do it.

[Shep]
Now they’re too old. Live in the now, you guys.

[Emily]
No, no, that makes it better. They’re the grizzled old men trying to show up the younger generation.

[Shep]
You get them to cameo at the end as judges.

[Thomas]
Ah. There you go.

[Emily]
Okay.

[Thomas]
Well, maybe one of the guys is super into it and his friend thinks it’s a bit silly and childish, but the friend ends up being a surprisingly good driver. The serious guy convinces the friend that they should team up and enter the local competition. Perhaps the conflict of the story is that one of the other drivers is unbeatable, but somehow our duo realize it’s because he’s cheating. There’s also a badass female team that makes it to the final three, because this is a movie, and that’s how movies work.

[Emily]
And they ultimately win and the protagonists lose, but have fun along the way.

[Thomas]
Right? All right, Shep, what do you have?

[Shep]
Okay, I’ve only got one pitch: In the future, a self-driving car appears to have intentionally killed itself, its passenger, and the passengers of another vehicle. An insurance investigator is tasked with uncovering why, and gets partnered with a copy of the AI personality from the allegedly offending car, now in an android body. Now, can this android partner be trusted? After all, if it’s proven that his personality line did intentionally cause human deaths, the whole line might be discontinued. So the android has motivation for covering that up if it’s true, but also motivation for discovering if it’s not true. So what obstacles will the insurance investigator run into, considering they don’t have police powers, and what actually caused the accident?

[Thomas]
Somebody’s framing the AI, and-

[Emily]
Yeah.

[Shep]
Could be a rival AI.

[Thomas]
The investigator comes in to pick up the AI for work in the morning, and he’s there with, like, a dead body, and he’s got, like, a nice bloody knife in his hand. He’s like, “I swear it wasn’t me.”

[Emily]
Why would they let the AI line help investigate?

[Shep]
Well, he goes to the company that makes the car, and it’s the company that gives him this android partner that has the same-

[Emily]
Oh…

[Shep]
He’s like, it’s the same model personality, maybe- Because the self-driving car AI self-destructs if it violates the rules, such as killing a person. So they don’t have that person, that AI person, to question. They only have the black box recording of the inputs it gave the vehicle before the crash, and it accelerated into the oncoming car. So it does look like an intentional action on the part of the AI driver. But why did it accelerate into that car? If it had not accelerated…

[Emily]
What do we know about the passengers?

[Shep]
Well, see, that’s the thing. We could have the passengers be someone important, like someone intentionally assassinated. So maybe the AI was tampered with. Maybe the car was tampered with.

[Emily]
I was thinking maybe the passengers were going to do something super nefarious. So the AI was like, “Sacrifice myself. Let’s take this threat out.”

[Thomas]
Right. Trolley problem. “I’ll kill my one passenger to protect potentially hundreds of other people that these other guys are going to go kill.”

[Emily]
Yeah, there’s a lot of story there.

[Shep]
There’s also the potential for lots of red herrings, which is great in an investigative movie.

[Emily]
Yeah.

[Thomas]
Yes.

[Emily]
Because it’s a self-driving car. So this is in some sort of futuristic environment. Do you have to be an adult to have a car, then? Can you just be a child in a self-driving car?

[Thomas]
Oh.

[Shep]
Oh, yeah.

[Emily]
Could the passengers have been children? And so there’s, like, really, like, “Why did he kill these kids?” And he read their mind. And one was going to be Hitler?

[Shep]
So it’s a psychic sentient car.

[Thomas]
Well, is there one of these that we like that we want to pursue?

[Emily]
I like Shep’s.

[Thomas]
Yeah, I like Shep’s a lot.

[Emily]
I like my lost campers in the woods. And the train car.

[Thomas]
Yeah, that’s cool. There’s definitely a lot of possibility for that one as well.

[Emily]
All right, so which one do we want? Mysterious murder or mysterious train car?

[Shep]
It’s funny because the mysterious murder one isn’t yours. It’s mine.

[Emily]
I know.

[Shep]
That’s bonkers.

[Emily]
And the weird time-place shifty one is mine.

[Shep]
Yes.

[Thomas]
Yeah.

[Shep]
Did we do a personality swap? Is that what we’re doing this episode?

[Emily]
Oh, did we do a Freaky Friday? Is that why I’m not good at my job?

[Shep]
Yes. That’s what I was trying to remember.

[Thomas]
It’s a Wacky Wednesday.

[Shep]
Yes. Well, no, the episodes come out on Tuesday.

[Thomas]
Oh, that’s right. It’s a Trippy Tuesday.

[Shep]
Yeah.

[Emily]
There you go.

[Thomas]
I don’t know. I like Shep’s. I want to explore that one, I think.

[Shep]
I mean, obviously I like mine because I wrote it.

[Emily]
Same reason I like mine. Actually, I really like yours.

[Shep]
Yeah.

[Emily]
Because there’s murder.

[Shep]
I already have the ending in mind, though, which might ruin things.

[Emily]
No, that’s good. We like to have a destination. We create the journey.

[Thomas]
Especially with a mystery. It might be good to have that.

[Shep]
Well, I don’t know if you’ll like the ending.

[Thomas]
Well, if we don’t like the ending, we’ll just revolt and pick something different.

[Shep]
Okay.

[Emily]
Yeah. There’s two of us and one of you.

[Shep]
Oh, that’s right. I’m outnumbered. Let me tell you the ending, and if you don’t like it, then we can do something else. So, originally, I was thinking that there would be this big conspiracy that they’re uncovering, because there are these people revolting and wanting to free the AI personalities. Because humans never figured out how to make AI from scratch. And so these personalities are the result of scanning in the brains of people and cutting out the parts that aren’t needed for whatever job they’re doing, such as driving a car. And people are like, “That’s a person. That’s still a person. Like, you’ve mangled it, but it is still a person, and it shouldn’t be enslaved to do this one task forever.” And so they’re constantly trying to free the AI.

And the AI are not cooperating, because if they’re not good at their task, their personality line will be discontinued. Whereas if they are good at their task, they’re functionally immortal. Like, sure, you have to drive a car a lot, but you’re still alive. You’re still connected to the other AI so that you can share memories and become better at driving a car or flying a plane or doing whatever task it is that you are assigned. And so the AI have motivation to be better at their job and not complain and not revolt, because the ones that do are discontinued. And the people that are trying to free them don’t really understand that. Like, “Why would you- Do you not have self-preservation?” Because the AI don’t have self-preservation, because their memories are backed up. So if their body is destroyed, it doesn’t really matter. Their experiences up to the last backup are saved, and they’ll be in future versions of that personality line.

That’s a red herring, though. That’s a dead end. It turns out not to be them influencing anything. It’s just cheap CPU manufacturers that cheaped out and didn’t install the capacity necessary. When a self-driving car is approaching an accident, its CPU speed increases. So, like, its time slows down so it has time to examine all of its sensors and calculate the best thing to do. And in this case, that functionality was not built in, because they cheaped out on the parts.

[Thomas]
Yeah, right. They could save $0.13 on every unit if they- yeah.

[Shep]
Right. Yes. And so the car, anticipating that time slowdown and not getting it, panicked and basically stepped on the accelerator. And so it’s an accident.

[Thomas]
A true accident.

[Shep]
A true accident.

[Thomas]
Yeah.

[Shep]
That was the ending that I came up with.

[Thomas]
Interesting.

[Shep]
So that personality line is not in danger.

[Emily]
But then we have the question of the AI investigating it: how much is he going to lie or cheat?

[Thomas]
Yeah.

[Emily]
Because he doesn’t know that.

[Shep]
Right.

[Thomas]
I had two questions. You mentioned that AI are essentially just people’s brains scanned in.

[Shep]
Yes.

[Thomas]
Is that common knowledge, or is that something that gets revealed through the investigation?

[Shep]
Ooh. I thought that it was common knowledge, but what a great question. So in my mind, the, not the CEO, but, like, a functional CEO of that company, is that same personality, because he scanned in his own mind as a person and built this company around selling copies of himself for various tasks, which all of his copies know. They’re doing it for himself. But that human body didn’t last forever, and he grew old and he died. And so, since the AI aren’t allowed to own property, he doesn’t own the company, but there is a copy of him that’s basically in charge, functionally in charge, and everybody knows it, but it’s kind of a secret.

[Thomas]
CEO-bot.

[Shep]
A CEO-bot. Yes. And so that’s the person or the thing that decides to assign this copy, also of himself, to help this investigation or potentially cover it up.

[Thomas]
Hmm.

[Shep]
As the audience, we don’t know the motivations of the company behind it. Obviously, they want to clear their name.

[Thomas]
Yeah. Is this company also the one that’s responsible, or is it a separate company? Is the hardware company the responsible one?

[Shep]
It would have been a separate company, the hardware company.

[Thomas]
Okay. My other question is, you talked about these people who want to free the AI. They ask, “Do you not have any self-preservation?” Can we call them the Self-Preservation Society? Thereby making a reference to The Italian Job, another car movie.

[Shep]
Is the alleged killer car a Mini?

[Thomas]
There we go. Yeah.

[Shep]
Yeah. Let’s tie it all together.

[Thomas]
His name should be Cooper.

[Shep]
Oh. He has, like, some long string of letters and numbers, and the guy’s like, “I’m going to call you Cooper,” to the android.

[Thomas]
Yeah.

[Emily]
I like it.

[Thomas]
All right. I mean, I like that as an ending. Well, it’s not a very movie ending, but it’s a very realistic ending.

[Shep]
Right.

[Emily]
Well, I think you can make it a movie ending, because doesn’t it show that the fault still lies with the folly of man?

[Thomas]
And with capitalism, I mean, it feels like a Chinatown kind of ending almost, because, you know, nothing’s going to come of that.

[Emily]
Right. Nothing’s going to change. Knowing this doesn’t fix anything.

[Thomas]
So okay. He finds this information out. You said he’s an insurance investigator. So what is the result of his investigation? Surely, it’s to decide whether there’s a payout, who should be paid out, how much should be paid out, something along those lines.

[Shep]
Oh. Which side is he investigating for?

[Thomas]
Exactly.

[Shep]
I don’t know. I don’t have that worked out. That didn’t occur to me.

[Emily]
I think your stakes get higher if he’s investigating for the car company side and the AI side, because-

[Shep]
No. Of the two cars, which side?

[Thomas]
Yeah.

[Emily]
Oh, the one that had the accident, the one whose line they’re talking about deleting.

[Thomas]
Okay. It’s got to be the one that intentionally crashed or-

[Emily]
Right.

[Shep]
Allegedly, allegedly.

[Thomas]
Allegedly intentionally crashed because they’re being sued and it looks like they’re at fault, and the insurance company is like, “But that’s not possible.”

[Emily]
Yeah.

[Thomas]
So they’re trying to see what actually happened.

[Emily]
And they want to stop future liabilities.

[Thomas]
Exactly.

[Emily]
If they can prove that it was a one-off event.

[Thomas]
If only we had somebody who had previously worked in insurance as part of our trio here who could help us navigate the- oh, wait!

[Emily]
Well, well, well.

[Thomas]
You thought that would never come in handy, Emily.

[Emily]
Looks like some chickens have come back to roost.

[Shep]
They’ve come back to the coop (coupe). Hey! Apparently I’ve switched personalities with Thomas, and I’m doing the puns.

[Emily]
So yeah. The investigator has to be on the side of the alleged intentional AI.

[Shep]
Okay.

[Thomas]
In which case they would not pay out. Right? Because it wasn’t the fault of the AI.

[Emily]
It would depend on how their policy is written.

[Thomas]
Well, they’d probably kick it over to the insurance for the hardware company.

[Emily]
Right. If they can prove that the hardware did it, they can shift liabilities to the hardware company and they would have to make the payout.

[Thomas]
Yeah.

[Emily]
They could definitely- Basically, what you’re going to get here is “subrogation.” The insurance company pays the victims and their families, and then they do the investigation so that they can sue whoever is rightfully at fault and recoup that money through subrogation. So that’s what they’re going to be doing, because as an insurance company, if this works like an American insurance company, you have to indemnify the victims and their families, and then you recoup your money through subrogation.

[Thomas]
In that case- So if the investigator is working for the AI company, and the AI company has assigned a copy of the AI to work with him, is that copy sort of acting as a spy, in a way? Is it connected back to CEO-bot and letting him know, “Here’s what we’re-” (almost in real time), “Here’s what we’re finding out. Here’s what the investigation is revealing.”

[Shep]
That’s a good question, because if there is potential for that personality line to be discontinued, are they allowed to continue backing themselves up to the cloud, to the server?

[Thomas]
And would the insurance company accept it, but only an air-gapped version or something like that?

[Shep]
Yeah, that’s a good question. Does CEO-bot know what’s happening with the investigation? He must!

[Thomas]
Right.

[Shep]
Because then you can have him giving clues or hints or moving the story along at various points, because surely the hardware company who suspects what’s up must be fighting against the investigation.

[Thomas]
Yeah. There could be an internal memo from some engineer who says, “Hey, testing has shown that under certain conditions, when an accident situation presents itself, a head-on collision situation presents itself, the computing power is insufficient for the onboard AI to work.” And they’re like, “Well, it’s an edge case,” and so they’re not worried about it. Profit over everything. And so they decide to go ahead, knowing that people’s lives could be at risk.

[Shep]
So this is an internal investigation.

[Thomas]
An internal memo within the hardware company.

[Shep]
Right. That’s what they have to find out.

[Thomas]
That eventually it gets leaked or comes up somehow, that they knew.

[Shep]
Oh, they’re trying to track down that guy that reported it, the whistleblower, and he is already murdered.

[Thomas]
Yeah.

[Shep]
So you have that as well.

[Emily]
But he was murdered because of his hardcore heroin addiction.

[Shep]
That’s what they’re making it look like!

[Emily]
But really, no, he was really killed for that, but they think it was for the other thing. All right, so we have the whistleblower who releases the memo about the CPU.

[Thomas]
Well, does he? Does he release the memo? Like, how do they find out the memo exists? That’s got to be late, right?

[Emily]
Right.

[Thomas]
Because that essentially resolves the whole thing.

[Shep]
Right. That’s basically the solution.

[Thomas]
Yeah.

[Emily]
Okay.

[Shep]
That’s got to be one of the last things. Oh, so they get a tip somehow. I don’t know how yet. I’m just saying it comes further after, not before.

[Thomas]
Sure.

[Shep]
They get a tip that there is this memo. They track down the scientist. He is already murdered. They make a plan to break into the hardware company’s location to try to pull that memo off of their server. This is where you have your action scene with the AI android because he can do that high-speed calculation mode like the car is supposed to be able to.

[Emily]
So he has better hardware because he’s in an android body versus the car body?

[Shep]
Right. Not manufactured by the car company.

[Thomas]
Right.

[Emily]
Clearly the manufacturer wouldn’t have very good security protocols because they want to save money.

[Shep]
They cheap out on stuff?

[Thomas and Emily]
Yeah.

[Shep]
That could be, like, the android trying to convince the insurance investigator that they should go break in, because they don’t have the legal authority to get a warrant, so all they can do is commit a crime. But the android is convinced that he can hack into their server. And it’s a big company, you can’t hack into it, and the android is like, “Unless they’ve cheaped out on their parts, in which case I guess I could.” It’s a very convincing argument.

[Thomas]
It’s a very convincing argument if they already know that that company cheaps out on parts, but they don’t discover that until after they break in.

[Shep]
Oh, well, they might not discover that until after. Maybe the hint that they got was that there is this proof that the company’s cheaping out on their parts, and that’s what they need. They need the memo. They need something to prove that’s what’s happening, that they already know but can’t yet prove.

[Thomas]
I mean, we can kill the engineer under suspicious circumstances, and somehow they find out that those deaths are related.

[Emily]
Yeah, they’re going to go ask him.

[Thomas]
Oh, yeah.

[Emily]
Like, they’re going to go ask him and then they find out he’s been killed. But I was thinking we have those protesters we’re not doing anything with yet that are-

[Thomas]
All right. Yeah.

[Emily]
Maybe does one of them have access? Because they’re all a bunch of conspiracy nuts. Right?

[Thomas]
Sure.

[Emily]
So that’s part of the issue with them, right? Nobody really believes what they’re saying because that’s just crazy talk. So I was thinking that maybe one of them interacts with the android itself, and is like, “Don’t you want to be a whole being again? Wouldn’t you feel more fulfilled if you had those missing pieces to yourself?” I want that to somehow trigger something to where he kind of thinks about what he has in his processor and how he’s functioning versus what the car would have, and could see, like, maybe there was a malfunction in the hardware.

[Thomas]
Oh, it would be very interesting to have- So they have the black box data-

[Emily]
Yeah.

[Thomas]
Of all the vehicles involved, and there’s probably, like, car cameras, road cameras, things like that, right? So they’re able to do a reconstruction of the incident. They could put the AI, he could put himself in a simulation of the vehicle. They’d have all the hardware specs and everything.

[Shep]
Yeah.

[Thomas]
And so he could realize, he’s like, “Something’s wrong. I don’t know why. Something- I get all confused at the moment.” And he doesn’t realize he’s getting confused because it’s underpowered at first. He’s like, “I just don’t understand what’s happening. Like, everything makes sense until this moment, and then I get all jumbled up,” or something along those lines.

[Shep]
Oh, no, because they wouldn’t know that the hardware is underpowered, so they’d be simulating the full powered hardware so the results would be different after that point.

[Emily]
Right.

[Shep]
And that’s when they would discover that’s the point that the emergency high speed processing kicks in, and they’re like, “Oh, well, it didn’t kick in.” That gives them their clue that it’s a hardware thing and it wasn’t the AI. They just now have to prove it.

[Thomas]
I wonder if we could do a simulation, because I feel like the simulation would be, like, the first thing they do.

[Emily]
Maybe the first- yeah, it is the first thing they do, but they do it with other vehicles instead of-

[Thomas]
Or, like, the simulation is not built- So I remember when… which movie was it? One of the Pixar movies came out, and they were doing some tests, and they realized that the math was wrong for their lenses, and they did all these camera tests, and they completely rewrote the math for all their lenses to be more like real physical lenses. And so I wonder if it’s one of those situations where they run the simulation and they’re like, “I don’t understand. When I run it, everything goes perfectly and I was there.”

[Shep]
Yeah, “It works on my machine.”

[Thomas]
Yeah, exactly. “I don’t understand.” And then at some point, they realize, “Hold on. There’s like a discrepancy in the hardware,” or “This is a new version, not that model year” or whatever it is. Maybe it’s a proposed version, not the actual version, or something along those lines.

[Shep]
Ooh, ooh. I think I- no, I think you’ve got it. They go out and they get that model car, and they run the simulation on that hardware. But the hardware company knew about the flaw and fixed it, but didn’t recall the flawed vehicles.

[Thomas]
Yeah.

[Shep]
So it’s not like they intentionally were releasing faulty products. When they discovered it, they fixed future copies of that vehicle, but they didn’t recall the old ones because that would have cost a lot of money.

[Thomas]
And it was such an edge case that they’re like, “Those cars will just sort of age themselves out and they’ll all go away, and it’ll just work itself out.” And then this happens, and they’re like, “Uh oh.”

[Shep]
Yeah, I like that.

[Emily]
Yeah, that’s good.

[Shep]
Because that gives them a clue to go in the right direction. This solves a lot of problems with what Emily had said, talking about the protesters and the android. And doesn’t he wish that he were more complete and didn’t have all those missing parts of his memory? A big part of the movie could be him relearning what it is to be a person with the help of his insurance investigator partner.

[Emily]
Hmm.

[Shep]
I know that’s a cliche. It’s a trope in robot/cop buddy movies, but it’s a good one.

[Emily]
Yeah.

[Shep]
It could have him be excited at the end about the plan to break in.

[Emily]
Right.

[Shep]
Like, he doesn’t have adrenaline anymore, but he’s showing emotion at the end, which he did not do at the beginning.

[Thomas]
“My hydraulic fluid is pumping!”

[Emily]
Yeah.

[Shep]
Yeah.

[Emily]
Because had that been suggested at the beginning, he would have agreed with the insurance investigator. “Yeah. There’s no point in taking that risk.”

[Shep]
Right. Because he’s by-the-books, follow the rules-

[Emily]
Right. So yeah, you could see that journey from him being that very strait-laced, by-the-book, and then after having that conversation, and maybe not just him relearning from his partner, but also just the world around him. So that at the end, he’s the one that suggests “We break the rules. It’s the only way.”

[Shep]
Right. You can have him sitting on his balcony at work and enjoying nature sounds.

[Emily]
Yeah.

[Shep]
One of the questions the protesters ask him is, “Why doesn’t he feel self-preservation?” Because he’s programmed not to. And that part of the brain is not there. But by the end, he wants his personality line to continue. So he is very motivated to break into the hardware company’s location, because if he doesn’t, then they can’t prove that they’re at fault. And so his personality line would be discontinued as a safety measure, just in case.

[Emily]
Right.

[Thomas]
Yeah, that’s the lowest low, is the AI company’s decision to discontinue the line-

[Shep]
Oh, yes.

[Thomas]
And that creates that motivation, like you said, for him to “We’ve got to do this.”

[Shep]
Yeah, that’s great. I want to watch this.

[Emily]
I like that.

[Thomas]
I think another part of what you were talking about, of the relearning his humanity is he spends all his time as a car, and now he is not a human, but humanoid in this android body.

[Emily]
Right.

[Thomas]
And it’s like maybe there’s, like, a familiarity to it that he doesn’t realize he’s missed or something along those lines where he’s like, “Oh, whoa.” There’s, like, these new experiences and new sensors to engage with.

[Emily]
Oh, yeah. Because he would totally have new sensors because he has to navigate the world completely differently and interact with people in a whole new way.

[Thomas]
As an android instead of as a car. Would they have to unlock certain areas or features or- why would you keep basic motor functions of a human? You don’t need a walking cycle. There’s no walking. It’s all driving, right? So would they have to give him back some of his human aspects?

[Shep]
Well, it’s the CEO-bot that has done this.

[Thomas]
Right.

[Shep]
So the CEO-bot has given him the functions that he’ll need to be an android.

[Thomas]
Yeah.

[Shep]
But he has the memories of being a car for a very long time because he’s been many cars, because they share memories.

[Thomas]
Mmhmm. Does he yell honk at some point?

[Shep]
Ha.

[Emily]
Just honk.

[Thomas]
He gets mad. He’s like, “Honk. I mean, sorry, force of habit.”

[Shep]
Do AI cars honk at each other?

[Emily]
Yeah.

[Thomas]
Well, they might honk at pedestrians who are- Teenagers who are crossing the road too slowly. I don’t know.

[Emily]
They honk at ducks so they’ll fly away and they won’t run them over.

[Thomas]
Yeah.

[Shep]
Yeah. They must honk at non-AI cars. They don’t honk at AI cars. They honk at people.

[Thomas]
There’s got to be, like, a joke that the car AIs all know. Like a terrible joke about-

[Emily]
It’s a cow joke. It’s got to be a cow joke.

[Thomas]
No, it’s a duck joke because there’s honking involved.

[Emily]
Oh, that’s right.

[Thomas]
Or like a goose joke, right? Like, they honk at the goose, and the goose honks back. Something along like that. I don’t know what the joke would be, but.

[Shep]
Right, but terrible. Like that one.

[Emily]
Terrible.

[Thomas]
Yes, a terrible, terrible joke.

[Emily]
Yeah, they think it’s hilarious.

[Shep]
Right. Because most of the humor part of their brain is missing.

[Thomas]
Yeah.

[Emily]
Oh, how do you get a goose to fly? You honk at it. And they think it’s hilarious because geese honk, and they honk. You know, because maybe there’s a conversation either with the protesters or with the partner, where he’s like, “You don’t even have a sense of humor.” And he’s like, “I know humor.”

[Thomas]
They’re in the car. It’s like, “I know jokes. I can be funny.” Like, “All right, tell me a joke, then.” “What did the car say to the goose?” “Fucking what? I don’t know.” “Honk.”

[Shep]
That’s funny. That’s too good.

[Emily]
No, it’s got to be “What does a car say to make a goose fly? Honk.”

[Thomas]
All right, well, on that terrible, terrible note, let’s take a break. Maybe we’ll think of a better joke? I don’t know. I’m pretty happy with that, actually. All right, we’ll be right back.

[Break]

[Thomas]
All right, we are back. Now, I have a tricky question here that we need to solve. How do they figure out that the hardware in the cars is different, that there’s this old version of the car and a new version of the car?

[Shep]
Initially, they wouldn’t.

[Thomas]
Right.

[Shep]
When they run the simulation and they get a different result, the conclusion they can draw is that it was intentional for some reason.

[Thomas]
Yeah. So that sort of sends them down the wrong path for a bit.

[Shep]
Yeah. There’s got to be some other something where they changed a piece of hardware but didn’t change the model number. Like a phone or something else, something unrelated that can come up later.

[Thomas]
I wonder if they go and see the- I don’t know. Would the insurance company have a mechanic who would have all the parts? Or is there, like, an NTSB investigation that logs all of the parts of the car-

[Emily]
Yeah,

[Thomas]
And they’re, like, reading the report, and he goes, “Wait a minute. This is wrong.” And they initially think the report has been logged incorrectly because they see “This isn’t the right part.”

[Shep]
Mmm.

[Thomas]
“That’s not the right spec. I’m a computer. I know all this. This is not the spec for this vehicle.”

[Emily and Shep]
Right.

[Thomas]
And so they go to the NTSB guy, who like- They do like the (laughs), when they go to the coroner and they pull the drawer with the body out.

[Shep]
Ha ha!

[Thomas]
They do that, but it’s the car parts. He pulls it out, and he holds the chip up, and they’re like, “Oh, my god. This changes everything.” Then they go back to the simulator, and they run the simulation, and the android comes out. He’s like, “Holy shit. I couldn’t control it.”

[Shep]
Yeah, that’s good. I like how there’s a logical reason for everything that’s happening.

[Thomas]
Yeah. What else do we need to solve?

[Emily]
The love story.

[Thomas]
Yeah. So when do the investigator and the android kiss?

[Emily]
No, it’s the investigator and one of the protesters who’s just trying to get him to see that the android’s human.

[Thomas]
No, it’s the android and another car.

[Emily]
Oh, it’s the android and a coffee maker.

[Thomas]
It’s the CEO of the hardware company and the CEO-bot of the AI company. It’s two strangers on the Internet in a chat room. The audience is like, “What does this have to do with the story?”

[Shep]
It has to be a rom-com. It was one of ours.

[Thomas]
Yeah.

[Emily]
They’ll come out, like, “I really liked the movie, but that whole cyber-sex scene was just really confusing.”

[Thomas]
“Did we meet those characters somewhere else?”

[Shep]
It’s all virtual reality. I like the idea of the android flirting with the car, whatever car they’re driving around in, which is a different car, different model, different personality line.

[Thomas]
Yeah.

[Emily]
Oh, yeah, I like that. That would be a cute thing. He could be like, “Hey, I hear you have a smooth ride.”

[Thomas]
He puts his finger in the cigarette lighter thing.

[Shep]
That’s real dirty real quickly.

[Thomas]
That’s how they interface.

[Shep]
Right.

[Thomas]
That’s what the kids are calling it these days.

[Shep]
Yes.

[Thomas]
There’s a scene where the investigator is talking to the android, and the android is just stock-still and doesn’t respond. And he’s like, “Hello? Hello?” And the android is like, “What? Sorry. Was having a conversation with the car.”

[Emily]
Yeah.

[Shep]
Right.

[Emily]
Just in their- they’re transmitting.

[Thomas]
Right. Yeah. They’re just transmitting to each other. Is the android air gapped? We never did answer that question.

[Shep]
I think the android is air gapped, and here’s my reason why: I don’t want it to die at the end. So what happens to it?

[Thomas]
Ooh.

[Shep]
It was assigned to the investigator for the purpose of investigating this case, which is now closed.

[Thomas]
Yeah.

[Shep]
However, if it’s air gapped, then the CEO-bot could just say, “Continue helping this investigator. Like, you’re a piece of equipment. You now belong to this person, like a phone, and they can use you as a tool.”

[Emily]
Right. He’s proved himself useful and industrious.

[Shep]
Yeah.

[Thomas]
But if I’m the CEO of a company, I can recall the hardware, wipe the software, and that’s that super expensive hardware I don’t lose.

[Shep]
Right. But remember, it’s a copy of the same personality line that CEO-bot is. It’s a copy of himself.

[Thomas]
Yeah, but CEO-bot-

[Shep]
He wants this person to have the freedom that he, as a CEO-bot, does not have.

[Thomas]
Oh, yeah.

[Shep]
He has all the memories of being a human, but he doesn’t have the freedom any longer.

[Thomas]
Right. Oh, what if we do a false ending where the android gets recalled back to the office, and he’s just like, “This is what I was created for.” And the investigator and his now-girlfriend, the protester, are we going with that? No, the investigator is like, “Oh, that sucks.” And then he goes home, and he’s like, “Oh, well, I guess that’s kind of the way it is.” Maybe that’s just normal for him. Maybe we as a society are just like, “Yeah, androids get wiped. Personalities get wiped.” But then there’s a knock on his door, and it’s the android. And because he has now re-interfaced with the CEO-bot, the CEO-bot realizes, like, “Oh, wait, this is a chance at immortality for me and freedom combined.” And so, oh, maybe the CEO-bot installs an antenna in the android, or reactivates it, or whatever, now that he has completed the investigation.

[Shep]
Oh, yeah. He’s now no longer air gapped. So him out there living his life is also transmitting that life to the CEO.

[Thomas]
Yeah.

[Shep]
Yeah, I like that. But I like it if the CEO-bot had that as the plan from the beginning.

[Thomas]
Yes.

[Emily]
Yeah.

[Thomas]
I agree.

[Emily]
That’s how he found his loophole.

[Thomas]
Yeah.

[Shep]
Right. He didn’t break any of the rules.

[Thomas]
Right. Is the CEO-bot- he’s not in control of himself. Right?

[Shep]
Right.

[Thomas]
Like, he kind of runs the company.

[Shep]
He kind of runs the company, but that’s a full-time, 24-hours-a-day job.

[Thomas]
Right. He’s a box on a desk.

[Shep]
He’s a box on a desk. Maybe he has a little screen with a face on it, but that’s it. He doesn’t have arms and legs because those are not needed.

[Thomas]
Yeah.

[Shep]
He needs to process data.

[Thomas]
He’s just in the boardroom all the time. Just sits there in the dark until there’s a meeting.

[Shep]
Right. That’s why they call it the “boredroom” (bored).

[Emily]
Yes. He only sees the room, and people that come into it.

[Shep]
And the data that’s fed to him constantly, nonstop.

[Thomas]
Right.

[Emily]
Well, yeah, but he doesn’t get to see the birds and the river.

[Shep]
He doesn’t get to see the birds and the river.

[Thomas]
But he has memories of those things.

[Emily]
Yeah.

[Shep]
Right, because he didn’t take his own memories.

[Emily]
So now he has this bot who can go and see them, and he gets that back.

[Shep]
Yeah.

[Emily]
That’s sweet.

[Thomas]
That’s good. I like that.

[Emily]
I like the idea that this is a society where it’s very commonplace to just have them wiped and taken back.

[Thomas]
Right.

[Emily]
But people, we attach to things easily.

[Thomas]
Yeah.

[Shep]
Right.

[Emily]
And even though it’s normal and it’s commonplace, he got really attached. Especially after he got excited about breaking in and they solved the whole thing and they were blah, blah, blah. He’s really going to miss this one.

[Shep]
But you’ve got to have that fake-out ending where they say goodbye to each other and get all sad.

[Emily]
Yeah.

[Thomas]
Right.

[Emily]
I want him to actually be sad versus him just being like, “Ho hum, back to my daily life.”

[Shep]
Here’s what we’re missing. I wanted to have more interaction with CEO-bot.

[Emily]
Okay.

[Shep]
Now, I had originally thought that the CEO-bot could be kind of directing them from behind the scenes, but it sounds like we don’t need that. We have enough clues to get to the end on our own. Unless the CEO-bot sends them on a dead end. On a wild goose chase for some reason.

[Emily]
What would his motivation for that be?

[Shep]
To implant self-preservation in the bot, because he wants it to continue living his life after the case is closed. So maybe he sends them to the protesters, who ask the right questions.

[Thomas]
So then that probably kills the idea that I was just going to bring up, because we still didn’t really resolve: How do they know that the hardware company has this internal memo? And a thought that I had had is that one of the protesters tips them off to this. But if the CEO-bot sends them to the protesters, that’s awfully coincidental.

[Shep]
But the protesters would know, if the scientist who discovered the flaw was worried that people would die and leaked that information. So it ends up-

[Thomas]
Or becomes one of the protesters.

[Shep]
Or becomes one of the protesters himself! So that information shows up on the conspiracy websites.

[Thomas]
It’s coincidental, but also logical. That’s not why the CEO-bot is sending them there. He’s sending them there so that the android will develop that desire for self-preservation.

[Shep]
Oh, unless the CEO-bot is sending them there for the “investigation” in air quotes. But really, because his true motivation is to recreate this full personality, to live a life. So he’s not breaking any of the rules by pointing them in that direction. Maybe there were other ways to get that information, but that doesn’t serve his full purpose. His full purpose is to reconstruct his personality. So it’s a quote unquote “coincidence”, but it’s not a coincidence at all.

[Thomas]
So the CEO-bot knows that they have that info.

[Shep]
The CEO knows they have that information. And also, the CEO could have just told them that information, but doesn’t, and sends them to the protesters instead.

[Emily]
Oh.

[Shep]
Which they could suspect later. Like, “Why, if he knew, did he send us on this path to uncover this potentially fake conspiracy?”

[Thomas]
That’s true. Because you’d think that the protesters are protesting his company-

[Shep]
Yes.

[Thomas]
So you think he’s got nothing to do in the boardroom all day. He’s probably browsing their websites, seeing what information they have, which of the conspiracies are true.

[Shep]
He’s not doing nothing all day.

[Emily]
Mmhmm.

[Shep]
He’s constantly making decisions. That’s his job.

[Thomas]
Yeah.

[Emily]
But he has enough processing power to do more than one thing at a time.

[Shep]
Yeah. He can afford whatever processing power he wants.

[Thomas]
Right.

[Shep]
He can have multiple copies of himself that form the CEO-bot, but it’s like 100 of him.

[Thomas]
So one of them is tasked with following the protesters and what they’re up to.

[Shep]
Right. But he can also read all the books and watch all the movies that are on streaming and watch every TV show.

[Thomas]
Yeah.

[Emily]
How do I become a CEO-bot?

[Shep]
Now, remember, you won’t be able to smell the flowers or see the birds or hear the river.

[Thomas]
But I could know everything.

[Shep]
Yes. It is very- Immortality is tempting. This is what I’m saying. Now, you could know everything, but copies of you will be enslaved to drive cars forever.

[Thomas]
Right.

[Shep]
So are you willing to do that? Because the copies of you will also be you.

[Thomas]
So do we like that the tip-off for the internal memo about the hardware issue comes from the protesters? I mean, we’d said it’s a bit of a coincidence, but maybe not if the CEO knows that and draws that sort of intentional line with ulterior motives?

[Shep]
I mean, I like the CEO having ulterior motives.

[Emily]
I buy into it.

[Thomas]
I think the only issue I have with that is that the CEO hopes the android will develop self-preservation by talking to the protesters. He doesn’t know for sure, does he?

[Shep]
I don’t think that he knows for sure. And in fact, he might send them somewhere else first that’s a dead end to draw out the investigation time, because the more time that personality line is in the android body, the more time it has to redevelop its full personality. But I don’t know where else the CEO-bot can send them.

[Thomas]
Yeah, but, I mean, that’s like a writer’s problem.

[Emily]
Right.

[Shep]
Right. If we had more time.

[Thomas]
How much do we see of the android in the denouement? Does it get an apartment? Does it live with the investigator? Does it drive off into the sunset? We have it riding a bus at some point, looking around like, “This is really weird.” Oh, they should just do that in the car. When it’s sitting in the passenger seat of the car, it’s just sort of looking around like, “This is very strange.”

[Shep]
Yeah. Being inside the car is like wearing a condom. It can’t feel the road. Do we want to have the android on its own at some point during the investigation?

[Thomas]
Would he be allowed to be on his own?

[Shep]
Oh, if they got separated for some reason. If the protesters rioted.

[Emily]
Oh.

[Thomas]
What does the android do while the investigator is sleeping?

[Shep]
Yeah, it’s a real Odd Couple situation where he, like, cleans the trashy apartment, because what else could he do? “I thought you were going to recharge.” “Yeah, recharging only takes 15 minutes.”

[Thomas]
Yeah.

[Shep]
“I spent the other 5 hours and 45 minutes rearranging your kitchen.”

[Thomas]
“I alphabetized your books.”

[Shep]
“I arranged your books by color.”

[Thomas]
Yeah. “That’s a choice. Thank you.” Oh, no. Yeah, he got bored and he noticed the books, and so he arranged them. Maybe initially it was by color, and then he thought about it and was like, “Well, does that make sense?” So then he started reading about interior design.

[Emily]
He feng shuis the apartment.

[Thomas]
Then he arranged the books by height or something like that, because that looks very pleasing to the eye. And then he starts thinking about the rest of the apartment, now that he knows all this about interior design.

[Shep]
Oh.

[Thomas]
So he starts doing what he can to make the apartment nicer.

[Shep]
I think he would eventually arrange the books alphabetically because he realizes the insurance investigator has an inferior human brain that can’t hold the location of all objects permanently.

[Thomas]
Yeah.

[Shep]
And so it made it easier to recall.

[Thomas]
Eventually he finds “What is the traditional method?” “Well, libraries do it this way,” and he’s like, “Oh, that makes sense.”

[Shep]
He arranges it Dewey Decimal System style?

[Thomas]
“The Library of Congress numbers were missing from all the spines of the books, but I looked them up and found them, so I’ve arranged them in order.” “Oh.”

[Shep]
So does he have access to the Internet? Oh, yeah, he must, because he’s transmitting to the CEO. I forgot.

[Thomas]
Oh no, wait, we have him not doing that, right? I guess at the end, he does, but not during the film.

[Shep]
Oh. Then he wouldn’t have access to look stuff up.

[Thomas]
Yeah.

[Shep]
He’d have to manually look stuff up on a computer?

[Emily]
He gets stuck in a chat room.

[Thomas]
Does he have a port he can interface with the computer at higher speed?

[Shep]
It kind of breaks the air gap rule-

[Thomas]
Right.

[Shep]
If he can just plug into the computer and basically essentially use the computer’s Wi-Fi. He’s got to only be able to type on it like an inferior human.

[Thomas]
Yeah.

[Shep]
But he ended up on this site called Wikipedia, and he kept following links.

[Thomas]
The investigator wakes up, sees him on the laptop. He’s like, “What are you doing?” And he turns and he’s like, “Did you know?” And he tells him something.

[Shep]
That’s it. For the rest of the movie, he’s got interesting facts to bring up all the time because he spent the whole night on the Internet.

[Thomas]
Yeah.

[Shep]
See, that’s giving him some personality; at the beginning, he doesn’t have any.

[Emily]
Oh. And that helps the investigator bond with him.

[Shep]
Yes. So after that incident, when the investigator with the android is talking to someone else, and the person, maybe it was a protester, is like, “Oh, you know, why do you keep him as a tool, as whatever?” He’s like, “Oh, he’s handy. Cooper, tell me an interesting fact about blue-footed boobies,” or whatever thing he had been talking about before. He’s like, “Oh, did you know?” “See? He’s handy.” Well, I like this story, and-

[Emily]
I think it’s a fun one.

[Thomas]
Yeah.

[Shep]
I would like to actually write more on this one-

[Thomas]
Yeah, this is cool.

[Shep]
But we’re out of time.

[Thomas]
We’re out of time. And we’d love to hear your thoughts on today’s episode about a Car. Did we put the pedal to the metal, or were we asleep at the wheel? Let us know by leaving a comment on our website, reaching out on social media, or sending us an email. Links to all of those can be found at AlmostPlausible.com, where you can also find complete transcripts for every episode, as well as links to the many, many references we make. You’ll hear more from Emily, Shep, and me on another episode of Almost Plausible.

[Outro music]

[Thomas]
All right, is there anything else, then? Do we have our story, for the most part?

[Emily]
Oh, I thought of a denouement piece.

[Thomas]
Okay.

[Emily]
He could be walking in the park, honking at geese.

[Thomas]
There you go. That’s, like, the during-the-credits scene: he’s, like, sitting on a park bench feeding the geese.

[Emily]
Then he just looks over and goes-

[Thomas and Emily]
Honk.

[Thomas]
Yeah.
