AI as Collaborative Tool, Not Replacement
Thinking aloud and putting words on the page about AI as a collaborative tool, not a replacement, mainly in the space of self-driving cars.
This is a rough post where I'm primarily thinking aloud and putting words on the page.
One thing I've thought (and tweeted) a lot about over the years is the idea that when we try to develop AI tools that fully replace humans, we're doing the wrong thing, and not just in the ethical "AI will replace us / replace our jobs" sense.
With this assertion comes a question: can AI actually replace us? One of the areas where this often comes up is self-driving cars: can we actually build a car that's fully autonomous? About a decade ago, many optimistic technologists believed so, and believed that by now we'd have made it happen. The reality is that this has been much harder than expected, and a lot of self-driving car technology still only performs well within a very narrowly constrained set of tasks and environments.
There have been plenty of discussions over the years about AI's capabilities and about job replacement, but I've always taken a slightly different approach to the black-and-white "AI can replace us" / "AI can't replace us" dichotomy. I advocate for the stance that AI can serve as a collaborative tool that helps us with the tasks we perform: instead of asking questions about replacement, we should be asking questions about collaboration.
This doesn't come from nowhere. Back in 2016, when I was finishing up my Master's in Artificial Intelligence, I was doing a lot of reading on state-of-the-art game-playing AI, and it turned out that when an AI agent and a human played together, the combination could outperform solo AI agents. I need to go back and dig up the citations and see how this space has evolved over the years, but I have a suspicion that this still holds true. In 2016, folks were obsessed with AlphaGo, the first AI system to beat a professional human player at the game of Go. If I recall correctly, an AlphaGo + human team could still outperform even an AlphaGo + AlphaGo team.
So that brings me to this idea: what if we're doing it all wrong by framing the task as building AI to replace humans, whether in driving or in automating certain jobs? What if, from the start, we directed our efforts toward building AI tools and systems that aid us in what we do?
If we can show that there are actual performance benefits to this – to building systems that are collaborative instead of AI-only – then chasing some apartheid-rich idiot's pipe dream of an AI-only car is a suboptimal path. We're literally making technology that's worse just because a mis- or underinformed person thinks that AI-only is better or more futuristic.
I think there are other benefits, beyond better performance, to designing for collaborative AI + human systems. For one, we'd spend a lot less time debating whether people are being replaced. Instead of thinking about AI as automating away jobs, we could actually begin to talk about AI augmenting jobs and making them easier or better (which, of course, is tied up with a whole tangle of other labor issues).

There might also be less resistance to adopting these technologies. We already have "smart" technologies that assist us on the road: blind-spot detection that spots cars in the lanes next to us, and lane-keeping assistance that helps make sure we don't accidentally drift out of our lane. And you know what? Some of this is accomplished by AI. But somehow, when you talk about it as "lane detection" rather than "AI," it feels more grounded and acceptable (a sketch of what that concretely looks like is below). We don't get pulled into conversations about sentience, and we can better grasp the technology for what it is. These technologies already exist, are already deployed in cars today, and are the same technologies that make things like Autopilot possible.

We can still pursue the advancement of AI-equipped cars! And I think we should, because there are real possibilities for safer vehicular travel if we do. But we shouldn't think of the AI system as some sort of sentient driver that takes over for us. We should design it, and think of it, as a tool for assisting us in our driving. Very likely, the technology can't actually replace us, and by believing that it can, we become worse drivers on the road. This shift in how we think about the technology can help us be safer drivers, set clearer goals for how we develop AI-powered cars, make adoption easier (let's call the specific technologies what they are, instead of calling them "AI" broadly), and address a lot of the accountability and responsibility questions as well.
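To make the "call the technology what it is" point a bit more concrete, here's a minimal sketch of the classic computer-vision recipe for lane detection: Canny edge detection followed by a Hough transform, via OpenCV. The image path and every threshold here are placeholder assumptions on my part, and real driver-assistance systems are far more sophisticated, but it shows how "lane detection" names an inspectable pipeline rather than a sentient driver.

```python
# A minimal, illustrative lane-detection sketch (not a production ADAS system).
# Assumes OpenCV and NumPy are installed; "road.jpg" is a placeholder image path.
import cv2
import numpy as np

def detect_lane_lines(image_path: str):
    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(f"Could not read {image_path}")

    # 1. Grayscale + blur to suppress noise before edge detection.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # 2. Canny edge detection; the thresholds are arbitrary tuning parameters.
    edges = cv2.Canny(blurred, 50, 150)

    # 3. Keep only the lower half of the frame, where lane markings usually appear.
    height, width = edges.shape
    mask = np.zeros_like(edges)
    mask[height // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # 4. Probabilistic Hough transform: find line segments among the remaining edges.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(line[0]) for line in lines]

if __name__ == "__main__":
    for x1, y1, x2, y2 in detect_lane_lines("road.jpg"):
        print(f"lane segment: ({x1}, {y1}) -> ({x2}, {y2})")
```

Something like this is plain, well-understood engineering: you can point at the thresholds, the region of interest, and the Hough parameters, and reason about exactly where it fails.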
One last thing: I was talking to a lead researcher at one of the self-driving car companies (I can't say who, because it could get them into NDA trouble), and one of the things they told me was that while people often think of an AI self-driving car's capabilities as a superset that fully encompasses all the skills human drivers have, this is actually not true. What's more accurate is that it looks like a Venn diagram: there are things the AI technologies are better at, and there are things humans are better at. And though the AI technologies might cover much of what we can do, they don't cover all of it. There are still tasks humans can do that the AI technologies cannot (perhaps because so much of driving is a human endeavor, which I've wanted to write about in a separate post for years now), and a big ethical fear among researchers is developing an autonomous vehicle that ends up crashing because of a sliver of error that a human could have prevented.

This is why I return to my assertion that we should develop systems that incorporate both, so that we cover as much ground as possible, instead of treating it as either-or. I think about this in relation to the new Apple Car idea – one that reportedly comes without a steering wheel – which makes me feel like we're going down the wrong path and need to come back, for the sake of better performance, safety, and more.
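To illustrate the Venn diagram point in the simplest possible terms, here's a toy sketch where capabilities are just sets of driving situations. The situations and who handles them are entirely made up by me for illustration; the only point is that when the sets partially overlap, the union (the collaborative system) covers strictly more than either side alone.

```python
# A toy model of the Venn-diagram idea: capabilities as sets of driving situations.
# The situation names and set memberships are invented purely for illustration.
ai_handles = {
    "highway cruising", "precise lane keeping",
    "constant 360-degree monitoring", "reaction-time-critical braking",
}
human_handles = {
    "highway cruising", "precise lane keeping",
    "hand signals from a traffic officer", "eye contact with a pedestrian",
    "improvising around an unmapped construction zone",
}

only_ai = ai_handles - human_handles      # where the AI is stronger
only_human = human_handles - ai_handles   # where the human is stronger
team = ai_handles | human_handles         # what a collaborative system covers

print(f"AI alone covers {len(ai_handles)} situations")
print(f"Human alone covers {len(human_handles)} situations")
print(f"AI + human team covers {len(team)} situations")
print("Covered only by the AI:", sorted(only_ai))
print("Covered only by the human:", sorted(only_human))
```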