In life, it’s a good idea to address immediate threats while keeping an eye on long-term risks. So I was glad to see a paper by Seth Baum entitled “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence.”
As Baum explains, there are two camps of people concerned about AI’s impact. The first, which Baum calls the “presentists,” believe that money and attention should go toward addressing the impact of AI that is already widely in use. The second, the “futurists,” believe we should focus more time and energy on potential threats posed by AI capabilities that don’t yet exist but, if and when they are developed, will have a transformative and perhaps disfiguring effect on humanity. The presentist camp draws more economists, elected officials, and legal scholars; the futurist camp draws more philosophers and AI engineers. Baum tries to reconcile the factions by proposing two new ones: “intellectualists,” who believe in creating AI for its own sake, and “societalists,” who believe AI should be developed for the benefit of society.
I appreciate Baum’s attempt to bridge the divide. But I also see room for a hybrid approach that borrows from both presentists and futurists: one that focuses on long-term time horizons but makes no assumption that we’ll see theoretical or scientific breakthroughs in the next several decades. Instead, this approach would analyze the long-term impact of technical capabilities that exist today but haven’t yet spread through society widely enough for their impact to be felt. I haven’t seen such an approach explored in any of the literature I’ve read.
I like the futurists’ focus on transformation; the Industrial Revolution showed us that technology can change society beyond recognition in a short space of time, and I expect information technology to transform society at least as much. But most futurist analyses of AI are based on the risks of developing artificial general intelligence or superintelligence. While I don’t dismiss that possibility, it’s extremely uncertain. It’s worth time and money to keep thinking about AGI, superintelligence, and the scenarios that could arise from them. But there is little that policymakers or other political actors can do about it before it gets closer to reality. And thinking only about AGI and superintelligence discounts the possibility that very powerful yet narrow applications of AI could themselves have a transformative impact.
So the presentists’ focus on the impact of narrow AI applications that already exist is appealing. But most presentists I’ve read or listened to shy away from all but the shortest time horizons. There is a lot of great scholarship on how to address the needs of gig-economy workers and how to reduce algorithmic bias. But in their focus on immediate issues, the presentists seem to rule out the possibility that AI might introduce deep changes to social organization, the mandate of government, and the human experience. To me, that is unimaginative and shortsighted.
I think the impact of even today’s AI capabilities on the way people work, think, and interact will be immense. Many capabilities developed in lab settings have yet to make their impact felt, and we will see much larger changes as they move into the marketplace and seep through society. It would be interesting to see a study that takes today’s state of the art in AI and traces, over a 30-year horizon, what happens as it becomes cheaper, easier to use, and more applicable to everyday business and personal situations.
For example, can the recent breakthroughs in deep learning that have attracted so much attention, like AlphaGo, be adapted to business-relevant applications? What would the ripple effects be? And will it eventually be possible to build and run AlphaGo-comparable programs at a lower cost in manpower and hardware? This question may reflect a misunderstanding of how the technology works, in which case I’d love to be corrected.
There will also almost certainly be a greater long-term effect of existing AI capabilities on children who grow up with them than on adults who encountered them later in life. What will be the psychological impact, in adulthood, on infants and children who grow up with voice-responsive smart-home technology like Alexa?
Moreover, as far as I know, many narrow AI capabilities can continue to advance without major theoretical breakthroughs. And as these capabilities are fine-tuned and expanded, I think we can expect even greater transformations. Everything about ourselves and everything that surrounds us can, in theory, be measured, and therefore described by data.
As the ability of programs to measure and analyze the world continues to increase, I think it will lead not just to new kinds of work and entertainment, but to new ideologies, new forms of spirituality, and new ways of thinking about what it means to be human. And this can take place without the theoretical breakthroughs needed to create AGI or superintelligence.
Both presentists and futurists should keep doing what they’re doing; both are tremendously important. But personally, I want to combine the futurists’ forward-looking orientation with the presentists’ skepticism about superintelligence. That path could yield a vision of the future that is transformational, yet measured enough to minimize the chance of a big intellectual swing and a miss. We can stay mindful of the possibility of radical transformations without banking on a specific technology that isn’t close to existing yet.