Even though my sports coat was made of sweatshirt fabric, I felt totally overdressed as I walked into the Friday night opening reception of the Effective Altruism Global Conference last week. Sneakers, t-shirts, and windbreakers prevailed among the other attendees, and there was a much higher concentration of beards and ponytails than even in the general population of so famously hipster a city as San Francisco.
Hoping to ease my transition into this unfamiliar social situation, I looked around for an alcoholic beverage. That was when I noticed the disturbingly high number of Cokes and Sprites in the hands of fellow attendees. After a few minutes, the glint of aluminum cans on a long black table finally caught my eye from across the hangar. I walked towards it eagerly, hoping that a wide bucket of ice with two bottles of cheap sauvignon blanc, or perhaps even a keg of Keystone and a stack of red cups, would come into focus. No such luck. As I closed in on the beverage array, my worst fears were confirmed: EA Global was bone dry.
But after getting over it and braving some alcohol-free conversation, I could see why the event's organizers didn't bother with booze. This was not a collection of casually interested people who wanted to learn a thing or two. It was more like a religious gathering. Being a practitioner of Effective Altruism, wherein career choices and charitable giving are quantitatively analyzed to the fullest extent to ensure that you are saving as many human lives as possible, was a major part of the identity of most of the people I met there. They were excited to meet one another.
The effective altruists represented a menagerie of extremely well-credentialed professionals. One of the first people I spoke with had majored in economics, earned a master's in something called biomathematics, and is currently getting his doctorate in philosophy. There were deep learning experts, think tank research fellows, machine learning startup founders, and scholars from every field imaginable. Many of the people I spoke with work at organizations with names like the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk. These people think SUPER big picture. They are focused on mitigating existential risks to humanity, such as pandemics, nuclear weapons, and superintelligence.
And no matter what you think of the idea of superintelligence, the speakers I saw were undeniably some of the best-qualified people in the world to speak about AI. The Future of Humanity Institute is led by Nick Bostrom, author of Superintelligence, and three people from FHI delivered presentations. Additional talks were given by engineers at DeepMind and researchers from the Machine Intelligence Research Institute. There aren't many people on the planet who work directly on building AI or understanding its implications, and they were heavily represented at EA Global.
In contrast, I felt completely underdressed when I walked into a different event about AI and government the following Tuesday in my jeans and a gray henley. Hosted by the US Chamber of Commerce's Technology Engagement Center at a place called the NASDAQ Entrepreneurial Center, this event was for a fancy crowd in fancy attire. The floor was white and gleaming, the stage was backed by a floor-to-ceiling screen onto which was projected a futuristic mesh-network design in the shape of a human brain, and the pinot noir was served in tall stemmed glasses (and was very good).
I didn't have much time to network before or after the panel discussion, but from what I overheard, these attendees were tech and finance industry employees. The panelists were all very smart and accomplished, but compared to the speakers at EA Global, their qualifications to speak on AI and its potential need for regulation were uneven. At the top of the pile was Tom Kalil, formerly of the White House Office of Science and Technology Policy and currently an advisor to the Eric and Wendy Schmidt Group. Having advised two presidents, he has spent decades thinking strategically and long-term about how tech trends affect the public interest.
Next on the panel was Lisa Hawke. As Director of Policy and Compliance at a company called Everlaw that uses machine learning in its products, she's certainly in a position to know something about AI itself and AI policy, at least in the context of short-term business concerns. But it doesn't seem to be her job to have a holistic or long-term understanding of AI and its impact on things beyond her industry. The same goes for James Cham, the third panelist, though as an investor at Bloomberg Beta, he probably looks at AI's development and impact with a slightly wider lens. And since he's served on the SHIFT Commission, at least I know he's given some thought to policy questions that don't directly affect business. Finally, the panel moderator was Hannah Kuchler, a tech reporter from the Financial Times.
AI is a very trendy topic right now, befitting this event's decor and attendees. The attention is deserved: AI's potential impact needs greater discussion among businesspeople and policymakers, which is one of the main reasons I'm writing this blog. So I was hoping that the panel would cut through the trendiness and provide insight.
Unfortunately, the panel discussion ended up being more a celebration of AI's trendiness than a reasoned exploration of it. There were restatements of vague concerns about AI, with mentions of Superintelligence, AlphaGo, automation, and algorithmic bias, but without much clear explanation of any of them or of how they are or aren't related. The conflation of distinct AI risks into a generalized panic about machines getting smarter is something that bothers me a lot, and I was sad to see it reinforced here.
Unlike at the NASDAQ event, I felt smarter after attending EA Global. The speakers had a deep understanding of AI and the associated policy issues. Their breakdown of these issues into short-term and long-term categories was extremely helpful for thinking about possible solutions, and timelines, for different AI risks.
But the effective altruists' focus on existential risks did feel a bit alien to me, in the same way that it felt alien to have a dry reception at a ticketed conference. In considering potential harm from AI, most people I spoke with were worried about artificial general intelligence – machines with human-level intelligence. And I'm not as interested in focusing most of my energy on a risk that's predicated on a technology that, while theoretically possible, we're nowhere close to achieving.
I'd prefer to focus on the potentially transformative impact of AI capabilities that already exist, or that we are on the cusp of discovering. In many ways, these capabilities have yet to change our economy and the way we live; that will start to happen as they evolve from theoretical breakthroughs into marketable technology.
AI can still be transformative even without further breakthroughs towards AGI. My goal is to merge the expertise and understanding of the attendees at EA Global with the more down-to-earth timescale of the folks at the Discussion on AI and Government Regulation. That’s something I’ll be writing more about in the next few months.