My 2017 So-Far Reading List

A few people recently asked me what I’ve been reading to stay informed on all this technology / public interest stuff I’ve been fulminating about for the last six months. So today I present to you a list of all the research papers, and some of the blog posts, I’ve consumed in 2017, with a summarizing thought about each.

As you can see, I’ve read a lot about artificial intelligence. But I don’t want to focus on AI exclusively.

What I’m interested in is how information technology, broadly speaking, transforms the way we live, work, and think. Information technology is a factor in many of the most glaring problems we face, including inequality, privacy, and polarization. I’m interested in AI insofar as it represents the expanding reach of information technology’s impact.

Despite the many papers on this list about superintelligence, it’s become a background feature in the landscape of my concerns about the future. The implications of superintelligence are dire enough that we should still pay attention to it, but it remains in the background because of the high level of uncertainty that surrounds it, and because many scientific breakthroughs are still necessary for it to become realistic.

Foreground features are scenarios that have a little more certainty. The one area I’ve spent the most time trying to understand is the impact of algorithms on the economy: the potential for automation, skills-biased technical change, and the implications of value destruction in software-disrupted industries.

In the next few months, I’m going to focus more on understanding the long-term, systemic implications of algorithmic capabilities that either already exist or can be created without any theoretical scientific breakthroughs. And I’m going to ponder the kinds of institutions we should look to build, given how IT has transformed the 20th-century world everyone liked so much. This Stratechery post from last year is an example of the kind of analysis I want to do, though I don’t agree with everything in it.

Without further ado, here’s my reading list sorted by topic:


AI, general

“Artificial Intelligence and Life in 2030,” Stanford University One Hundred Year Study Panel on Artificial Intelligence

This report, which was authored by over 20 AI experts in fields from computer science to law, gives an overview of AI technology and near-term AI policy issues. The authors take a tough line against fear-mongering about AI’s potential impact, which they think will lead to overregulation. And when regulation is necessary, they recommend government issue broad mandates, with strict transparency requirements and tough enforcement, as opposed to detailed rules.

“Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Matthew Scherer, Spring 2016

Weighs the pros and cons of different approaches to regulating AI applications of all kinds: more or less interventionist, and whether regulation by legislative, executive, or judicial means makes the most sense. Proposes the creation of an AI regulatory agency responsible for certifying AI systems offered for commercial sale as “safe, secure, susceptible to human control, and aligned with human interests.” Uncertified AI systems would not be banned, but would be subject to much greater levels of liability. I liked this proposal’s attempt to strike a balance between promoting safety and not stifling innovation. Extra credit to the author for thinking through a specific proposal, instead of the “raising of questions that need to be answered” or “descriptions of high-level principles” that dominate AI policy thinking.

“Big Data and Artificial Intelligence: The Human Rights Dimension for Business,” Official Conference Notes, February 2017

Summary of a conference about AI and corporate social responsibility. Concluded that if industry is going to avoid government interference, it will have to come up with its own standards for ensuring that AI programs serve the common good and reflect values shared across different cultures. This dovetails nicely with the “Artificial Intelligence and Life in 2030” paper’s advocacy for broad legal mandates that stimulate proactive self-regulation by businesses.

“Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence,” Seth Baum, May 2017

Among AI experts, there are those who are seriously worried about superintelligence ending humanity, and those who think superintelligence is barely more imminent than teleportation. The two groups seem to have contempt for one another. This paper wants everyone to stop fighting and be friends. I think that’s a good idea.
 

“Artificial Intelligence Policy: a Roadmap” (Draft), Ryan Calo, August 2017

An overview of ways AI could transform society in the medium term and the questions government needs to answer before regulating it. Doesn’t contain recommendations for specific regulatory approaches. Raises the question of whether we should be concerned about the danger of superintelligence, only to dismiss it as equivalent to “focusing on Skynet or HAL.”
 

Advanced AI, AGI, and Superintelligence

“Machine Super Intelligence,” Shane Legg, June 2008

Intelligence is notoriously hard to define, but in his dissertation, one of DeepMind’s co-founders takes a really good crack at it. His mathematical definition of intelligence (and of AI) is extremely well thought through and applies in almost any situation I can think of.
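For reference, the dissertation’s central formula, as I understand it, scores an agent π by its expected reward summed over all computable environments, each weighted by its simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here K(μ) is the Kolmogorov complexity of environment μ and V is the agent’s expected cumulative reward in that environment. Simpler environments count for more, so an agent scores well by performing well across many simple worlds rather than by mastering one complex one.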

“Racing to the Precipice: a Model of Artificial Intelligence Development,” Stuart Armstrong, Nick Bostrom, Carl Shulman, October 2013

Nick Bostrom’s works are what Christopher Nolan’s movies aspire to be but never are: abstract, theoretical, and logically sound. That makes his work really interesting, even if I wish it were a little more down to earth sometimes. In this case, he and his co-authors model the optimal conditions for multiple teams working on creating superintelligence while minimizing the chance of existential risk. Factors include the benefit of risk-taking vs. skill level in building AI; the level of enmity among competing teams; and the amount of information sharing among teams. Counter-intuitively, the more teams know about one another’s progress, the more likely one of them is to scrap safeguards, increasing the risk of a catastrophic accident.
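That counterintuitive result can be illustrated with a toy simulation of my own (not the authors’ actual model): each team draws a random capability, safety spending slows a team down one-for-one, and a team that can see its rivals’ progress keeps only as much safety as its lead allows.

```python
import random

def simulate(n_teams=5, trials=3000, info=True, default_safety=0.5, seed=0):
    """Average catastrophe risk in a toy AI race.

    With info=True, the leading team knows its margin over the runner-up
    and keeps only that much safety. With info=False, every team sticks
    to a fixed safety level regardless of the race.
    """
    rng = random.Random(seed)
    total_risk = 0.0
    for _ in range(trials):
        caps = sorted(rng.random() for _ in range(n_teams))
        lead = caps[-1] - caps[-2]          # leader's margin over runner-up
        safety = min(1.0, lead) if info else default_safety
        total_risk += 1.0 - safety          # winner's accident probability
    return total_risk / trials

risk_with_info = simulate(info=True)
risk_blind = simulate(info=False)
# In this toy, visibility into rivals' progress drives safety down, risk up.
```

The mechanism is crude, but it captures the paper’s direction: when the top two teams are close, a fully informed leader rationally trims safety to almost nothing.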

“When Will AI Exceed Human Performance?” Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans

A survey of computer scientists around the world working on AI about when AI will achieve certain capabilities. Among its most interesting findings: Asian AI researchers were much more likely than North American ones to believe high-level machine intelligence (the point at which machines can perform any work task better than humans) is just a few decades away.

“AI Policy Desiderata in the Development of Machine Superintelligence,” Nick Bostrom, Allan Dafoe, Carrick Flynn, 2016

One of the only written attempts to answer the question, “how should our institutions address the possibility of superintelligence?” Lays out principles so broad that their relationship to any specific policy proposal is about as direct as the relationship between a rain cloud and the puddle outside my front door. Future of Humanity Institute is going to put forward some more specific ideas based on these desiderata shortly, and I’m excited to see them.

“AlphaGo and AI Progress,” Miles Brundage, February 2016

This blog post examines AlphaGo, the DeepMind-developed algorithm that defeated the world Go champion in four out of five matches. It lightly critiques the common response that a machine Go champion arrived many years earlier than expected. It also first brought to my attention the fact that training and running AlphaGo demanded extremely high levels of computing power and resources, making its success slightly less impressive than many headlines took it to be.

“Some Background on our Views Regarding Advanced Artificial Intelligence,” Holden Karnofsky (Open Philanthropy Project), May 2016

“Potential Risks from Artificial Intelligence: the Philanthropic Opportunity,” Holden Karnofsky (Open Philanthropy Project), May 2016

These two blog posts lay out the case for why philanthropists should bother investing in long-term AI safety. I found it very persuasive, and think it’s an important area of research, even if it’s not what I’m going to focus most of my time on in the next couple of months. I also thought the author’s definition of “transformative AI” was more helpful than alternatives like “high-level machine intelligence” or “artificial general intelligence,” which analogize algorithms with humans.


AI & the Economy (Automation)

“The Future of Employment: How Susceptible are Jobs to Computerization?” Carl Benedikt Frey and Michael Osborne, September 2013

The most famous study on automation. Its headline conclusion that 47% of US jobs are at high risk of automation (even though that’s not really what the study concluded) sounded the alarm for a lot of people about the possibility that robots might take all of our jobs very soon. I think it’s a great study, but it has been badly misinterpreted, as I’ve written.

 

“How Computer Automation Affects Occupations: Technology, Jobs, Skills,” James Bessen, October 2016

I read this paper only recently and I wish I had sooner. Most studies of automation assume that if machines can do something as well as a human, all humans doing that thing will be out of their jobs. This study is one of the only ones I’ve read that questions that assumption and investigates what’s happened to employment in occupations once their functions have become automatable. Read this if you are worried a lot about automation and you want to sleep better at night.

“Can Robots be Lawyers? Computers, Lawyers, and the Practice of Law,” Dana Remus and Frank Levy, November 2016

Deep dive into one of the categories of work that many people think will soon be lost to robots. The conclusion: no, lawyers will not be automated away anytime soon. According to their model, only 13% of lawyers’ hours would be eliminated if all of the newest legal tech were adopted immediately. This was a good examination not only of how technology develops but of how it can change the workforce, from the perspective of a specific occupation.
 

“Artificial Intelligence, Automation, and the Economy,” Executive Office of the President, December 2016

This paper got a ton of attention because, well, it was the White House basically saying, “AI is a real thing that will have an impact on the economy and we should prepare for that.” Its conclusions: implement the mainstream Democratic Party agenda while taking a “wait and see” attitude towards the possibility of even bigger policy shifts. That conclusion was a little disappointing but this was still a big deal coming from the White House.

 

“A Future that Works: Automation, Employment, and Productivity,” McKinsey Global Institute, January 2017

McKinsey’s analysis of the automation question goes a lot deeper than most. Instead of analyzing entire occupations and their potential for automation, it looks at the likelihood of automation of specific work tasks. This gives a more nuanced and measured view of automation than Frey and Osborne. Their headline conclusion was that about 60% of occupations have at least 30% of their constituent tasks that could be automated within the next several decades. I liked how this report provided alternate timelines for how quickly these changes might take place. I also liked its point that if we’re going to maintain economic growth this century, we need the productivity gains that automation brings. In other words, automation may be a good thing.

 

“Information Technology and the US Workforce: Where Do We Go From Here?,” Committee on Information Technology, Automation, and the US Workforce, March 2017

Assesses AI progress and lays out several hypothetical impacts on the workforce while raising further questions for research. I can’t possibly summarize every relevant point from this 160-page paper, but one tidbit that stuck out to me was its recommendation that we explore new data sources to better measure the pace of AI’s adoption and its impact on the workforce. It got me thinking about what kinds of naturally-occurring data might shine a light on whether automation is happening at all, and if so, at what pace.
 

“The Shift Commission on Work, Workers, and Technology: Report of Findings,” May 2017

Report of the Shift Commission, which was several groups of leaders in various fields getting together to talk about the future of work. The report surmises that there are four possible scenarios for what the future of work looks like: 1) less work, mostly tasks; 2) less work, mostly jobs; 3) more work, mostly tasks; 4) more work, mostly jobs. I like how this report didn’t make a single prediction, but instead analyzed different scenarios that are all very possible.

 

Other Future of Work

“Recommendations for Implementing the Strategic Initiative Industrie 4.0,” Federal Ministry of Education and Research (Germany), April 2013

Really dense German government report on the benefits of companies having their production machinery, supply chains, shipping, headquarters, and all other physical assets networked, to allow for production of small batches of goods and instantaneous decision-making based on granular data. Also discusses measures companies of all kinds need to take to successfully transition to such a system. My description is about as dense as the paper itself; please forgive me.

Portable Benefits Resource Guide, Natalie Foster, Greg Nelson, and Libby Reder (The Aspen Institute Future of Work Initiative), 2016

Explores ways for all workers in the economy to have health care and other benefits, even if they work as independent contractors. Of particular interest are strategies for providing benefits to Uber drivers and other gig economy workers.


“The Role of Unemployment in Alternative Work Arrangements,” Lawrence F. Katz and Alan B. Krueger, December 2016

Research paper describing the long-term increase of independent contractors as a share of the workforce, from 10% in the 1990s to around 16% today. Gig economy workers still comprise a relatively tiny subset here, though their numbers are growing. Remember, UberX didn’t launch until 2012. That’s a crazy thought.
 

Autonomous Vehicles

“Autonomous Vehicle Technology: A Guide for Policymakers,” James A. Anderson, Nidhi Kalra, Karlyn D. Stanley, Paul Sorensen, Constantine Samouras, Oluwatobi A. Oluwatola (RAND Corp.), 2016

Comprehensive analysis of AV technology, its implications, and various attempts to regulate it around the US. Concludes that, due to positive externalities associated with widespread use of AVs, at some point it might make sense for policymakers to align public and private costs of AV tech. But for the moment, it holds that any aggressive regulatory action will do more harm than good.

“Fast and Furious: the Misregulation of Driverless Cars,” Tracy Hresko Pearl, 2016

Takes a deeper dive than the RAND paper into existing regulations of AVs and why many of them are problematic. Most of the problematic regulations stem from irrational fears about the safety of AV technology and ignorance of the different levels of autonomy classification and the different treatment they require.

 

Online Voting

“The Future of Voting: End-to-end Verifiable Internet Voting,” US Vote Foundation, July 2015

A big divergence from a reading list overwhelmingly focused on AI and associated technologies. But I read this report about online voting because I believe that if we’re going to have a fully enfranchised electorate this century, we eventually need to allow people to vote on their smartphones. I can’t tell you how many friends I’ve talked to who said they would vote if only they could do so that way. This report lays out a framework for exactly how voting should be brought online, if it ever is. The critical requirement is that online voting systems be end-to-end verifiable: every person can independently verify that their ballot was counted accurately, without compromising ballot anonymity. The security challenges to making this happen are immense, but at some point they will have to be overcome.
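To give a rough sense of what “end-to-end verifiable” means, here’s a toy sketch of my own. It deliberately sidesteps the hard cryptography: real systems post encrypted ballots and use mix-nets or homomorphic tallying so the public board reveals nothing about individual votes, whereas this toy just posts ballots with no names attached.

```python
import hashlib
import secrets

# Public bulletin board: (receipt, ballot) pairs, with no voter identities.
board = []

def cast(ballot):
    # Salted hash commitment: the receipt is meaningless to anyone else,
    # but lets the voter later confirm their ballot is on the board.
    salt = secrets.token_hex(16)
    receipt = hashlib.sha256((salt + ballot).encode()).hexdigest()
    board.append((receipt, ballot))
    return salt, receipt  # the voter keeps these privately

def verify(salt, receipt, ballot):
    # "Recorded as cast": my receipt appears on the board and opens
    # to the ballot I actually meant to cast.
    expected = hashlib.sha256((salt + ballot).encode()).hexdigest()
    return receipt == expected and any(r == receipt for r, _ in board)

def tally():
    # "Counted as recorded": anyone can recompute the result from the board.
    counts = {}
    for _, ballot in board:
        counts[ballot] = counts.get(ballot, 0) + 1
    return counts
```

Even in this stripped-down form, the two checks that matter are visible: each voter can audit their own ballot’s presence, and anyone can audit the overall count, without any central authority being trusted on either point.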