My (first) take on the current AI conversation.
Starting with the question of who has the power to decide what the world looks like and why it shouldn't be the tech industry.
I wrote my PhD dissertation on technological change in the 20th-century United States, tracing how tractors spread across the country slowly and imperfectly. The day I had the idea for this project, I was running in the Berkeley hills, quite literally staring out over the San Francisco Bay and the metropolis that most would consider the powerhouse of technological change in the 21st century so far. As I lurched my way up the steep hills, I was listening to Ezra Klein interview Andrew Yang. Yang described his fears surrounding technological change that he claimed would massively decrease the number of jobs available to people in the coming decades. As a young economic historian looking for a dissertation topic, I naturally asked myself, “haven’t we been through technological revolutions before? And couldn’t we learn something from them?” Now, I feel an urgent need to slow the current process of technological change, especially when it comes to Artificial Intelligence. It is not that I don’t believe the technology has benefits - rather that I don’t think we have the institutions, communities, experts, and healthy political system in place to determine which technologies will be healthy and which will not.
Last year, I contributed to a report issued by OpenAI that offered a framework for thinking through the potential economic impacts of Artificial Intelligence (AI). This was before the company released ChatGPT and DALL-E, each of which has gotten tons of headlines and usage over the past couple of months. At the time, the technology we were writing about was another AI system called Codex, which can assist software developers, or really anyone, in writing code. Codex was already out in the world, being used by some coders as part of their daily work. In fact, soon after contributing to the report, I talked to multiple friends in software engineering jobs who had tried out Codex in their daily work. In the blink of an eye, this AI technology was affecting labor markets and people’s work. Most described the tool as marginally helpful and did not seem especially concerned about job displacement. While this is a totally fair and legitimate reaction, I couldn’t help but be somewhat taken aback by the nonchalance with which people had accepted this tool into their lives.
I will go into more thoughts on the OpenAI report in a future post, but for now, what matters is the power of this one company to build and release into the world a new technology with such capacity to change society. In podcasts over the last year or so, tech executives such as Sam Altman of OpenAI have openly discussed the possibility of massive social change and upheaval as an outcome of the technologies they are building and releasing to the public. In this post, I want to remain agnostic about the positive and negative trade-offs associated with these technologies. What concerns me more than anything is that for over two decades, we have lived in a society facing massive, accelerating changes due to technology, and we seemingly do not have the infrastructure, tools, regulations, or will to slow down the process of technological change to make sure that those changes are healthy.
I have long been a staunch advocate for Diversity, Equity and Inclusion in Economics. In 2019, my friend Nina Roussille and I wrote a blog post specifically about why inclusion and equity in economics are absolutely essential to building a world that is equitable and just. Now, I see this issue repeating itself in technicolor as pundits and tech moguls discuss the possible future society we may all live in with hardly any constraint or democratic input. My concerns about diversity, equity, and inclusion have taken the shape of fear, and a bit of disgust, as I listen to predominantly young white men pontificate about the possibilities of a fully automated future world. It is not that I think they are wrong - again, I want to stay somewhat agnostic on that for the moment. The problem I see is that a very small group of people, facing very few political and material constraints, have the power to decide how the world should be and will be, and there are almost no mechanisms in place to stop this process. We’ve seen this past month that universities have been left scrambling to figure out how to restructure their courses and services now that ChatGPT is in the picture (lucky for me, ChatGPT wrote a horribly incorrect essay about the history of monetary policy in the US). We’re starting to get a glimpse of how much these technologies could shake up the world order, and we have been caught wildly under-prepared.
According to the Milken Institute’s global timeline of tech regulations, the first and only proposal for regulating AI so far is the EU’s April 2021 framework, which takes the first step of categorizing AI systems by risk level. As of April 2022, China reportedly aimed to ease regulations on big tech to help with its growth slowdown. The US is, unsurprisingly, somewhat silent on the matter.
This past May, Pew released survey data showing that among both Republicans and Democrats, desire for regulation of the tech industry has fallen. Based on my understanding of the regulatory picture, there is basically no AI-relevant policy in play. The two bills that I know of that are intended to protect workers who lose their jobs to increased automation and technological change have been stalled in Congress for a couple of years now. Here I’ll let my bias show a little bit: my biggest concern regarding this technology is not that robots will turn against us and eliminate humanity. My main personal concern is that technology of this nature will continue the process of concentrating wealth among the lucky few and leave the many of us to fend for ourselves. But even in the case of Twitter and Facebook, we’ve already seen sweeping changes to the functioning of politics, society, education, and human development that leave me scared of the unknown unknowns lurking in the dark.
It is not sufficient for OpenAI to do their own internal research, nor for them to sponsor external researchers to study the possible risks and harms that their AI could wreak on the economy. Ultimately, the techies will need to admit that they can’t build the Utopia they seek without hurting some people along the way. The more voices they include, and the more constraints and outside vigilance we put in their way, the stronger our society will be.