Pretty much everyone I know and follow in tech is mesmerized by the current AI revolution. The new AI tools, and the ones just over the horizon, are changing everything we do in this industry: how we write and debug our code, which tools and frameworks we use, what we decide to learn, what kinds of projects we work on, what kinds of career paths we pursue. There is nothing AI has not touched. There are still a few die-hard holdouts, fixated on individual trees while missing the forest, but for almost everyone else AI is now the overarching backdrop of our professional lives. And most of us, like me, are approaching this new reality with mixed emotions. Excitement and trepidation are the two base notes of our emotional state, with higher harmonics of amusement, anger, jealousy, and despair. The wide spectrum of emotions, and the even wider swings between them, oftentimes within a single hour, are fueled by the uncertainty that AI brings with its incredible promises and menacing threats. Will AI usher in an unimaginable utopia, a world free of destruction, want, and fear? Or will it completely destroy all life on Earth? Both outcomes seem equally outlandish. But no one can say for sure.
In this milieu it would be great to know, with some decent level of certainty, that we will be spared the worst-case scenarios. Life is hard for all of us, the world seems to be in a perpetual state of mess, and it would be nice if we did not have to worry about one additional calamity. I, for one, would like to know what worst-case scenario realistically awaits us if we continue at the current pace of AI development, if for no other reason than to better prepare myself mentally for the inevitable. And sure enough, there are plenty of voices out there that will tell you how bad things are going to get. But extracting the real signal from the noisy cacophony of those voices is nontrivial. Many of them come from the chorus of professional anti-technology naysayers, whose default knee-jerk reaction to every new technological innovation is disdain and dismissal. And over the past few weeks it has also become painfully obvious that some of the most prominent (at least by the standards of mainstream media coverage) AI doomers are deeply technologically illiterate. Now, let me be clear: I am not saying that such people should not be given a voice. AI will affect all of us, and we should all have at least some say in how it is developed and used. But I believe it is a very, very bad idea to base our major AI policy decisions primarily on the concerns of people who have never trained a single machine learning model. We have enough examples from the past few decades of disastrous policy decisions driven by people whose conviction and passion far exceed their ken. It would behoove all of us not to fall into that trap again.
Nice summary. I've been thinking about this as well. Much of the AI criticism is handwavy and breaks down when you start to poke at it. And yes, we really need to implement industry-wide standards for AI risk assessment. Software testing/quality used to be a discipline in its own right, but it has unfortunately been devalued and has all but disappeared. We need to invent the equivalent for AI. At the risk of tooting my own horn: I led the team at Meta that invented AI System Cards, a way to evaluate AI risk and report on it publicly. The approach has since been adopted by OpenAI. Here are two of their examples, along with the original that my team at Meta AI shipped:
1) GPT-4 System Card: https://cdn.openai.com/papers/gpt-4-system-card.pdf
2) DALL·E 2 System Card: https://github.com/openai/dalle-2-preview/blob/main/system-card.md
3) Instagram Feed Ranking System Card (the Meta AI original): https://ai.facebook.com/tools/system-cards/instagram-feed-ranking/
AI System Cards can and should be further improved, but they are the best tool we have at the moment.
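To give a rough flavor of what a system card captures, here is a minimal, hypothetical sketch in Python. To be clear, this is purely illustrative: the real System Cards linked above are long-form documents, and none of these field names or types come from any published schema.

```python
from dataclasses import dataclass
from typing import List

# Purely illustrative sketch of a machine-readable system card.
# The real System Cards linked above are long-form documents; these
# field names are hypothetical and not part of any published schema.

@dataclass
class RiskAssessment:
    category: str           # e.g. "fairness", "misuse", "privacy"
    description: str        # what was evaluated and how
    severity: str           # e.g. "low", "medium", "high"
    mitigations: List[str]  # steps taken to reduce the risk

@dataclass
class SystemCard:
    system_name: str
    version: str
    intended_use: str
    out_of_scope_uses: List[str]
    risks: List[RiskAssessment]

    def to_report(self) -> str:
        """Render a plain-text public report, one risk per section."""
        lines = [
            f"System Card: {self.system_name} (v{self.version})",
            f"Intended use: {self.intended_use}",
            "Out of scope: " + "; ".join(self.out_of_scope_uses),
        ]
        for risk in self.risks:
            lines.append(f"[{risk.severity.upper()}] {risk.category}: {risk.description}")
            lines.extend(f"  - mitigation: {m}" for m in risk.mitigations)
        return "\n".join(lines)

# Example usage with made-up values:
card = SystemCard(
    system_name="example-ranking-model",
    version="0.1",
    intended_use="Ranking items in a content feed",
    out_of_scope_uses=["medical advice", "automated moderation decisions"],
    risks=[
        RiskAssessment(
            category="fairness",
            description="Checked ranking quality across demographic slices",
            severity="medium",
            mitigations=["per-slice evaluation before each release"],
        )
    ],
)
print(card.to_report())
```

Whatever the exact format, the point is the discipline it imposes: the card forces the team to enumerate intended uses, known risks, and mitigations in a single artifact that can be published and scrutinized.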