Are We There Yet?
A series of tweets from OpenAI founders indicates that something big is already here
Cryptic tweets are the common currency of X/Twitter. At their worst they serve as engagement bait, but the really good ones give us a peek behind the closed doors of some very important organizations and events. Over the weekend we were treated to a few of these from the founders of OpenAI. First, on Friday, Sam Altman tweeted a fairly innocuous-sounding koan:
The gist, in my reading and in the context of AI, is that practically-driven AI startups are running circles around the well-established AI researchers in industry, and even more so in academia. That tweet could have been written, and been just as relevant, at any point over the past four to five years, if not longer. But then Sam went on to tweet this:
Now, 10X engineers are the fabled mythical creatures of the tech world: engineers who can supposedly accomplish the work of ten or more of their less capable brethren. Whether or not you believe such engineers really exist, being an order of magnitude better at something than others is still humanly plausible. Being four orders of magnitude better, however, is simply not in the realm of human capability. To make sure you got what Sam was implying, roon responded with the following reply:
So we are talking about an artificial 10,000X engineer/researcher: in other words, what many consider to be the definition of AGI. And OpenAI seems to be implying that it has achieved such a breakthrough. To make sure people don't panic, Ilya Sutskever, another OpenAI cofounder, tweeted this on Saturday:
Very reassuring.
Later that day Sam tweeted a somewhat more concrete message:
Now, an AI timeline refers to the time it takes AI systems to reach certain milestones, while takeoff is the point at which AI starts to recursively self-improve. Again, there are many possible interpretations of the tweet above, but my take is that various AI benchmarks are being passed at a much faster rate than most AI futurists had predicted, yet this is not necessarily leading to AI being used to improve AI, as takeoff is commonly understood.
There are many things to unpack here. There have been several additional purported leaks on Twitter, Reddit, and other forums about what is really going on at OpenAI, and the most common consensus is that some kind of AGI has been achieved internally. However, it is unclear what this system can really do and what OpenAI intends to do with it. Most likely, for the foreseeable future, OpenAI will continue to iterate on improved consumer- and enterprise-facing products, gradually building on those capabilities. But it's anyone's guess where we may end up 6 to 12 months from now.