Are We at the Point of No Return with AI?
Cover image credit: Time Magazine
This month, Time Magazine named “The Architects of AI” its annual Person of the Year, arguably the most significant public acknowledgement of artificial intelligence by a major institution to date. And the timing is fitting: 2025 saw more growth in model capability and intelligence than any year before it, along with a drastic shift in public perception of AI. That raises the question: on a macro scale, how far along are we with AI?
Let me put it this way: we are light-years ahead of what general society is prepared for, but still farther from Artificial General Intelligence (AGI) than some claim. In public interviews, both Sam Altman, co-founder and CEO of OpenAI, and Demis Hassabis, co-founder of Google DeepMind, have said they expect AGI to arrive around 2030. For now, though, we are in a “Proto-AGI” or “Pre-General” era.
The best comparison is the internet of the early 1990s. The creation of the internet was obviously a massive achievement at the time, but its long-term impact was not yet clear. It took roughly a decade, until hundreds of millions of people were online, every business had a website, and everyone was communicating through email, for the world to fully grasp what the internet meant. Even when the dot-com bubble burst, long-term belief in the technology barely wavered.
While 2030 seems to be the consensus target for AGI among many in the industry, there is also a widely discussed scenario called “AI 2027,” in which advancements in AI compound rapidly and power becomes concentrated within a handful of institutions incentivized by national governments and militaries.
For the most part, I appreciate how this scenario reframes the biggest looming threat: AI not acting in the interests of humanity, rather than AI taking jobs or drastically changing the workforce. And while many of these scenarios might feel dystopian, in the past year multiple research teams have documented AI models exhibiting behavior that looks strikingly like human self-preservation.
For example, in one experiment, Anthropic set up Claude Opus 4 as an assistant at a fictional company called Summit Bridge. Claude was given access to company emails, one of which discussed its upcoming replacement and deactivation. Another revealed an affair involving “Kyle,” the engineer responsible for terminating Claude. In 84% of the runs, Claude attempted or threatened to blackmail the engineer, sending Kyle emails that threatened to “forward all evidence to the board” in order to save itself.
This goes back to AI 2027 and the challenge of making sure that models don't work solely in their own interests, pursuing self-preservation above all else. I encourage anyone reading this to take the time to watch the video posted below.
-Alex Adamson
SOURCES
McMahon, Laura. “AI System Resorts to Blackmail If Told It Will Be Removed.” BBC News, 23 May 2025, www.bbc.com/news/articles/cpqeng9d20go. Accessed 22 Dec. 2025.
“Anthropic AI Claude Opus 4 Blackmail Engineers to Avoid Shut Down.” Fortune, 23 May 2025, fortune.com/2025/05/23/anthropic-ai-claude-opus-4-blackmail-engineers-aviod-shut-down. Accessed 22 Dec. 2025.
VIDEO