Notes on AI 2027
Over the past few years, interest in AI has grown dramatically. As someone who works in the field, I have watched that growth come with an explosion of headlines that are often sensationalist, speculative, and emotionally charged. Many of them are written by people who either do not understand the subject very well or are simply riding the wave of attention that AI attracts (usually both). Meanwhile, these narratives are consumed by an audience that suffers not from a lack of information but from an excess of it, which makes it hard to see clearly.
Against that backdrop, I developed a strong skepticism. Whenever I came across content wrapped in sci-fi imagery or dramatic claims about the future of AI, I would instinctively roll my eyes. It all felt exaggerated, shallow, and disconnected from how real systems are built and deployed.
AI 2027 challenged that skepticism.
It offered a perspective that was calm, structured, and surprisingly grounded. It also made me realize that I may have closed myself off too much as a reaction to the flood of poorly argued content that circulates online. What I found instead was something intellectually rich and clarifying. For the first time, I saw why the idea of simply “pulling the plug on AGI” may not make sense. The transformation may not happen overnight, as a sudden switch from one day to the next. Our lives may become so tightly interwoven with these technologies that the option to simply turn them off no longer exists. Not because of malice or conspiracy, but because of dependency, scale, and integration.
It also touches on how difficult it is to define goals for AI agents. If you have never set a goal for an RL agent and felt completely fooled by the result, you probably have not really done it. My first experience with deep RL was with Atari: I trained an agent to play Freeway, the classic chicken-crossing-the-road game. It was fun, but by the end of most training runs, after witnessing all sorts of unimaginable score-hacking behaviors, I remember looking at the agent and thinking, “how dare you, son of a chicken?!”
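That kind of score hacking is easy to reproduce outside Atari. Below is a minimal toy sketch (entirely my own construction, not the actual Freeway setup): the intended goal is to cross a five-lane road, but the reward we actually specify is a proxy, +1 for any upward move. A policy that bounces between the bottom two lanes farms that proxy forever and out-earns the honest policy that simply crosses once.

```python
# Toy reward-misspecification demo (hypothetical, not the Atari environment).
# Intended goal: reach lane 4 (cross the road).
# Proxy reward we actually wrote: +1 for every upward move.

def run(policy, steps=20):
    """Roll out a policy on the 5-lane road; return (total reward, final lane)."""
    pos, total = 0, 0
    for t in range(steps):
        move = policy(pos, t)                # -1 (down), 0 (stay), +1 (up)
        new_pos = max(0, min(4, pos + move)) # clamp to lanes 0..4
        if new_pos > pos:                    # proxy reward fires on any upward move
            total += 1
        pos = new_pos
    return total, pos

def honest(pos, t):
    return 1                                 # always head for the far side

def hacker(pos, t):
    return 1 if pos == 0 else -1             # bounce between lanes 0 and 1 forever

honest_reward, honest_pos = run(honest)      # crosses once: 4 reward, ends at lane 4
hacker_reward, hacker_pos = run(hacker)      # oscillates: 10 reward, never crosses
```

The hacker collects more reward while never accomplishing the actual goal, which is exactly the gap between what you wanted and what you rewarded.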
In the end, what resonates with me most about AI 2027 is not the specific date, nor how accurate the prediction ultimately turns out to be. What truly impressed me was seeing a fully plausible path laid out with clarity. A future that does not rely on fantasy or fear, but on mechanisms that already exist and trajectories that are entirely believable.
That plausibility, more than any prediction, is what stayed with me.
If you haven’t seen it yet, here’s the complete version, along with the excellent video by the AI in Context YouTube channel.