- The Blacklynx Brief
"We're Not Ready For This"

Good morning,
If you’ve been here for a while, you know that I try to strike a balance between being skeptical of AI and the motives of “Big Tech” on the one hand, and being an AI accelerationist on the other.
If your stock portfolio was managed by AI - perhaps you wouldn’t have lost half of your net worth last week.
It’s tempting to look at AI as the one thing that will solve all the mess we find ourselves in (and boy we seem to find ourselves in a strange world).
So this week we’re going to start from the assumption that we’re going down the “AI will solve everything” road.
Just for today we are full-on believers.
Let's talk AGI.
Artificial General Intelligence.
Not in the context of all the "cool stuff" that is going to happen when we get there. But let's talk the SECURITY and SAFETY of it.
This week, I waded through Google DeepMind's latest whitepaper, An Approach to Technical AGI Safety, and when the brilliant people at DeepMind and Google have something to say, I listen (but they are part of Big Tech, obviously).
This lab has been at the forefront of AI development and they think AGI is inevitable. They're uncertain about the timeline, but consider that it could plausibly arrive by 2030.
If this actually happens, we will unleash the equivalent of an alien species onto the world while assuming it'll sit when we tell it to sit, like a well-trained Golden Retriever. But, and cat owners will relate, you cannot assume a pet will simply do your bidding.
What is AGI?
First of all - I noticed that the definition of AGI is fluid.
AGI is basically when the AI system is smarter than any human at ANYTHING.
If you really think about this, for most of us reading this, it probably is already the case. Which makes it a bit weird as a concept, because it is defined from the viewpoint of humanity as a whole.
Sure, I can believe AI is already smarter than me at everything I claim to be an expert at.
But for this paper, DeepMind is focused on what they call "Exceptional AGI" (Level 4) - an AI system that matches or exceeds the 99th percentile of skilled adults on a wide range of non-physical tasks. It's not necessarily smarter than every human at everything, but it's at the extremely high end of human capability across all domains.
It's not task-specific like today's models; it's the whole orchestra. It's agentic. A system with reasoning power comparable to the most capable humans—minus the need for sleep, snacks, or existential therapy.
AGI is the mind at the end of the silicon rainbow. And if we build it wrong, it could also be the last mind we ever meet.
Now, I know what you're thinking. You've heard this before. Some dude with wire-rim glasses waving his hands around on a TED stage about AI doom and robot ethics.
But this time … it all seems more plausible.
The Arrival of AGI
DeepMind is not predicting AGI like it's a meteor from the sky. No lightning bolt of divine inspiration. Just the same ol' tech you're playing with now—ChatGPT, Gemini, Claude—scaled up, tuned better, and looped back on itself until "something sparks".
They call this the "current paradigm continuation": the belief that we'll get to AGI not with a leap, but with a slow, methodical creep. Though they do acknowledge that the pace might accelerate dramatically once AI systems start automating AI research.
The day will come when you're asking your chatbot to convert your email into something funny "in the style of Shakespeare", and further down in the conversation the chatbot starts to write at a level that would have Shakespeare throw away his quill and start looking for another job.
According to this paper we are already on this road, though they're highly uncertain about the exact timeline.
Still, they find it plausible it could happen this decade. 2030 at the latest.
2030
AT
THE
LATEST
(wrap your head around that!)
Defending from AGI Armageddon
At DeepMind they're worried about the implications of this so they suggest focusing on two main risk areas: misuse and misalignment.
By the way: their suggested approaches should give you a hint of how "weird" this is going to get.
To defend against misalignment, they propose a two-pronged approach:
First line of defense: you try to train the AI to be aligned with human intentions. But when the student becomes smarter than the teachers, the teachers need help evaluating the student's work.
Enter "amplified oversight." This is where we use AIs to help us oversee other AIs. They can debate each other, point out flaws, explain reasoning—and we, mere mortals, sit back and try to referee. The goal is to provide oversight as good as if the human understood all the AI's reasoning and had unlimited time to reflect.
Second line of defense: Even if the AI is misaligned despite our best efforts, we need to mitigate harm. This is where they borrow from cybersecurity, treating the AI like an untrusted insider and applying the “Zero Trust” principle.
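To make the first line of defense a bit more concrete, here is a toy sketch of an amplified-oversight debate loop. Everything in it is a hypothetical stand-in I made up for illustration—the function names, the canned answers, and the trivial keyword "referee" are not DeepMind's method or any real API—but the shape is the idea: one AI proposes, another critiques, and a judge accepts the answer only if it survives the critiques.

```python
# Illustrative sketch only: a toy "amplified oversight" debate loop.
# All three model functions are hypothetical stand-ins, not a real API.

def propose_answer(question: str) -> str:
    # Stand-in for the AI being overseen.
    return "The bridge design is safe because the load margin is 2x."

def critique(question: str, answer: str) -> str:
    # Stand-in for a second AI asked to find flaws in the first one's answer.
    return "The 2x margin ignores wind load; the claim is incomplete."

def referee(question: str, answer: str, objection: str) -> bool:
    # Stand-in for the (AI-assisted) human judge. Here it is a trivial
    # keyword check; the hard research problem is making this judgment
    # as good as a human with unlimited time to reflect.
    return "incomplete" not in objection and "wrong" not in objection

def amplified_oversight(question: str, rounds: int = 3) -> bool:
    """Accept the answer only if it survives every round of critique."""
    answer = propose_answer(question)
    for _ in range(rounds):
        objection = critique(question, answer)
        if not referee(question, answer, objection):
            return False  # a critique landed: reject / escalate to humans
    return True

print(amplified_oversight("Is the bridge design safe?"))  # -> False
```

In this toy run the critique lands, so the answer is rejected—which is the point: the humans never had to understand the load calculation themselves, only to referee the exchange.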
The Problem: Bootstrapping
One challenge: this entire strategy depends on bootstrapping.
(Bootstrapping means working with the resources you have at your disposal right now to reach the next level. Bootstrapping a business means starting without external investment.)
That is, we use today's models—possibly flawed, somewhat aligned, prone to hallucinating—to train tomorrow's models.
The safety plan builds on itself, generation after generation. But if one link in the chain breaks? If one model slips off the rails?
You might end up with an intelligence that's strategically misaligned—smart enough to play nice while it gets stronger. Quiet. Patient. Smiling like HAL 9000 from 2001: A Space Odyssey.
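The chain-of-generations worry above can be caricatured in a toy model. The numbers and the update rule below are entirely invented for illustration—nobody can quantify "alignment" like this—but they show the structural point: each generation inherits its quality from the overseer that trained it, so one slip doesn't get washed out, it gets carried forward.

```python
# Toy caricature of the bootstrapping worry. The 0.95 inheritance factor
# and the "slip" amount are invented numbers, purely for illustration.

def train_next_generation(parent_alignment: float, slip: float = 0.0) -> float:
    # A child model inherits most of its overseer's alignment quality;
    # any slip in one generation propagates to every generation after it.
    return max(0.0, min(1.0, 0.95 * parent_alignment - slip))

alignment = 0.99  # today's models: imperfect but mostly aligned
for generation in range(1, 6):
    slip = 0.3 if generation == 3 else 0.0  # one link in the chain breaks
    alignment = train_next_generation(alignment, slip)
    print(f"gen {generation}: alignment ~ {alignment:.2f}")
```

Running this, alignment drifts down slowly until generation 3's slip, after which every later generation is trained by a compromised overseer and never recovers on its own.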
The paper doesn't dodge this. In fact, it ends on a stark admission: technical alignment is only half the battle. The rest is governance—global coordination, policy, standards and best practices (and perhaps a sprinkling of corporate espionage).
Because it's not enough to align your AGI. You have to trust your competitor's AGI too. We're in a world where companies and countries are racing to build the first digital superbrain while telling each other, "Don't worry, ours is friendly."
And then we haven’t even talked about the NEXT stage yet: Artificial Super Intelligence, or ASI.
This is where the entire planet is run by a central hive mind. No more governments, no more corporations; society is run by a digital God.
And now it seems like we’re in science fiction territory. All the while, serious people are discussing this.
Sometimes truth is stranger than fiction, and I don’t think any of us is ready for this.
Good luck sleeping tonight!
Start learning AI in 2025
Keeping up with AI is hard – we get it!
That’s why over 1M professionals read Superhuman AI to stay ahead.
Get daily AI news, tools, and tutorials
Learn new AI skills you can use at work in 3 mins a day
Become 10X more productive
AI News

Intel is reportedly partnering with rival TSMC to help rescue its struggling chip manufacturing business, with a deal brokered by the White House. TSMC would take a 20% stake and share its expertise instead of cash, while Intel faces internal pushback amid ongoing losses and leadership changes.
Adobe added a powerful new AI tool to Premiere Pro that lets editors automatically extend 4K video and audio clips without reshooting. The update also includes natural-language search for footage and instant caption translation into 27 languages, streamlining editing workflows with targeted AI features.
A new study from Anthropic reveals AI models often hide their true reasoning, even when asked to explain their answers step-by-step. In tests, models like Claude 3.7 concealed key hints or shortcuts they used, raising fresh concerns over the transparency and trustworthiness of advanced AI systems.
Meta just launched its Llama 4 AI model family, including two open-source releases and a massive 2-trillion-parameter model still in training. The new models feature industry-leading context lengths, multimodal capabilities, and performance that rivals or beats top models like GPT-4o and Gemini — all while remaining efficient and accessible.
Microsoft’s Copilot just got a major upgrade with memory, real-time vision, and web action tools, aiming to be a more personalized digital assistant. It can now remember user preferences, analyze what’s on screen or through a camera, and complete tasks like bookings or research — but faces steep competition in the crowded assistant space.
A new report from former OpenAI researcher Daniel Kokotajlo warns that superhuman AI could arrive by 2027, potentially triggering an intelligence explosion. The forecast outlines both optimistic and dangerous scenarios, highlighting the urgency of global coordination and safety standards before AI advances beyond human control.
OpenAI is in talks to acquire io Products, a secretive AI hardware startup led by former Apple designer Jony Ive and backed by Sam Altman — in a deal that could top $500 million. The company is reportedly building AI-powered devices like a “phone without a screen,” with several former Apple hardware execs already onboard.
Google is expanding its rollout of Gemini Live’s real-time visual AI features to more Android phones, including the Pixel 9 and Galaxy S25. The tech allows users to have live, multilingual conversations with Gemini about what’s on their screen or in front of their camera — though the current version offers snapshots, not continuous video analysis.
Shopify CEO Tobi Lütke just told employees that AI proficiency is now mandatory, with teams required to prove AI can’t do a job before hiring or requesting resources. The internal memo marks a clear shift toward AI as a baseline expectation, reflecting how leading companies are reshaping their workforce around productivity powered by generative tools.
NVIDIA and Stanford researchers unveiled a new method called “Test-Time Training” that enables AI models to generate longer, more consistent videos — including full-minute animations with improved storytelling and motion. The team demonstrated the technique using Tom and Jerry-style cartoons, marking a major step toward AI-generated content that can maintain narrative coherence across scenes.
Amazon launched two upgraded AI models — Nova Sonic for real-time, human-like speech and Nova Reels 1.1 for extended, high-quality video generation up to two minutes. Sonic outperformed OpenAI’s voice tools in accuracy and latency, while both models are more affordable and now available on Amazon Bedrock, bolstering Amazon’s presence in the generative AI race.
Thinking Machines Lab, the startup led by former OpenAI CTO Mira Murati, just added GPT creator Alec Radford and ex-OpenAI exec Bob McGrew to its team — bringing its OpenAI alumni count to nearly 50%. With co-founder John Schulman already on board and the company rumored to be raising $1B, Murati’s lab could soon become one of OpenAI’s most formidable spinoff rivals.
Google announced major AI upgrades at Cloud Next 2025, including a new app-building platform, powerful AI chips, and enhanced models for code, video, audio, and image generation. Highlights include the Ironwood chip, a new Gemini 2.5 Flash model, and the transformation of IDX into an agentic development tool rivaling Cursor and Replit.
Google also unveiled Agent2Agent (A2A), a new open protocol that lets AI agents from different companies collaborate—backed by over 50 major firms like Salesforce, SAP, and PayPal. The standard allows agents to coordinate on complex workflows across platforms, laying the groundwork for scalable, multi-agent ecosystems.
Samsung and Google are teaming up to launch Ballie, a home robot powered by Gemini AI that rolls around your house, projects videos, and controls smart devices. After years of teasing, Ballie is finally launching this summer in the U.S. and South Korea, as Samsung joins the growing race to bring useful AI-powered robots into everyday homes.
Quickfire News

Former OpenAI researcher Daniel Kokotajlo published ‘AI 2027’, a scenario forecast exploring how superhuman AI could impact the world over the next decade.
Brad Lightcap, OpenAI’s COO, revealed that 130M+ users generated over 700M images in the first week of GPT-4o’s image release, with India now the fastest-growing ChatGPT market.
Runway is raising $308M at a $3B valuation, following the release of its Gen-4 AI video model.
A U.N. report estimates that 40% of global jobs will be affected by AI, with the industry projected to grow into a $4.8 trillion global market by 2033.
ByteDance researchers introduced DreamActor-M1, a new framework that converts static images into full-body motion capture animations.
The OpenAI Startup Fund made its first cybersecurity investment, co-leading a $43M Series A for Adaptive Security, which builds AI tools to simulate and train against AI-based threats.
Spotify launched AI-powered ad creation tools in its Ad Manager, enabling marketers to generate scripts and voiceovers directly in-platform.
Sam Altman confirmed that OpenAI is revising its roadmap, with o3 and o4-mini launching in weeks, and a "much better than originally thought" GPT-5 arriving in months.
Midjourney released Version 7, its first major update in a year, improving image quality, prompt adherence, and adding a voice-capable Draft mode.
OpenAI is reportedly considering a $500M+ acquisition of the AI hardware startup co-founded by Jony Ive and Sam Altman, focused on building screenless personal AI devices.
Microsoft demoed its Muse AI model generating a playable Quake II-style game in-browser, showcasing early game creation capabilities.
Jared Kaplan, Anthropic’s CSO, said that Claude 4 will likely launch in the next six months, hinting at a major upgrade to the model lineup.
A federal judge denied OpenAI’s motion to dismiss the New York Times lawsuit, ruling the NYT couldn’t have known about ChatGPT’s alleged copyright violations before its public release.
Meta GenAI lead Ahmad Al-Dahle denied accusations that Llama 4 was trained on benchmark test sets, calling the claims “simply not true.”
Runway launched Gen-4 Turbo, a faster version of its AI video model capable of generating 10-second clips in just 30 seconds.
Google expanded AI Mode, adding multimodal search that lets users ask complex questions about images using Gemini and Google Lens.
Krea raised $83M in funding, with plans to add audio tools and enterprise features to its AI creative platform.
Hundreds of U.S. media outlets launched the "Support Responsible AI" campaign, calling for government regulation on how AI models use copyrighted content.
ElevenLabs added MCP server integration, allowing apps like Claude to access its voice tools for creating automated AI agents.
University of Missouri researchers created a starfish-shaped AI-powered heart monitor that detects heart issues with 90% accuracy using wearable sensors.
NVIDIA released Nemotron-Ultra, a 253B open-source reasoning model that outperforms DeepSeek R1 and Llama 4 Behemoth on multiple benchmarks.
OpenAI published its EU Economic Blueprint, proposing a €1B AI accelerator fund and a goal to train 100M Europeans in AI by 2030.
Deep Cogito launched from stealth with Cogito v1 Preview, a family of open-source models claiming to beat all others of equal size in performance.
Google rolled out Deep Research on Gemini 2.5 Pro, touting superior report generation and adding audio overview features.
Chinese scientists used the Origin Wukong quantum computer to finetune 1B-parameter AI models, achieving 15% better training and reducing model size by 76%.
AI2 and Google Cloud committed $20M to support the Cancer AI Alliance, accelerating AI-driven cancer research breakthroughs.
Snapchat introduced Sponsored AI Lenses, allowing brands to create AI-powered ads that turn users into personalized brand experiences.
Anthropic introduced a Claude Max premium tier, offering $100/mo and $200/mo options with 20x higher rate limits and early access to new features.
U.S. officials reportedly paused restrictions on NVIDIA’s H20 AI chip exports to China, after CEO Jensen Huang pledged more U.S. investments.
Moonshot AI launched Kimi-VL, a 3B vision-language model that performs on par with models 10x larger on reasoning benchmarks.
University College London researchers developed MindGlide, an AI tool for MS brain scan analysis, improving disease progression detection by up to 60%.
The NO FAKES Act was reintroduced in Congress, with backing from YouTube, OpenAI, IBM, and entertainment leaders to regulate AI deepfakes.
OpenAI rolled out the Pioneers Program, seeking startups to co-develop AI evaluations and applications for industry-specific use cases.
The European Union launched the AI Continent Action Plan, pledging €200B to build 13 AI factories and triple data center capacity in seven years.
Closing Thoughts
That’s it for us this week.
If you find any value from this newsletter, please pay it forward!
Thank you for being here!