Three Sleepless Nights
Good morning,
Something has been bothering me lately.
If you're here, I assume you have more than a fleeting interest in AI.
I'm having a lot of conversations about it, and a strange attitude towards AI keeps emerging.
When Ethan Mollick wrote in “Co-Intelligence” that AI would give you three sleepless nights—one for excitement, one for fear, and one for trying to figure out what to do—he elegantly captured what I felt when I first saw GPT somewhere in 2021.
I went through those phases. I work in IT, and even there, only a handful of people have told me they are in inner turmoil about this technology.
Most people? They seem to sleep like babies.
Maybe you’ve seen it, too.
Some people are wide awake, buzzing with ideas about how AI will change everything. They're devouring books, experimenting with tools, and turning their conversations into brainstorming sessions. You mention ChatGPT, Midjourney, or Claude, and their eyes light up. They get it—or at least, they’re trying to.
Then there’s the other group.
“Oh, AI? Isn’t that just a fad?” they’ll say with a shrug. “I’ll worry about it when I have to.”
Or, more perniciously: "I tried it. It can't do what I'm doing. It's not there yet."
It’s not that they’re uninterested—they’re actively dismissive.
When you ask them how they’re using these tools, you notice a kind of laziness in their prompting. They don't get the right answer right away, so they don't even try to rephrase the question. It's almost as if they're looking for a reason to dismiss the technology.
And maybe that’s what’s most fascinating. We’re watching one of the most significant transformations in human history, but some people are reacting to it like it’s just another iPhone release.
Why the split? Why are some of us losing sleep while others hit snooze?
I have a theory.
AI feels invisible … until it doesn’t. If you’re a coder, a writer, or someone whose work intersects with AI directly, you see the ground shifting under your feet. Every new update feels like a crack of thunder.
But for those whose jobs don’t yet feel touched by it—who aren’t bombarded daily with “AI can do this now!” headlines—AI is just… noise. Something for other people to worry about.
Here’s the problem: history doesn’t give free passes to latecomers.
If AI isn’t giving you sleepless nights yet, it’s probably because you haven’t seen the parts of your world it’s already starting to reshape.
So what do we do with this?
If you’re in the sleepless camp, ask yourself: Are you using that excitement and fear to get ahead, or are you just pacing the room? Because knowing AI matters and actually doing something about it are two very different things.
But if you’re someone still sleeping soundly—like the person I talked into using AI "all the time", who then tried it for an afternoon and shrugged it off as “not there yet”—I need to tell you again: that stance is dangerous.
AI doesn’t have to be perfect to change everything. It doesn’t need to replace you entirely to impact your world.
It only needs to be good enough for someone else—your competitor, your client, or even your colleagues—to use it to become smarter and faster.
Dismiss it today, and you might find tomorrow’s wake-up call isn’t an alarm—it’s a foreclosure.
Because here’s the hard truth: AI isn’t waiting for you to catch up. And by the time you decide it’s “there,” it may already be too late.
Think I’m overreacting?
Maybe.
Time will tell.
Welcome to the Blacklynx Brief!
AI News
OpenAI launched a major upgrade to ChatGPT’s Advanced Voice Mode, allowing users to share live video or screens for real-time visual analysis and discussions. This new feature, accessible via a video icon in the mobile app, is available for Plus, Pro, and Team subscribers, with Enterprise and Edu access coming in January. OpenAI also introduced a limited-time Santa voice option for seasonal chats.
Anthropic quietly released its fastest AI model, Claude 3.5 Haiku, to all web and mobile users after API-only availability. Known for speed and accuracy, Haiku excels at coding and data tasks, features a 200K context window, and integrates with Artifacts for real-time content creation. Despite its capabilities, Haiku's launch feels overshadowed by bigger AI updates from Google and OpenAI this week.
Anthropic introduced Clio, a system analyzing AI usage trends by securely summarizing and clustering millions of conversations while protecting user privacy. Findings revealed coding and business tasks dominate, alongside unexpected uses like dream interpretation and gaming assistance. This insight highlights how AI assistants are increasingly customized to real-world needs across different regions and languages.
Microsoft has unveiled Phi-4, a 14-billion parameter language model that excels in complex reasoning tasks, particularly in mathematics. Despite its smaller size, Phi-4 outperforms larger models like OpenAI's GPT-4o and Google's Gemini Pro 1.5 on various math and reasoning benchmarks. This efficiency is attributed to training primarily on high-quality synthetic data, totaling approximately 400 billion tokens. Phi-4 is currently available in a limited research preview on Azure AI Foundry, with plans for a broader release on platforms like Hugging Face.
OpenAI has introduced 'Projects' for ChatGPT, a new organizational tool that allows users to group related conversations, files, and custom instructions into individual workspaces. This feature aims to streamline workflows by providing project-specific folders, enhancing efficiency in managing AI interactions. Initially rolling out to Plus, Pro, and Team subscribers, 'Projects' will become available to Enterprise and Education users early next year.
Pika Labs has launched version 2.0 of its AI video generator, introducing a new 'Ingredients' tool that enables users to incorporate their own images into AI-generated videos. The update also brings improved motion, prompting, and animation features, offering users greater control over video outputs. Pika Labs has rapidly gained popularity, attracting over 11 million users and securing $80 million in funding. The latest version follows its viral 'effects' launch in October, marking a significant advancement in AI-driven video creation tools.
Google released Veo 2, a video model generating high-resolution 8-second clips with realistic physics and cinematic quality, and Imagen 3, an upgraded image model excelling in color, detail, and text rendering. Veo 2 outperformed OpenAI’s Sora in realism and adherence to prompts, while Imagen 3 surpassed rivals like Midjourney in visual quality. Veo 2 is gradually rolling out via VideoFX, and Imagen 3 is now available globally through Google Labs' ImageFX.
OpenAI made its ChatGPT Search feature free for all logged-in users, adding voice search capabilities and improving mobile integration with tools like Google Maps. The feature offers faster, up-to-date responses and can now be set as a default search engine. This update brings ChatGPT closer to becoming a powerful AI assistant, particularly as Advanced Voice Mode continues to evolve.
AI startup Higgsfield launched ReelMagic, a multi-agent platform that transforms story ideas into 10-minute videos using AI agents for tasks like scriptwriting, editing, and sound production. The tool automates workflows, partnering with major AI platforms like ElevenLabs and Kling, and is already being tested by Hollywood studios. ReelMagic could bridge the gap in AI-driven storytelling, enabling cohesive long-form video creation with minimal manual effort.
OpenAI introduced API access to its o1 reasoning model, offering enhanced features like function calling and structured outputs, alongside lower costs for Realtime API audio and a mini version for voice app development. Developers can now use a Preference Fine-Tuning method for customizing model behaviors, and beta SDKs for Go and Java expand programming options. These updates provide powerful new tools for creating more sophisticated, tailored AI applications.
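If you're wondering what "structured outputs" looks like in practice, here is a minimal sketch using OpenAI's Python SDK. The model name, prompt, and schema are illustrative assumptions on my part, not code taken from OpenAI's announcement.

```python
# Minimal sketch: calling the o1 reasoning model with structured outputs.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
# Model name, prompt, and schema are illustrative, not from the announcement.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",  # the reasoning model now exposed via the API
    messages=[
        {"role": "user", "content": "List three ways a small team could use a reasoning model this quarter."}
    ],
    # Structured outputs: the reply is constrained to JSON matching this schema.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "ideas",
            "schema": {
                "type": "object",
                "properties": {
                    "ideas": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["ideas"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)

print(response.choices[0].message.content)  # a JSON string matching the schema
```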
Nvidia released the Jetson Orin Nano Super Developer Kit, a $249 compact AI supercomputer boasting 1.7x the performance of its predecessor, with added memory and processing power. The device supports multiple AI frameworks and simultaneous tasks like robotics and vision processing, making it ideal for DIY AI projects. This affordable, high-performance platform lowers the barrier for developers to experiment with generative AI and robotics.
Google DeepMind introduced FACTS Grounding, a benchmark testing how accurately LLMs generate responses based on provided documents while avoiding hallucinations. Using a dataset of 1,719 examples and a multi-LLM judging system, the benchmark evaluates factual accuracy and groundedness. Gemini 2.0 Flash Experimental currently leads the public leaderboard.
OpenAI introduced a 1-800-CHATGPT hotline, offering US users 15 minutes of free monthly voice interactions with ChatGPT on any phone, even rotary models. Internationally, a new WhatsApp integration lets users text ChatGPT, running a lighter model with daily caps and potential future upgrades.
GitHub’s Copilot now offers a free tier with 2,000 monthly code completions and 50 chat messages, supporting AI-assisted coding through VS Code. Free users can access advanced features like multi-file editing and project context, with premium models reserved for paid plans. This move aligns with GitHub’s goal to onboard more developers as it celebrates reaching 150M users, aiming to make AI coding a standard tool for all.
AI search startup Perplexity raised $500M, tripling its valuation to $9B, as it challenges traditional search giants like Google. The platform, now boasting 15M users, offers features like one-click shopping and integrates tools like Notion via its acquisition of Carbon. Despite legal battles, its rapid growth and innovative features reflect the AI-driven transformation of online search.
Stay up-to-date with AI
The Rundown is the most trusted AI newsletter in the world, with 800,000+ readers and exclusive interviews with AI leaders like Mark Zuckerberg.
Their expert research team spends all day learning what’s new in AI and talking with industry experts, then distills the most important developments into one free email every morning.
Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.
Quickfire News
Google announced Android XR, a Gemini-powered operating system for mixed reality devices, with Samsung planning to launch the first compatible headset, codenamed “Project Moohan,” in 2025.
ChatGPT head of product Nick Turley suggested in an interview with The Verge that chat-based AI interactions may soon feel as “outdated as ‘90s instant messaging.”
Amazon Prime Video introduced the "AI Topics" beta feature, using machine learning to categorize and recommend content based on viewers’ interests and viewing habits.
Character.AI implemented a safety overhaul, including a separate AI model for users under 18, upcoming parental controls, and enhanced content filtering, following lawsuits alleging the platform contributed to self-harm.
Nvidia expanded its hiring in China, adding over 1,000 employees in 2024, including 200 Beijing-based researchers focused on advancing autonomous driving technology.
Stanford researchers proposed a global effort to create an AI-powered virtual human cell, aiming to transform biological understanding and drug development through advanced computational modeling.
xAI rolled out an upgraded version of Grok-2 to all X platform users, featuring tripled processing speed, improved multilingual capabilities, and integration of web search and advanced image generation tools.
Meta’s FAIR introduced new AI research projects, including Meta Motivo for embodied agent control, Meta Video Seal for video watermarking, and enhanced models for scaling memory and social intelligence.
OpenAI co-founder Ilya Sutskever stated at the NeurIPS conference that AI has reached "peak data," predicting a shift toward reasoning-based systems that will become less predictable and more autonomous.
Google unveiled NotebookLM Plus, featuring interactive audio tools and Gemini 2.0 Flash integration, allowing users to verbally interact with AI hosts during Audio Overviews while expanding enterprise functionality.
OpenAI published emails and a timeline of events with Elon Musk, claiming Musk initially pushed for the company to become a for-profit entity amid ongoing legal disputes.
DeepSeek released VL2, a new vision-language model family leveraging Mixture-of-Experts (MoE) architecture, achieving performance comparable to larger rival models with smaller sizes.
An anonymous chatbot returned to the LM Arena leaderboard, a platform previously used to test GPT-4o, fueling speculation about a potential GPT-4.5 or an upgraded OpenAI model launch.
Meta updated its Ray-Ban smart glasses with live AI assistance, real-time language translation, and Shazam integration for hands-free music recognition.
YouTube introduced new controls for creators to explicitly authorize specific AI companies, including OpenAI, Microsoft, and Meta, to train models using their videos.
Google Labs debuted Whisk, a creative AI experiment combining Imagen 3 and Gemini, enabling users to remix and transform visuals with image-to-image capabilities.
Former Google CEO Eric Schmidt warned in an ABC interview about the risks of AI's growing capabilities, suggesting that "pulling the plug" might be necessary when self-improving systems emerge.
SoftBank’s Masayoshi Son pledged a $100 billion investment in U.S. AI during a meeting with incoming president Donald Trump, aiming to generate 100,000 jobs over the next four years.
Lockheed Martin launched a new subsidiary, Astris AI, to accelerate AI adoption in both defense and commercial applications.
Midjourney introduced Moodboards, a feature enabling users to create personalized AI generation styles and profiles by uploading or adding images.
Google launched Gemini Code Assist tools, allowing developers to integrate external services and data directly within their IDE.
YouTube partnered with talent agency CAA to develop AI detection tools to help celebrities and athletes identify and manage AI-generated content featuring their likenesses on the platform.
The UAE’s Technology Innovation Institute released Falcon 3, an open-source language model family optimized for lightweight hardware, with 7B and 10B versions outperforming models like Llama and Qwen in benchmarks.
OpenAI’s Romain Huet stated during a community AMA that there are no current plans to release an API for the Sora video generation model.
Databricks raised $10 billion in a funding round at a $62 billion valuation, planning AI product expansions and potential acquisitions.
Microsoft reportedly acquired nearly 500,000 Hopper GPUs from Nvidia in 2024, becoming the chipmaker's largest customer and nearly doubling purchases made by Meta and ByteDance.
Magnific AI released Magic Real, an image generation model focused on producing realistic outputs for professionals in architecture, photography, film, and interior design.
Odyssey unveiled Explorer, a generative world model that transforms images into 3D environments, and added Pixar co-founder Ed Catmull to its Board of Directors.
Open Vision Engineering introduced Pocket, a $79 AI-powered voice recorder designed to capture, transcribe, and organize conversations.
Runway launched a talent network platform to connect AI filmmakers and production houses with brands and studios seeking expertise in AI-driven content creation.
The U.S. Department of Homeland Security launched DHSChat, an internally developed AI chatbot deployed on secure infrastructure for its 19,000 employees and agency users.
Closing Thoughts
That’s it for us this week.
If you find any value in this newsletter, PLEASE FOR THE LOVE OF GOD pay it forward!
Thank you for being here!