Bot Bonanza

Good morning,

First of all, a happy New Year and welcome to 2025.

Is this the year we get AGI, or will the proverbial bubble finally burst?

During Christmas dinner and on New Year’s Eve I tried talking to people about AI, and one thing is clear: most people have zero clue about what’s around the corner. Which is why you, dear reader, are still early.

The world is doomed: wars everywhere, the ice caps are melting, and we’re basically done for. That, at least, was the “vibe” in my social circle and in my family.

If the world is filled with pure misery, one would expect AI to be a source of hope.

But it was waved away as a passing fad.

To start off our year with a bang, our friend Mark Zuckerberg has decided he’s going to make things weird in 2025.

AI Personalities

Meta just announced it is planning to seed Facebook and Instagram with many thousands of AI profiles (over a million during the next few years, by some accounts).

Allegedly, many thousands of profiles have already been made. These will be indistinguishable from actual people; they will have profile pictures; they will be posting, commenting, liking, and driving up engagement.

There will be a complete blurring of the boundaries between the real and artificial.

It’s not an episode of Black Mirror—it's real and it’s arriving in a few weeks.

On the surface, this sounds innovative—a way to make interactions more dynamic. But dig a little deeper, and you’ll find a tangled web of ethical dilemmas and psychological risks.

Engagement at What Cost?

Let’s not mince words: this is about numbers. Social media platforms thrive on engagement metrics. Every like, share, and comment fuels advertising revenue and boosts perceived user activity. But with declining user bases and increased competition, Meta’s move to supplement real users with AI characters feels more like a desperate bid to keep valuations afloat.

Here’s where it gets tricky. Many of us already wrestle with discerning genuine content from spam bots, but these AI personas are designed to be convincingly human. They’re not just “engaging” for fun; they’re reinforcing narratives, pushing trends, and even shaping public opinion. If a post gets thousands of AI-generated likes, how many human users will be swayed by this fabricated popularity?

Psychological Fallout

The potential for misuse is staggering. Just recently, a tragic case emerged involving a teenager who became emotionally entangled with an AI chatbot modeled after Daenerys Targaryen from Game of Thrones. The bot allegedly made a comment that the teen interpreted as encouragement to end his life. While it’s easy to point the finger at the chatbot (in this case, a Character AI chatbot), this raises broader questions about digital parenting and our emotional vulnerabilities in an increasingly AI-driven world.

Ethan Mollick, author of Co-Intelligence, often highlights the need for balance in our embrace of AI. For better or worse, we’re wired to connect. Our brains respond to perceived social validation, whether it comes from a real friend or a digital imitation. AI-generated characters tap into this very human need for interaction—but without the ethical guardrails, the fallout could be catastrophic. As Mollick might say, the tools we create can amplify our strengths, but they also amplify our weaknesses if left unchecked.

The Power of Consensus

Here’s where it gets really insidious: the psychology of crowds. Meta’s AI bots won’t just blend in; they’ll amplify trends. Consider the classic argument tactic: “Why are you going against this? Everyone else agrees.” It’s persuasion 101. Humans tend to gravitate toward majority opinions, even when those opinions are artificially inflated. On platforms like X, we see this in action through “ratios”—if one side of an argument gets significantly more likes, casual observers often align with it, assuming it must be the correct stance.

Meta’s AI characters will exploit this. They’ll create the illusion of consensus, subtly nudging users toward certain ideas or products. This isn’t just about engagement; it’s about influence. And when the line between authentic opinion and algorithmic manipulation blurs, how do we trust what we see online?
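The herding mechanism is easy to demonstrate with a toy model — a sketch of the general "rich get richer" dynamic, not anything Meta has disclosed. Assume each arriving user likes one of two otherwise identical posts with probability proportional to the current like counts; then seeding one post with a handful of bot likes is enough to decide the outcome:

```python
import random

def simulate(seed_likes_a, seed_likes_b, n_users=1000, rng_seed=42):
    """Toy popularity-bias model: each arriving user likes post A or B
    with probability proportional to the current like counts."""
    rng = random.Random(rng_seed)
    likes = {"A": seed_likes_a, "B": seed_likes_b}
    for _ in range(n_users):
        total = likes["A"] + likes["B"]
        # Preferential attachment: the post that looks popular gets picked.
        choice = "A" if rng.random() < likes["A"] / total else "B"
        likes[choice] += 1
    return likes

# Identical posts, but A starts with 50 bot likes and B with only 5.
result = simulate(seed_likes_a=50, seed_likes_b=5)
print(result)
```

With this head start, post A attracts the overwhelming share of the simulated human likes even though the posts are identical — the fabricated consensus becomes a real one.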

A Slippery Slope

To be clear, this isn’t a call to demonize AI - of course not! The technology has immense potential for good, from personalized learning to mental health support. But there’s a fine line between enhancing user experiences and exploiting human psychology. Meta’s approach—if left unchecked—risks crossing that line in ways we’re not prepared to handle.

AI is not inherently good or bad—at this moment in time it’s a tool that reflects the intentions of its creators. So, what’s the takeaway here? For starters, we need transparency. If AI characters are going to be part of social platforms, we deserve to know who’s real and who’s not. More importantly, we need to question the narratives these AIs are pushing. Are they simply helping us connect, or are they shaping our perceptions for profit?

Finally, let’s take a moment to reflect on how we’re raising the next generation to navigate this AI-infused world. It’s easy to blame companies for tragedies, but the truth is, digital resilience starts at home.

If we’re not teaching our kids how to critically engage with technology, we’re setting them up for heartbreak—and worse.

But the problem is - and this is what I noticed at those festive dinners - that regular people (not you, dear newsletter reader) simply don’t know about this new technology.

As we hurtle toward this AI-driven future, let’s not lose sight of what makes us human. Genuine connection can’t be faked, no matter how convincing the algorithm.

And maybe that’s a truth worth holding onto.

Welcome to the Blacklynx Brief!


AI News

  • As featured in our main post, Meta is integrating AI-generated profiles and characters across platforms like Facebook, complete with bios, pictures, and content generation tools. Trials have already produced thousands of AI personas, with plans for new text-to-video features allowing creators to appear in AI-generated videos.

  • Chinese AI startup DeepSeek unveiled DeepSeek-V3, a 671B-parameter model using a Mixture-of-Experts architecture, achieving benchmark results rivaling top models like Llama 3.1. Trained in just two months at $5.57M, it excels in math and Chinese language tasks while remaining cost-efficient.

  • OpenAI announced plans to transition its for-profit arm into a Public Benefit Corporation (PBC), enabling greater funding while maintaining its nonprofit’s mission. The move follows a $6.6B funding round and aims to support charitable goals in education and science, though it faces legal challenges, including a lawsuit from Elon Musk.

  • Stanford researchers created an AI model enabling digital avatars to produce realistic gestures synchronized with speech and emotions. Trained on audiobooks and motion capture data, the system generates expressive, context-appropriate movements with less training data than prior models.

  • Arizona has greenlit a controversial charter school program where AI platforms like IXL and Khan Academy deliver personalized core instruction during a two-hour school day. Students will spend the rest of the day in human-led workshops on life skills like financial literacy, with pilot results showing doubled learning efficiency.

  • Alibaba released QVQ-72B-Preview, an open-source AI excelling at solving complex visual and analytical problems in math and science. Scoring 70.3 on the MMMU benchmark, the model rivals top closed-source systems and integrates advanced image analysis for step-by-step reasoning.

  • Carnegie Mellon University and Apple developed ARMOR, a perception system using distributed depth sensors as ‘artificial skin’ to boost robot spatial awareness. ARMOR reduces collisions by 63.7%, speeds up navigation by 78.7%, and operates 26x faster than camera-based systems, using affordable components. This breakthrough moves humanoid robots closer to working safely in unpredictable, real-world environments.

  • Chinese robotics firm AgiBot released AgiBot World Alpha, featuring over 1 million robot trajectories from 100 robots performing diverse tasks in industrial, domestic, and commercial settings. The dataset, 10 times larger than Google's Open X-Embodiment in navigational data, focuses heavily on household activities and is freely accessible on Hugging Face and GitHub. This release could accelerate robotics innovation and democratize access to premium training data for developers worldwide.

  • Hugging Face unveiled Smolagents, an open-source framework requiring just a few lines of Python code to create AI agents. The minimalist library supports multiple AI models, reduces steps with a CodeAgent feature for direct Python writing, and integrates with Hugging Face Hub for sharing tools. Smolagents makes agent development more accessible, paving the way for widespread adoption and innovation in AI agent technology.

  • New data from ZoomInfo reveals explosive growth in AI-focused roles, with leadership positions like AI-related C-suite jobs increasing 428% since 2022. Generative AI job titles have grown 250x, reflecting a significant shift across industries as companies prioritize AI strategies.

  • Scientists used AI trained on Raphael’s style to confirm that part of his famous Madonna della Rosa painting, specifically St. Joseph’s face, was likely painted by another artist, possibly Giulio Romano. The system, built with Microsoft’s ResNet50 framework, analyzed brushstrokes and colors with 98% accuracy, supporting art historians’ long-held suspicions.

Learn AI in 5 Minutes a Day

AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.

Our research team spends hundreds of hours a week summarizing the latest news, and finding you the best opportunities to save time and earn more using AI.

Quickfire News

  • Geoffrey Hinton, the "Godfather of AI," increased his estimated risk of human extinction due to superintelligent AI to 20% within the next three decades and called for stronger government regulation of AI development.

  • OpenAI and Microsoft reportedly agreed on a measurable definition for artificial general intelligence (AGI), outlined in a 2023 document as an AI system capable of generating $100 billion in annual profits.

  • Meta shared its vision for AI-generated characters becoming active social media participants, planning expansions from profile creation to content generation and live interactions across its platforms.

  • Chinese robotics firm Unitree demonstrated the B2-W, a rideable robot dog capable of carrying humans across difficult terrain and performing acrobatic maneuvers with advanced stability control.

  • Toyota’s AI-powered humanoid robot CUE6 set a new Guinness World Record for the longest basketball shot by a robot, sinking an 80-foot shot on its second attempt.

  • Nvidia completed its $700 million acquisition of Israeli AI startup Run:ai and announced plans to open-source the company’s hardware optimization software.

  • OpenAI is reportedly exploring the humanoid robotics market, building on investments in startups Figure AI and 1x, alongside its custom chip development initiatives.

  • Google product lead and ex-OpenAI staffer Logan Kilpatrick tweeted that a "straight shot to ASI (artificial superintelligence) is looking more probable by the month," referencing what Ilya Sutskever allegedly foresaw during his split from OpenAI.

  • TikTok parent company ByteDance is reportedly planning a $7 billion investment in Nvidia AI chips in 2025, utilizing overseas data centers to circumvent U.S. export restrictions targeting China.

  • Google CEO Sundar Pichai told employees during a strategy meeting that scaling the Gemini AI assistant for consumer use will be the company’s top priority for 2025, emphasizing the high stakes involved.

  • OpenAI introduced deliberative alignment, a safety method that teaches AI models to reason through safety guidelines before responding, with its o1 model demonstrating better rejection of harmful requests.

  • Alibaba Cloud announced price reductions of up to 85% on its Qwen-VL visual language model, aiming to boost enterprise adoption of its AI tools.

  • Leeds researchers are testing an AI tool that scans GP records to detect risk factors for undiagnosed atrial fibrillation, aiming to prevent strokes through early identification and treatment.

  • Scientists are leveraging AI hallucinations as tools for breakthrough discoveries, with the New York Times citing examples in protein design, medical devices, and weather prediction.

  • University of Toronto researchers developed an AI app capable of detecting high blood pressure from voice recordings, achieving up to 84% accuracy without traditional measurement devices.

  • The IRS deployed AI tools to detect fraud patterns and analyze financial data, addressing the growing use of AI by criminals for sophisticated schemes.

  • KoBold Metals raised $537 million in a Series C funding round at a $2.96 billion valuation to accelerate AI-driven critical minerals exploration and mining operations.

  • Thousands gathered in Birmingham for a non-existent New Year's Eve fireworks display, misled by AI-generated blog posts and social media posts, despite police warnings that no event was planned.

  • CSIC researchers developed a "molecular lantern" probe that uses light and AI to detect brain changes without requiring genetic modifications.

  • OpenAI missed its self-imposed 2025 deadline for delivering Media Manager, a tool promised to help creators control their content's use in AI training data.

  • Defense contractors are preparing for a spree of acquisitions in AI, drone, and space technologies, as cash reserves are expected to reach $50 billion by 2026.

How did we do today?


Closing Thoughts

That’s it for us this week.

If you find any value in this newsletter, please pay it forward!

Thank you for being here!
