This is Why Your Colleague Got Promoted

In partnership with

Good morning,

This week, The Economist reported that 40% of American knowledge workers are actively using generative AI at work. That is a LOT.

That's faster adoption than the internet itself saw. But here's where it gets weird: when employers were asked, only 5% of American businesses said they were officially using it.

While you might feel guilty for even thinking about using it at work, your colleague might be doing everything inside a chatbot. That is why your colleague seems to work three times as fast as you and still has time to linger near the coffee machine.

Some of these businesses are running “pilot projects” to see whether it would work for them, while in secret half their employees are already using it for all kinds of tasks.

Back in the day, the first smartphones (or the abomination that was the BlackBerry) entered our homes because we were issued one at work. Soon you saw them everywhere, and that was basically the death of the 9-to-5.

With AI, it seems to be the other way around: people are using the technology at home and, by extension, bringing it into work.

Of course if you’re “working from home,” that line blurs quickly.

So why are companies so slow to adopt AI?

A few reasons, off the top of my head:

  • There’s not a lot of legislation or regulation around it yet. Sure, the AI Act is here, but everybody is still reading that thing.

  • It’s evolving fast: new tools are released daily, along with new versions and new capabilities.

  • Microsoft Copilot and Google Gemini carry a hefty price tag (about €20 per user per month) compared to the “free” ChatGPT, and to be honest, they’re underwhelming.

  • There are so many choices: Microsoft Copilot, GitHub Copilot, Claude, Midjourney; the list goes on and on.

But ignoring this is a bad idea: if you don’t teach your users to be careful with your data, they will not be careful with your data.

You do not want to see the data that has leaked to OpenAI, Google, or Microsoft over the last few years. People have proven to be reckless: they put photos of their own children on social media, so why wouldn’t they leak company data on a massive scale?

I have a rather pessimistic view on privacy, to be honest. With social media, we started giving away our privacy willingly to large tech companies and allowed them to train AI models on it. We have been feeding the beast all along.

I honestly think the privacy ship has sailed. Once we enter the realm of AGI or ASI, nothing will be secret anymore.

Anyway, that was it for this week’s good news show.

Welcome to the Blacklynx Brief!

By the way, there is now a referral program for this newsletter: for every three people you get to sign up, we will gift you a €15 Amazon gift card.

Scroll to the bottom of the newsletter for details!

Learn AI in 5 Minutes a Day

AI Tool Report is one of the fastest-growing and most respected newsletters in the world, with over 550,000 readers from companies like OpenAI, Nvidia, Meta, Microsoft, and more.

Our research team spends hundreds of hours a week summarizing the latest news, and finding you the best opportunities to save time and earn more using AI.

AI News

  • OpenAI acquired the premium domain name chat.com, now redirecting to ChatGPT, from HubSpot founder Dharmesh Shah in a deal reportedly worth over $15 million, potentially involving OpenAI stock. This rebranding move hints at OpenAI’s broader vision to transcend the GPT era as it pivots towards advanced reasoning models like Strawberry and Orion.

  • Nvidia unveiled new robotics tools at the 2024 Conference on Robot Learning (CoRL), including the Isaac Lab framework, specialized workflows for humanoid robots, and a partnership with Hugging Face. The advancements position Nvidia as a key enabler for the robotics industry, accelerating the development of humanoid capabilities and robot AI integration.

  • Microsoft introduced Magentic-One, an AI orchestration system that coordinates multiple specialized agents to handle complex real-world tasks like coding or browsing. Released as open source, Magentic-One pushes multi-agent coordination closer to practical use, advancing the potential for AI teams to automate daily and professional workflows.

  • OpenAI has integrated web search directly into ChatGPT, allowing users to access real-time information on topics like news, weather, and stocks, complete with source links. Powered by a specialized GPT-4o model, the feature triggers automatically or can be activated manually, and draws on licensed content from major publishers like AP and Reuters. Initially available to Plus and Team users, this update positions ChatGPT as a powerful hybrid of AI chatbot and search engine.

  • At its London Dev Day, OpenAI showcased the capabilities of its o1 model and Realtime API, including demos for app creation and voice-based tasks, while announcing major price cuts for API services. During a Reddit AMA, CEO Sam Altman addressed computing limitations impacting product rollouts, confirming that GPT-5 won’t launch in 2024 but hinting at significant updates later this year.

  • Nvidia unveiled HOVER, a groundbreaking 1.5M parameter neural network capable of controlling whole-body robot movement with diverse input methods like VR and motion capture. Trained in Nvidia’s Isaac simulator, which accelerates robot learning, HOVER works seamlessly across devices and transfers directly to real-world robots without fine-tuning.

  • Decart and Etched have launched Oasis, an AI model that generates interactive video game environments in real-time, including physics, lighting, and item interactions. Running at 20 FPS, Oasis is 100x faster than traditional AI video generation models and includes a playable Minecraft-style demo. This innovation could transform game development by eliminating traditional engines.

  • Runway introduced Advanced Camera Control for its Gen-3 Alpha Turbo model, allowing precise manipulation of AI-generated video with panning, zooming, and tracking features. The tool preserves depth and spatial consistency, marking a shift toward filmmaker-level control in AI video production.

  • Anthropic’s Claude 3.5 Sonnet now supports PDF analysis in public beta, enabling the AI to process text and visuals, such as charts and images, within documents up to 32MB or 100 pages. Available via platform and API, the feature enhances industries like finance and healthcare, where visual and textual data often intertwine.

  • Meta is making its Llama AI models available to U.S. government agencies and defense contractors, marking a policy shift to support national security applications. Early partnerships with firms like Lockheed Martin and Oracle include using Llama for aircraft repairs and threat analysis. The move, framed as a step toward establishing open AI standards, follows reports of Chinese military use of older Llama models and raises questions about the intersection of tech and defense.

  • Anthropic launched the Claude 3.5 Haiku model, improving reasoning, tool use, and coding but with a 4x price increase compared to its predecessor. While the model outperforms previous versions, the pricing has sparked criticism, particularly as similar competitors offer comparable benchmarks at lower costs.

  • AI startup Physical Intelligence secured $400M in funding from Jeff Bezos, OpenAI, and other investors to develop its π0 model for general-purpose robot control. Demos show robots completing multi-stage tasks like folding laundry and packing eggs, powered by training on one of the largest datasets in the robotics field.

  • Apple has introduced developer tools for its upcoming Siri screen awareness feature in Apple Intelligence, allowing apps to make visible content accessible to Siri without workarounds like screenshots. These tools, paired with early ChatGPT integration testing in iOS 18.2, aim to transform Siri into a more contextually aware assistant.

  • Tencent unveiled Hunyuan-Large, a 389B-parameter language model utilizing a Mixture-of-Experts (MoE) architecture for efficiency, with only 52B parameters active at a time. The model achieved SOTA performance on the MMLU benchmark and supports context lengths of up to 256K tokens, double that of comparable models.

  • Apple is reportedly exploring smart glasses development under an internal project codenamed Atlas, gathering feedback on existing products and use cases. The initiative follows Meta’s success with Ray-Ban smart glasses and the challenges Apple faces with its bulky Vision Pro headset.
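The Mixture-of-Experts idea behind the Tencent item above is worth a closer look: a small "router" scores all experts for each token, but only the top-scoring few actually run, which is how a 389B-parameter model can activate just 52B at a time. Here is a minimal sketch in plain Python with toy dimensions chosen purely for illustration; the real model's experts, router, and scale are far more sophisticated.

```python
import math
import random

random.seed(0)

D = 8           # hidden dimension (toy value for illustration)
N_EXPERTS = 4   # total experts; production MoE models use many more
TOP_K = 1       # experts actually activated per token

# Each expert is a simple D x D linear map; the router holds one
# score vector per expert.
experts = [[[random.gauss(0, 0.1) for _ in range(D)] for _ in range(D)]
           for _ in range(N_EXPERTS)]
router = [[random.gauss(0, 0.1) for _ in range(D)] for _ in range(N_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token):
    # The router scores every expert, but only TOP_K of them run.
    scores = softmax([dot(w, token) for w in router])
    chosen = sorted(range(N_EXPERTS), key=lambda i: -scores[i])[:TOP_K]
    out = [0.0] * D
    for i in chosen:
        expert_out = [dot(row, token) for row in experts[i]]
        out = [o + scores[i] * e for o, e in zip(out, expert_out)]
    return out, chosen

token = [random.gauss(0, 1) for _ in range(D)]
output, active = moe_forward(token)
print(f"active experts: {active} of {N_EXPERTS}")
```

The payoff is in the loop: total parameter count grows with N_EXPERTS, but per-token compute grows only with TOP_K, which is exactly the "52B of 389B active" trade-off.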

Quickfire News

  • Anthropic announced that Claude is now available on desktop apps for Apple and Windows, along with new dictation capabilities for mobile and iPad users.

  • Google Maps integrated Gemini, adding personalized recommendations, AI-powered navigation, and expanded Immersive View features.

  • Meta’s FAIR team unveiled three open-source tactile sensing advancements, including a human-like artificial fingertip and a unified platform for robotic touch integration.

  • D-ID launched Personal Avatars, a hyper-realistic AI avatar suite for marketers, enabling real-time digital human interactions from just one minute of source footage.

  • Microsoft delayed the release of the Copilot Plus ‘Recall’ feature to December, citing ongoing updates to security and opt-in controls for the AI-powered screenshot system.

  • Google introduced ‘Learn About,’ an experimental conversational tool for exploring a variety of topics through an AI-powered interactive interface.

  • ElevenLabs debuted ‘X to Voice,’ a feature that creates a unique AI voice profile based on a user’s X (formerly Twitter) social media account.

  • Chinese military researchers reportedly used Meta's open-source Llama model to create ChatBIT, an AI tool for military intelligence analysis and strategic planning.

  • Microsoft teased its upcoming ‘Copilot Vision’ feature, which will allow the AI assistant to see and understand browser content and user behavior.

  • Google released ‘Grounding with Google Search’ for its Gemini API and AI Studio, enabling developers to integrate real-time search results into model outputs to improve accuracy and reduce hallucinations.

  • Disney launched an ‘Office of Technology Enablement’ to oversee AI and mixed reality adoption across its divisions, focusing on responsible deployment of the technologies.

  • Amazon delayed the rollout of its AI-enhanced Alexa to 2025 due to technical challenges, including hallucinations and declining performance on basic tasks during testing.

  • Nvidia researchers introduced DexMimicGen, a system capable of generating thousands of robotic training demonstrations from as few as five examples, achieving a 90% success rate on real-world humanoid tasks.

  • Perplexity CEO Aravind Srinivas responded to a post on X about the New York Times Tech Guild strike, offering AI assistance for election coverage shortly after the platform announced a dedicated U.S. election news hub.

  • Amazon Prime Video introduced X-Ray Recaps, an AI-powered feature that generates personalized show summaries at any point during viewing, designed to avoid spoilers.

  • Hume launched a new app featuring AI assistants combining the EVI 2 speech-language model with Claude 3.5 Sonnet and Haiku, offering conversational interactions, emotional reflection, deep questions, and life advice.

  • Netflix executive Mike Verdu transitioned to a new role as VP of GenAI for Games, emphasizing AI's potential to transform game development weeks after Netflix laid off 35 developers from its Team Blue studio.

  • OpenAI introduced a ‘Predicted Outputs’ feature for GPT-4o models, enabling developers to reduce response times by supplying reference text for tasks like code refactoring and document editing.

  • Google’s Big Sleep AI agent uncovered a major safety flaw in the SQLite database system, marking an AI security research milestone by identifying an issue missed by traditional testing programs for years.

  • Former Meta AR lead Caitlin Kalinowski announced she is joining OpenAI to lead its robotics and consumer hardware initiatives, aiming to integrate AI into the physical world.

  • T-Mobile will pay $100 million to OpenAI over the next three years to develop an "intent-driven" AI platform for automating customer service tasks and integrating with operational systems.

  • Meta's plans for a nuclear-powered AI facility faced delays after the discovery of a rare bee species at the proposed site raised regulatory and environmental concerns.

  • Apple’s iOS 18.2 Beta 2 revealed that ChatGPT integration with Siri will include daily usage limits for free users and a $19.99 monthly Plus plan for expanded GPT-4o and DALL-E features.

  • Amazon received FAA approval for its new MK30 delivery drones, enabling beyond-line-of-sight flights and advancing autonomous delivery capabilities.

  • Unitree Robotics showcased its Humanoid G1 and Go2 robots in a new video, demonstrating natural walking gaits, improved balance, and enhanced coordination.

  • Google announced plans for an AI hub in Saudi Arabia focused on Arabic language models and regional applications, despite earlier commitments to reduce ties with the fossil fuel industry.

  • Google accidentally leaked "Jarvis," an AI system designed to autonomously control a computer and perform web-based tasks, on the Chrome Web Store ahead of its planned December release.

  • Saudi Arabia announced "Project Transcendence," a $100 billion AI initiative to position the kingdom as a global tech leader through investments in data centers, startups, and infrastructure.

  • Perplexity is reportedly raising funding at a higher valuation despite facing legal challenges from major publishers over its content usage practices.

  • Chinese AI video platform KLING is introducing a "Custom Models" feature, enabling users to train personalized video characters using 10-30 clips for consistent appearances across different scenes and camera angles.

  • Microsoft filed a patent for a "response-augmenting system" aimed at reducing AI hallucinations by having the model verify its answers against real-world information before responding to users.

Closing Thoughts

That’s it for us this week.

If you find value in this newsletter, please pay it forward: you’ll receive a €5 Amazon gift card for every person you get to sign up. What’s keeping you?
