Fattening Up The Pig

In partnership with

Good morning.

A few weeks ago, I predicted the end of 2024 would be rather boring, and that your AI-sceptic colleague (we all have one) would be sitting there at lunch gloating, silently mocking your foolish optimism about AI.

Well, it seems OpenAI had other plans. The company is in dire need of funds and wants to speed up the training of its new model.

And so it is that Strawberry, aka OpenAI o1, came into our lives last Thursday.

Here’s our attempt at explaining the significance of this step. As an aside, you can now scrap “Strawberry” from your AI lingo repertoire and start talking about “Orion.” Read on to find out why!

What is this new update? In OpenAI’s own words, it’s a new series of models (codenamed o1) that “spend more time thinking” before they respond. These models really do shine in exact sciences like coding and math.
The improvements are pretty wild, as per OpenAI’s published results.

The internet has been buzzing with excitement over the last few days as users are playing around with o1 and arriving at exciting but slightly scary results.

Not everyone is convinced, however. Although we have talked about this phenomenon in the past, there are a lot of people who “test” an LLM, aren’t impressed with how it handles the task they’ve given it, and then dismiss the concept of AI altogether.

It’s part of every technological revolution that a group of people strongly opposes the new technology and doubles down on betting it won’t succeed.

Don’t get blindsided.

ANYWAY.

Large Language Models (LLMs) can improve in two main ways:

  1. By learning from more data. The more information you feed these models, the better they become at understanding and generating responses.

  2. By allowing the model to "think" more. This means giving the model more time or steps to process the information before answering. The more carefully it reasons through a problem, the more accurate its responses can be.

However, while LLMs have been trained on huge amounts of data, they haven't seen everything. They have processed a lot of data from the internet, books, and articles, but there’s always more specialized or updated data they can learn from. So, they can still improve by adding more data. Still, they are running up against limits on what they can absorb.

At the same time, a newer method of improving these models involves letting them "think" more deeply, which helps them handle more complex tasks.

This process can generate new kinds of insights, which means AI is getting better not just by learning from more data but also by reasoning more effectively.

By applying this reasoning, the models arrive at new insights, and those insights are stored as “synthetic data.”
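As a rough illustration (not OpenAI's actual pipeline, which isn't public), a synthetic-data loop can be sketched like this: sample several reasoning chains for a question, keep the answer the majority of chains agree on, and store one supporting chain as a new training example. The `sample_reasoning` stub below is a hypothetical stand-in for a real model call:

```python
from collections import Counter

def sample_reasoning(question, i):
    """Hypothetical stand-in for sampling one reasoning chain from an LLM.

    A real system would call a model API here; this toy version returns
    canned (chain, answer) pairs, one of them wrong, to mimic the noise
    in sampled chains.
    """
    canned = [
        ("12 * 12 = 144", "144"),
        ("12 * 12 = 144", "144"),
        ("12 * 12 = 124", "124"),  # an occasional faulty chain
        ("12 * 12 = 144", "144"),
        ("12 * 12 = 144", "144"),
    ]
    return canned[i % len(canned)]

def distill_synthetic_example(question, n_samples=5):
    """Sample several chains, keep the majority answer, and store one
    chain that supports it as a synthetic training example."""
    samples = [sample_reasoning(question, i) for i in range(n_samples)]
    majority, _ = Counter(ans for _, ans in samples).most_common(1)[0]
    chain = next(c for c, ans in samples if ans == majority)
    return {"prompt": question, "reasoning": chain, "answer": majority}

example = distill_synthetic_example("What is 12 * 12?")
print(example)  # the majority answer "144" survives; the faulty chain is filtered out
```

The point of the majority vote is quality control: reasoning chains that disagree with the consensus get filtered out, so what lands in the synthetic training set is, on average, cleaner than any single sampled response.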

OpenAI’s next LLM, codenamed "Orion," is said to be built using synthetic data generated by these new reasoning engines. This could help push AI capabilities even further.

While we are asking our questions to this new model, in the background, Orion is like a pig getting fattened up with every response.

A few weeks ago, we talked about the race to AGI (Artificial General Intelligence) and how OpenAI is running a negative cash flow in its quest to get there first.

This is exactly why this model was released so early: we are now all feeding Orion with synthetic data by interacting with o1.

We are fattening up the pig.

Sit tight—because it’ll get a lot more interesting in the next couple of weeks as Orion might be closer than we think.

By the way, there is now a referral program for this newsletter. For every 3 people that you get to sign up, we will gift you a 15€ Amazon gift card.

Scroll to the bottom of the newsletter for details!

All your news. None of the bias.

Be the smartest person in the room by reading 1440! Dive into 1440, where 3.5 million readers find their daily, fact-based news fix. We navigate through 100+ sources to deliver a comprehensive roundup from every corner of the internet – politics, global events, business, and culture, all in a quick, 5-minute newsletter. It's completely free and devoid of bias or political influence, ensuring you get the facts straight.

AI News

  • As mentioned above and to reiterate: OpenAI has launched its new o1 model, featuring advanced reasoning capabilities integrated into ChatGPT for Plus and Team users. According to OpenAI, it performs on par with human experts in science, coding, and math benchmarks, opening up new possibilities for solving complex problems. API access comes at a premium, priced much higher than previous models. The o1 model scored an IQ of 120 on the Norway Mensa test, showing strong problem-solving abilities in logic and visual puzzles.

  • Google DeepMind introduced two AI systems, ALOHA Unleashed and DemoStart, which enhance robot dexterity for tasks like tying shoelaces and repairing robots. These systems mark progress toward more practical robotic applications, with success rates as high as 97% (!!) in real-world tasks.

  • The White House is launching a task force to develop AI datacenter infrastructure, ensuring U.S. leadership in AI. The initiative focuses on datacenter growth, energy efficiency, and national security, with support from tech companies like Nvidia and Google.

  • Fei-Fei Li has launched World Labs, a company focused on developing AI models that understand and generate 3D environments, called "Large World Models" (LWMs). These models incorporate spatial intelligence, physics, and semantics to interact with virtual spaces, attracting over $230 million in funding. The innovation aims to push AI beyond text-based systems, opening new opportunities in AR/VR, robotics, and creative fields.

  • Tencent introduced GameGen-O, an AI capable of generating open-world video game content from text prompts, reducing development time and costs. The model creates characters, environments, and actions, allowing gamers to interact with these worlds. Trained on 4,000 hours of game footage, GameGen-O sets a new benchmark for AI in game development.

  • Microsoft has unveiled the next iteration of its Copilot AI assistant, adding new features like Copilot Pages for collaborative work, Copilot Agents for automating business processes, and a no-code Agent Builder for easy AI development. Integrated across Microsoft 365 apps, these enhancements promise faster and more user-friendly AI-driven productivity tools.

  • Slack has introduced AI-powered features, including Salesforce's Agentforce AI, which automates tasks and integrates CRM data directly within Slack. New tools like huddle notes, an AI workflow builder, and pre-built templates aim to boost productivity for its 32 million daily users by embedding intelligent automation directly into their workflow.

  • AI startup Groq is partnering with Saudi oil giant Aramco to build what could be the world's largest AI inferencing center in Saudi Arabia. With plans to house up to 200,000 language processing units, this project underscores Saudi Arabia's commitment to AI infrastructure and could significantly enhance AI processing capabilities across the Middle East, Africa, and India.

  • Snap has introduced its latest Spectacles, AR glasses powered by the new Snap OS, designed to enhance social interactions through augmented reality. These lightweight glasses offer AI-driven features like My AI and immersive lenses, but the 45-minute battery life and limited field of view indicate room for improvement. Available to developers for $99/month, these Spectacles represent a bold step forward in AR technology.

  • AI startup 1X has developed a 'World Model' that allows robots to predict complex object interactions and imagine multiple future scenarios based on thousands of hours of real-world robot data. This model enhances the capabilities of 1X's humanoid robots, aiming to create smarter, more versatile robots for everyday tasks. The company is also releasing data and resources to encourage further research in robotics.

  • A study by Hong Kong researchers suggests that large language models (LLMs) like GPT-4 possess a dynamic memory similar to human memory, challenging current views on AI cognition. The study shows that LLMs can memorize and generate outputs based on specific inputs, much like humans, potentially narrowing the gap between artificial and human intelligence. This finding could reshape how we approach AI development, focusing on improving hardware and data rather than fundamental differences in cognition.

  • Lionsgate has partnered with the AI video generation company Runway to develop a custom AI model based on its extensive film catalogue. This AI will assist filmmakers in generating and manipulating cinematic content, potentially streamlining both pre- and post-production processes. This collaboration marks a significant move towards integrating AI into Hollywood.

  • YouTube has introduced new AI features, including text-to-video generation for shorts, AI-powered inspiration tools, and advanced dubbing capabilities. These tools aim to assist creators by enhancing creativity and making content production more efficient, with clear AI labeling to maintain transparency. YouTube’s embrace of AI highlights the platform's commitment to integrating technology that supports rather than replaces human creativity.

  • Google Research has developed an AI model capable of identifying vocalizations from eight whale species, including the previously mysterious "Biotwang" sound of Bryde’s whales. This tool is designed to improve conservation efforts by enabling more accurate tracking of whale populations through acoustic monitoring.

Quickfire News

  • Google started rolling out Gemini Live for free users on the Gemini Android app, featuring natural voice conversations with the AI assistant and 10 new voice options.

  • ChatGPT reportedly surpassed 11 million paying subscribers, including 1 million on higher-priced business plans, potentially generating over $2.7 billion annually, according to OpenAI's COO Brad Lightcap.

  • Mastercard agreed to buy AI-powered threat intelligence company Recorded Future for $2.65 billion, aiming to strengthen its cybersecurity capabilities.

  • Salesforce introduced Agentforce, a suite of low-code tools designed to build autonomous AI agents that can reason and handle tasks in sales, service, marketing, and commerce.

  • Google launched DataGemma, a new open model that connects large language models with real-world data from its Data Commons, aiming to reduce AI hallucinations by grounding responses in factual statistics.

  • Hume AI unveiled Empathic Voice Interface 2 (EVI 2), a voice-to-voice foundation model trained for emotional intelligence, capable of interpreting and generating various tones of voice and speaking styles.

  • Runway launched Gen-3 Alpha Video to Video, allowing users on all paid plans to transform input videos using AI-generated styles and prompts.

  • Meta admitted to scraping public data from all Australian adult users to train AI models without providing an opt-out option like it does for EU users.

  • Google AI Studio introduced a new model comparison feature, enabling users to easily compare outputs from different AI models and parameter settings.

  • Researchers developed "g1," an AI system using Llama-3.1 on Groq hardware that employs reasoning chains to solve complex problems, similar to OpenAI's o1 model.

  • A new AI chatbot using GPT-4 Turbo successfully reduced belief in conspiracy theories among users, with effects lasting months after brief interactions.

  • Montana State University is working on AI methods using neural symbolic regression to help farmers optimize crop yields with precision agriculture.

  • Researchers are developing AI-piloted drone swarms with up to 30 autonomous aircraft working together to detect and extinguish wildfires.

  • Luma Labs introduced the Dream Machine API, enabling developers to integrate their popular video generation AI model into applications without needing to build complex tools.

  • Google announced significant performance upgrades for Gemini 1.5 Flash, reducing latency by over 3x and increasing output tokens per second by more than 2x.

  • A Canadian study found that an AI early warning system reduced unexpected patient deaths by 26% by monitoring vital signs and alerting staff for early intervention.

  • Before his death, James Earl Jones agreed to let AI replicate his Darth Vader voice, ensuring the character's continuation in future Star Wars productions.

  • AI pioneers called for international oversight to address potential catastrophic risks from rapidly advancing AI technology, warning it could soon surpass human control.

  • OpenAI announced enhanced safety and security measures, including establishing a board oversight committee chaired by Zico Kolter to monitor model development and deployment.

  • Microsoft and Blackrock launched a $100 billion fund to invest in AI data centers and related power infrastructure, with an initial $30 billion raised.

  • Sakana AI secured approximately $200 million in Series A funding from Japanese companies to accelerate AI development and market expansion.

  • Google introduced 10 voice options for Gemini Live, allowing users to customize their AI assistant’s vocal interface.

  • OpenAI increased rate limits for the o1-mini and o1-preview models, allowing Plus and Team users up to 50 daily interactions with o1-mini.

  • Perplexity introduced a "reasoning" focus for Pro users, offering up to 10 daily uses of OpenAI’s o1-mini model for puzzles, math, and coding tasks.

  • The Mark Cuban Foundation launched a free AI bootcamp for Dallas teens, partnering with the Mavericks to teach AI fundamentals and applications.

  • Intel announced a partnership with Amazon to manufacture custom AI chips, expanding their foundry business and semiconductor capabilities.

  • Lenovo announced plans to manufacture AI servers in India and opened a new AI-focused R&D lab in Bengaluru, aiming to produce 50,000 units annually.

  • Together AI's LlamaCoder app, using Llama 3.1 405B, has generated over 200,000 applications since launch and gained more than 2,000 GitHub stars.

  • The Biden administration announced an international AI safety meeting in San Francisco for November, bringing together experts from nine countries and the EU.

  • OpenAI reportedly warned users against probing the reasoning processes of its new o1 models, with potential bans for policy violations.

  • Northwestern University received $20 million to lead a new AI research institute focused on developing tools for analyzing astronomy and astrophysics data.

  • Google announced $25 million in funding for AI education initiatives, aiming to train over 500,000 educators and students in artificial intelligence skills.

  • Alibaba released Qwen 2.5, a multilingual AI model with 72B parameters, competing with larger models across various performance benchmarks.

  • Nvidia launched its AI Aerial platform to optimize wireless networks and enable new AI experiences on a unified infrastructure for telecom providers.

Closing Thoughts

That’s it for us this week.

If you find any value in this newsletter, make sure to send the link below to your colleagues and friends:
