The Blacklynx Brief - Vol.9 - A Pivotal Moment
Welcome to Volume 9 of the Blacklynx Brief.
We listened to the feedback and present a new format this week. Our readers (we’re a bit along in the triple digits by now) have spoken: more AI and more tools, please.
So here goes.
For the last few weeks I’ve started the newsletter by proclaiming “this is the craziest week yet”, only to have to announce even crazier news in the next edition.
This week might be the craziest of them all. We might have reached a pivotal moment: one of those moments we can look back on and say, “that is when it all changed”.
So why do I say this?
Well, for the first time, the latest AI innovations are being seamlessly incorporated into the tools we all use every day. The big players in the space, like Microsoft, Google and Meta, are coming forward with their strategies and goals for AI.
Instead of you as a user ending up with a myriad of tools, the end game is to have the AI tools centralized within their respective ecosystems.
Soon, you will call upon ChatGPT in the form of Copilot from your Excel sheets, or ask Bard a question from Google Docs. This is the week where that became solidified.
On top of that there’s Meta that is doubling down on its metaverse vision and is integrating AI tools in augmented reality.
Grab a coffee and join me as we go down this week’s rabbit hole.
🗣 ChatGPT's Multisensory Evolution 🗣
The big one this week is that OpenAI is rolling out the crazy features that were announced last week. A lot of users in the US are reporting that the features have been activated; it seems EU users will have to wait a bit longer, though some of the features below have already popped up for me.
First of all there’s GPT-4V, or Vision. ChatGPT can now understand images, photographs, screenshots and text documents.
Some use cases:
Recognising handwriting
Asking for advice based on a picture
Analysing pictures and drawings. Here’s a crazy one from Twitter user @mckaywrigley
Go to his Twitter (or X, I should say) profile to see the latest ChatGPT functions in action.
You could have a meeting where you draw something on a whiteboard and ask ChatGPT to code a website based on what is on the whiteboard.
On top of that, if you open your ChatGPT app and you’re a Plus subscriber, you will notice a new headphone icon at the top: ChatGPT can now hold a spoken conversation with you.
A newsletter is the wrong medium to actually SHOW all of this in action, so here’s a YouTube video on how to do it. By the way, it speaks thirty languages, including Dutch, and it speaks them quite well.
The third feature is that ChatGPT will soon be able to generate images based on prompts, just like Midjourney or Stable Diffusion, only now from your ChatGPT interface. It’s reportedly also being rolled out slowly across the user base.
Watch this video to see it in action. Courtesy of MattVidPro AI who got early access.
Put all this together and we have an incredibly powerful tool at our fingertips.
This is the type of conversation one should have at a bar - with a glass of the finest single malt in hand - but I cannot stress enough how this will change how we interact with information going forward.
This will speed up our productivity 10x but also bring terrible danger in the form of deepfakes.
🌌 Meta's AI Announcements 🌌
At last week’s ‘Meta Connect’ event, Mark Zuckerberg presented Meta’s vision for AI.
Facebook was renamed to Meta because of its belief in the concept of “the metaverse”. This topic can get complex, but basically Meta wants to build a collection of digital spaces in which users can work, play and interact.
You’ll be sitting in a virtual meeting room, looking at virtual versions of your colleagues, having virtual conversations while you’re actually sitting in your underwear on your sofa.
Anyway, enough about my bad habits.
Some of the things that were announced in the AI space:
Work continues on Llama 2, Meta’s large language model
Meta has developed its own image-generation tool, called EMU, for “Expressive Media Universe”
This tool will be integrated into Instagram where you can create “stickers” to put onto your posts by merely telling the tool what you want.
Chatbots : Meta introduced a range of niche chatbots, each with a unique specialization. From 'Max' the sous chef, 'Lily' the editor, to 'Dungeon Master Snoop' for D&D fans, there's something for everyone. They even teased the idea of chatbots embodying user personalities in the metaverse.
AI Studio: Meta is working on a platform called ‘AI Studio’ (you don’t need an expensive branding agency to come up with a name). This will allow users and businesses to build and integrate their own AIs.
But more importantly - and this is the big innovation - Meta introduced smart glasses that do not make you look like a total dork.
Meta AI Sunglasses: Meta introduced Ray-Ban smart sunglasses with a built-in camera. These sunglasses are not just for vision: they also have speakers and microphones, so you can talk to them and hear responses. Moreover, they are embedded with AI. Wearing them, you can ask questions like “What’s the square root of 7362?” and receive the answer directly in your ear. An upcoming update will enhance their capabilities, allowing the glasses to “see” and provide details about objects you’re looking at, including reading and translating signs in foreign languages. Once multimodal, these sunglasses will offer users vast information about their environment.
They also look more or less ‘normal’, and they clearly advance the trend of AI being integrated into our daily lives.
At this point I’m glad I gave up the cybersecurity part of the newsletter, because this will completely destroy the concept of privacy. At this rate we’re not far from chip implants either.
I plan to show up with one of these at the next local pub quiz and destroy the opposition.
🌌 Various AI News 🌌
Windows 11 received an AI upgrade. With this update, a “Copilot preview” feature has been introduced. It resembles Bing Chat in Microsoft Edge but can also control apps on the computer. While it can generate images and manage computer settings, the current version appears to have limitations. However, Microsoft has hinted at more comprehensive features in future updates.
Canva Drops Magic Studio Canva, everyone’s favorite design suite, just released Magic Studio, a web-based AI design tool promising to save you a lot of time.
macOS Sonoma, the new operating system from Apple, was launched. While not heavily AI-centric, it does include an AI-driven overhaul of the keyboard autocorrect feature, using advanced on-device machine learning for better accuracy.
Amazon Teams Up with Anthropic Amazon announces a whopping $4 billion investment in Anthropic; AWS (Amazon Web Services) will now be Anthropic’s primary cloud provider. This partnership comes even as Amazon pushes forward with its own AI platform, Amazon Bedrock, and while Anthropic also garners support from tech giant Google.
Amazon Bedrock Goes Public Amazon announces general availability for Amazon Bedrock. This move makes it easier for companies of any size to build AI models fine-tuned to their specific needs.
Cloudflare Dives Deep into AI Cloudflare, a renowned CDN (Content Delivery Network), announces the launch of a unique AI toolkit. This suite aids customers in deploying and managing AI models, capitalizing on Cloudflare's extensive GPU infrastructure.
Snapchat and Microsoft Collaborate on AI Ads Snapchat pairs with Microsoft for its My AI chatbot feature. It offers users AI-driven recommendations, potentially sponsored by businesses; these sponsored placements are now available for purchase via Microsoft’s advertising platform.
Spotify’s Voice Clone Tech In a revolutionary move, Spotify unveils its capability to replicate podcasters’ voices and translate content into various languages. This feature ensures global listeners experience podcasts in their language while preserving the original tone.
Getty’s Exclusive AI Image Generator Getty steps into the AI domain with its own image generator, trained exclusively on Getty-licensed images. Access remains restricted, however: you have to request a demo.
Leonardo AI Showcases LoRAs Leonardo AI rolls out LoRAs (Low-Rank Adaptations). These aren’t fully custom-trained models but lightweight add-ons that steer an existing model’s style, and users can combine multiple LoRAs for diverse stylistic outputs.
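For the curious, here is a rough sketch of the general idea behind LoRA, not Leonardo’s specific implementation (all names and numbers below are illustrative): instead of fine-tuning a huge weight matrix, you freeze it and train two tiny low-rank matrices whose product is added on top.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4          # r is the "low rank", much smaller than d

W = rng.standard_normal((d_out, d_in))   # frozen base weights (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # small trainable adapter matrix
B = np.zeros((d_out, r))                   # starts at zero: no change at init

def forward(x):
    # Effective weights are W + B @ A; only A and B would be trained.
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
y = forward(x)

# The adapter adds only r*(d_in + d_out) parameters instead of d_in*d_out.
print(r * (d_in + d_out), "adapter params vs", d_in * d_out, "full params")
```

Because B starts at zero, the adapted model initially behaves exactly like the base model, and a trained (A, B) pair can be swapped in and out or combined with others, which is what makes mixing multiple LoRAs cheap.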
Pika Labs Innovates with Video Messaging Pika Labs introduces a distinctive feature: text messages embedded directly in generated videos. The platform remains freely accessible.
Genmo AI’s Dynamic Replay FX Genmo AI unveils Replay FX, a unique kaleidoscope effect, stabilizing the central face while animating the surroundings. This AI video feature offers a plethora of effects to experiment with.
Hollywood & AI: The WGA strike concludes with an interesting term: Hollywood studios now have permission to train AI models on writers’ work. The rationale is that AI would eventually train on these works anyway, so it’s better to control how they’re used.
Tesla's Optimus Robot: Tesla showcased the Optimus robot's capability to sort objects based on color autonomously, demonstrating its adaptability in real-world dynamic conditions.
Not creepy at all…
AI Tool of the Week
As for this week’s AI tool, where we highlight companies using AI to revolutionize work, we’re looking at Legalfly (there are a lot of lawyers on the mailing list for some reason).
This is a Ghent-based startup looking to insert AI into law firms - which I can imagine is quite the challenge as I’ve met some of them who have yet to incorporate the personal computer into their workflow, let alone AI.
If you are of the legal persuasion, check them out!
Closing Thoughts
The pace at which AI is evolving is astonishing, making the rest of the year promising for enthusiasts and professionals alike. As we continue to stride into this new reality, let's focus on the myriad of opportunities and potentials AI brings to our doorstep.
Until next week, keep exploring and stay curious!