
Google I/O 2024: Highlights on updates and future prospects


Written by: Toni Hukkanen, Head of Design (Creative Direction, Brand Direction)

10 min read

Updated on: May 23, 2024


Google I/O 2024 just wrapped up, and it was a major flex for AI. Google didn’t just announce a few new features: it completely rebranded its chatbot from Bard to Gemini, rolled out cutting-edge AI models, and sprinkled machine learning upgrades across Google Search, Workspace, Android 15, and even Chrome. In other words, if AI were a band, it just became the headliner at Google’s biggest show.

Meanwhile, the Search Generative Experience from 2023 got swapped out for AI Overviews, sending marketers into a mix of panic about potential traffic hits and excitement for new AI-driven opportunities. Below, we’ll dive into the marquee announcements and what they mean for digital marketing. From fresh search dynamics to novel ways of engaging users, here’s your cheat sheet to everything that went down—and why it matters.


What is Google I/O?


Google I/O is an annual developer bash, thrown in May, where the tech giant unveils new software updates and product roadmaps. Consider it as Google’s personal Comic-Con, but for developers and tech enthusiasts and minus the cosplay. This year's conference was hosted on May 14 and May 15 at the Shoreline Amphitheater in Mountain View, California.

The “I/O” refers to Input/Output, an homage to the computer-science principle of how information moves between a system and the outside world. But there’s also a playful twist: Google calls it “Innovation in the Open,” hinting at the collaborative, open-source vibes the company wants to promote. If you’re all about cutting-edge software and community-driven development, I/O is basically your Super Bowl.


The first Google I/O took place back in 2008, and apart from 2020 (cancelled for obvious reasons) it’s been an annual fixture where everyone from curious novices to seasoned devs gets the inside scoop on upcoming Android features, the latest AI tools, and under-the-hood changes to Google’s ecosystem. These days, it’s also a sneak peek at everything from new Pixel hardware to Earth-shattering announcements about AI-driven search features.

Google I/O revolves around product keynotes, technical sessions, and hands-on labs. The keynotes often steal the show, with Google’s top brass unveiling brand-new innovations or major updates to existing services. After that, you’ve got deep-dive sessions where engineers walk you through the nuts and bolts of new frameworks or APIs.


Google I/O 2024 Highlights

Every May, Google’s annual developer conference kicks off, and with Android powering the majority of the world’s smartphones, much of the mobile world keeps an eye on it, mostly to see what AI-infused magic Google’s cooking up next. Over the past few years, Google’s been pulling out all the stops to stay on top of the AI race. Below, we’ll break down the biggest takeaways from Google I/O 2024, including the long-awaited Pixel 8A launch, AI updates for search, and more.

1. Pixel 8A shakes up the mid-range market

  • Google unveiled the Pixel 8A, a budget-friendly smartphone featuring a sleek design and solid camera.

  • The launch timing clashed with OpenAI’s GPT-4o reveal, stirring competition in both AI and hardware.

  • Positioned as an affordable alternative for users who want premium features without paying flagship Apple prices.

2. AI Overviews: Next-level search results

  • Google introduced AI Overviews, a search feature that surfaces AI-generated summaries of top results.

  • Ideal for users looking to skim key facts without clicking around multiple links.

  • Raises concerns for marketers: Will traffic still come through to sites, or stay within Google’s AI snapshot?

3. Multi-step reasoning for complex queries

  • This feature breaks down complex or multi-layered questions into logical segments.

  • Provides comprehensive answers to users with detailed or multi-step needs (e.g., travel planning).

  • Content creators can optimize for specific steps in these “chain of thought” queries, opening new SEO angles.

4. Notebook LM: Your AI-powered lesson creator

  • Google’s move into AI-assisted note-taking, compiling documents into mini-lessons or guides.

  • Appeals to users drowning in random docs and articles—promises quick, coherent summaries.

  • Potential game-changer for repurposing content, from blog posts to user manuals.

5. Ask Photos: Searching your library with AI

  • Lets users query their Google Photos with natural language (“Show me my beach sunsets from 2019”).

  • Simplifies content organization for social media managers or photographers juggling large image libraries.

  • Could drastically reduce the time spent manually hunting down specific pictures or assets.

6. AI-Generated Gmail summaries

  • A new feature that provides a concise highlight reel of your inbox.

  • Designed to save time in inboxes overflowing with emails, newsletters, and spam.

  • Can tip the balance from “inbox meltdown” to “inbox zero” by swiftly surfacing important messages.

Below are the main updates and upgrades in more detail.


Gemini Advanced Updates

Google’s on-device mobile language model, Gemini Nano, now supports multimodality, a fancy way of saying it can handle text, audio, video, photos, websites, and even real-time phone camera feeds. CEO Sundar Pichai unveiled this upgrade, promising a future where any input can be transformed into any output. For instance, during a live demo, they scanned a shelf of books to capture the titles in a database so they’d be easy to find later. If you’re thinking this sounds like sci-fi, you’re not alone. The idea is to bring all your data, regardless of format, into one AI-friendly system.


Gemini 1.5 Flash

On top of that, Google introduced Gemini 1.5 Flash, a speedy AI model optimized for high-frequency tasks. It’s comparable in power to Gemini 1.5 Pro but is designed to spit out responses faster and at a lower cost. You’ll find this nimble model available through Google’s AI Studio and Vertex AI. And in case that’s not enough, Google also doubled the context window for Gemini 1.5 Pro from 1 million to 2 million tokens. Translation? It can handle more data at once, which supercharges its ability to reason, translate, and even code. If you are constantly dealing with huge data sets—maybe translating big texts or analyzing complex code—this extended context window could save you loads of time (and frustration). It’s like upgrading from a pocket calculator to a full-blown supercomputer, but without the sticker shock.
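
If you want to kick the tires on the faster model, here’s a minimal sketch using the google-generativeai Python SDK, assuming you’ve grabbed an API key from Google AI Studio (the key below is a placeholder, and model names or availability may have shifted since I/O):

```python
import google.generativeai as genai

# Placeholder key: create your own in Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

# "gemini-1.5-flash" targets the faster, cheaper model announced at I/O 2024.
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Summarise the key Google I/O 2024 announcements in three bullet points."
)
print(response.text)
```

Swap the model string for "gemini-1.5-pro" when a task genuinely needs the bigger context window; for high-frequency, latency-sensitive jobs, Flash is the cheaper default.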

‘Ask Photos’

A new feature, “Ask Photos,” shows off Gemini’s knack for searching your Google Photos library. Instead of manually scrolling through thousands of images to find that one dog picture, you can just ask Gemini. Sundar Pichai took it a step further by asking for his license plate number, and the AI promptly delivered both the plate number and the relevant photo. That’s a serious upgrade from typical image search, and it demonstrates how Gemini uses context clues like numbers and related objects to pinpoint what you need.

Google’s software engineers emphasize that Ask Photos doesn’t train on user data for ads or feed it back into other Gemini models. So, while the tech is undeniably powerful, Google promises it’s still respecting your boundaries.

Gemini 1.5 Pro in Workspace

Google also plans to incorporate Gemini 1.5 Pro into its Workspace suite—think Gmail, Docs, Slides, Sheets, and Drive. This AI assistant won’t just be a gimmick; it’ll pull info from your Drive, help draft emails using document snippets, and even remind you of tasks mentioned in your inbox. The catch? Only paying subscribers get to enjoy these new AI superpowers. Some lucky testers already have access, but Google says a broader rollout is planned for next month.

Workspace is the hub where most of us juggle documents, spreadsheets, and presentations. Having an AI buddy in there, one that can seamlessly piece together data from your files or email threads, could transform how we handle daily tasks. If you’re someone who toggles between multiple apps, preparing for calls or brainstorming ideas, Gemini’s direct integration might cut that time in half.


Search Upgrades

Google’s bread and butter is web search; it makes most of its profit from search ads, and with AI gaining steam, the company’s been keen to reinforce its dominance. Microsoft tried integrating AI into Bing, but let’s face it: Bing hasn’t exactly become everyone’s new best friend. Google, on the other hand, keeps plugging away, merging AI with Search to stay on top.

A big part of Google’s strategy involves Gemini—the AI model they’re weaving into more parts of the search experience. According to Google’s Head of Search, planning is both work and fun, and they want to reduce the “work” side. Translation: Gemini will handle the heavy lifting so you can spend more time discovering that perfect restaurant or planning your next dream vacation without going down a rabbit hole of endless tabs.

Search Generative Experience morphs into AI Overviews

If you’ve played around with Search Generative Experience in Search Labs, you’ve likely encountered AI Overviews. Essentially, Google’s AI insights now appear at the top of your search results, directly answering queries without making you scroll through pages of links. As of now, AI Overviews are rolling out to all users in the U.S.

Since launching, these AI-driven snippets have generated billions of responses, and Google’s data shows a boost in user engagement and satisfaction. The company also plans to let you tailor your AI overview preferences—whether you want more detail or a simpler, stripped-down explanation. That means you can have your cooking instructions spelled out step by step or get a breezy summary to save time.

Google sees these features being especially handy for topics like dining, recipes, entertainment, travel, and shopping. Instead of trudging through multiple pages looking for “inspiration,” you might get a curated summary on what’s hot in your area or the best reads for your next flight.

Google Chrome AI Assistant

Beyond Search, Gemini Nano is also heading into Google Chrome on desktops. Picture an on-device AI “assistant” that sits right in your browser and helps you whip up product reviews, social media posts, or any text-based content. No more copying and pasting between different apps—just type, let Gemini do its magic, and hit publish.

For many of us, browser time is work time: writing emails, drafting proposals, or responding to social media queries. With Gemini Nano embedded, you get shortcuts to speed up that repetitive writing. Think of it like a supercharged autocomplete on steroids, tailored to your style but with a lot more depth.


Android Updates

Google has replaced Google Assistant with Gemini as the default AI assistant on Android. Working in tandem with Android TalkBack, Gemini’s multimodal capabilities—like voice, text, and even image analysis—promise a richer, more “smartphone-on-steroids” experience. The best part? Deep integration with core Android and Play Store apps. Right now, it plays nicely with YouTube, Google, Gmail, and Messages, but expect more rollouts soon. If you’re the kind of person who multitasks like a champ, Gemini might just feel like a personal sidekick handling your day-to-day queries.

Circle to Search

Ever wished you could instantly look up a snippet of text or an image on your phone without digging through an app’s share menu? Circle to Search lets you literally circle whatever’s on your screen—be it an Instagram image, a portion of text in an ebook, or part of a video—and then performs a Google search for related info. Picture this: students can scribble a circle around a math problem in a PDF, and boom, Gemini fetches relevant explanations or solutions. It’s like having a private tutor who shows up whenever you draw a ring around something baffling.

AI-Powered scam detection

Ever get a weird call about transferring money to a mystery account? Google’s new scam detection feature uses on-device AI to sniff out suspicious language in real time. It eavesdrops on the conversation—without sending audio to the cloud—just enough to catch phrases like “wire funds” or “urgent account transfer.” If it smells a rat, it’ll interrupt with an on-screen alert telling you to hang up. It’s a small comfort in a world teeming with robo-calls and phishing attempts. Because the analysis happens on your phone, your private convos aren’t whisked away to Google’s servers. This design not only helps you dodge scammers, but also keeps your data where it belongs—on your device.

Project Astra

Project Astra is Google’s next big swing at a holistic AI assistant, aiming for a level of proactivity that goes beyond even Gemini’s capabilities. If Gemini is your personal sidekick, Astra is like your all-knowing companion that:

  • Processes visual info: Launch your phone camera and point it around to identify objects or spaces.

  • Knows where your stuff is: Think “smart shelf” capabilities for your entire home or office.

  • Does tasks on your behalf: Whether it’s scheduling a grocery delivery or sorting emails, Astra is designed to take over routine tasks.

Google envisions Astra as a conversational agent that not only talks but also acts autonomously, bridging the gap between voice assistant and personal AI manager. If this pans out, you might soon be telling your phone to not just book reservations, but also buy groceries, reorder laundry detergent, and maybe even arrange your living room furniture—assuming you have a robot arm for that.


Veo (Text-to-video generator)

At Google I/O 2024, Google upped its AI game by unveiling Veo, a text-to-video model that can crank out high-quality, 1080p videos—over a minute in length. Think of it like a fusion of Google’s earlier work with Lumiere and Imagen Video, but with more advanced capabilities. Veo doesn’t just spit out generic animations; it actually understands natural language and can interpret cinematic cues such as “time-lapse” to produce visuals that feel closer to a genuine short film. It’s basically Google’s shot at turning your prompt into a fully fleshed-out video sequence, minus the need for a film crew.

Text-to-video has been on the radar for a while, but achieving lengthy, high-quality clips was always tricky. With Veo promising more than a minute of 1080p footage, Google is signalling that AI-generated video might soon rival amateur or even professional videography—depending on the user’s creativity.

Private preview and waiting list

Veo is rolling out in a private preview for a select group of creators via VideoFX, Google’s dedicated suite of generative video tools. If you’re itching to try it out, there’s a waiting list you can join. Given how new and specialized text-to-video is, Google wants to test the waters with a smaller crowd first. Over time, they’ll likely expand access, letting more users experiment with AI-driven storytelling. If you thought text-to-image was cool, text-to-video raises the bar. But since it’s still early days, expect some quirks. A limited release will help Google iron out any weird visual artifacts or off-topic scenes Veo might conjure up.

Competition and context

Veo emerges as direct competition to OpenAI’s text-to-video model, Sora—and by extension, any AI tool dabbling in creative generation. Both aim to turn prompts into full-fledged video sequences, and Google’s consistent push into creative AI tools underscores its ambition to stay on top in an increasingly crowded field of generative models. For content creators, this battle between AI giants spells more powerful creation tools, whether you lean on AI for social media clips or plan to craft entire mini-documentaries. The real winner here might be the everyday user who gets to experiment with increasingly sophisticated AI at their fingertips.


Other AI innovations at Google I/O 2024

Google also presented a range of other AI enhancements:

Imagen 3

Imagen 3 is the newest version of Google’s text-to-image generator, delivering higher-quality images with fewer digital “glitches.” If you’ve seen models produce strange artifacts or jumbled facial features, Imagen 3 is meant to clean that up significantly. Better image quality means fewer headaches for designers or marketers who rely on these AI visuals. Think of crisp product mockups or hyper-realistic ad images, all whipped up from a single text prompt.

VideoFX

VideoFX sits on top of Veo and acts like a toolkit for generative video tasks. That might include transitions, scene changes, or layering effects that would usually require manual editing in a separate application. If you’re piecing together a short promo video, you could potentially skip the usual editing software. VideoFX might handle scene transitions and color grading for you, freeing you up to focus on the narrative or creative side.

ImageFX

ImageFX is an enhanced, high-resolution image generator. While Imagen 3 focuses on improved text-to-image generation, ImageFX zeroes in on resolution and clarity, minimizing the artifacts that often plague AI-based graphics. If you’re tired of pixelated edges or inconsistent lighting in AI-produced images, ImageFX might be your saving grace. Perfect for larger prints, billboards, or HD marketing materials.

MusicFX DJ Mode

MusicFX DJ Mode is Google’s foray into AI-generated music loops and samples. You feed it a text prompt—maybe the vibe you’re going for—and it spits out loops that fit your description. Musicians, podcasters, and content creators can easily generate background tracks or jingles without hiring a composer. Sure, it won’t replace top-tier production studios just yet, but it’s a slick option for everyday projects or quick creative inspiration.


What should marketers be doing now?

Google I/O 2024 dropped a series of AI and search updates that have some marketers freaking out about the “end of SEO.” Relax—it’s not the apocalypse. Below are a few practical steps to help you pivot and keep your digital strategy robust in the face of these new Google enhancements.

1. Monitor changes in SERPs and traffic

First, keep a close eye on your search engine results pages (SERPs) and site analytics to see if AI Overviews or any other tweaks are nudging your organic traffic up or down. It’s not just about raw clicks, either—look for any change in user behavior, like longer on-page times or higher bounce rates.
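
If you want something more systematic than eyeballing dashboards, a quick script over a performance export can flag shifts. Here’s a minimal sketch in Python, assuming a date-level CSV export with date, clicks, and impressions columns (the file name and column names are hypothetical; adjust them to match whatever your analytics or Search Console export actually produces):

```python
import csv
from collections import defaultdict
from datetime import date

# AI Overviews began rolling out to US users around I/O 2024 (May 14).
ROLLOUT = date(2024, 5, 14)

totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
with open("search_performance_export.csv", newline="") as f:  # hypothetical file
    for row in csv.DictReader(f):
        period = "after" if date.fromisoformat(row["date"]) >= ROLLOUT else "before"
        totals[period]["clicks"] += int(row["clicks"])
        totals[period]["impressions"] += int(row["impressions"])

for period in ("before", "after"):
    t = totals[period]
    ctr = t["clicks"] / t["impressions"] if t["impressions"] else 0.0
    print(f"{period}: clicks={t['clicks']}, impressions={t['impressions']}, CTR={ctr:.2%}")
```

A falling click-through rate with steady impressions is the classic sign that answers are being consumed inside the results page rather than on your site.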

2. Ensure your focus keywords trigger AI Overviews

Check if your main focus keywords are generating AI Overviews in Google’s new search layout. Then test if your brand or content is getting mentioned—if it’s overlooked, your SEO might need a fresh angle or more authority signals. Also, confirm the AI is accurately pulling info about your product or service. No one wants a half-baked summary misrepresenting their brand.
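
At small scale a spreadsheet works, but if you track dozens of keywords it’s worth scripting the check. Below is a minimal sketch, assuming you’ve captured the AI Overview text shown for each focus keyword (manually or via a rank-tracking tool); the keywords, brand name, and snippets are purely hypothetical:

```python
# Hypothetical data: AI Overview text captured for each focus keyword.
# None means no AI Overview was shown for that query.
overviews = {
    "google io 2024 recap": "Google announced Gemini 1.5 Flash, Veo, and ... (per ExampleBrand)",
    "ai overviews seo impact": "Marketers are watching organic traffic closely ...",
    "text to video ai tools": None,
}

BRAND = "ExampleBrand"  # hypothetical brand name to look for

for keyword, text in overviews.items():
    if text is None:
        status = "no overview"
    elif BRAND.lower() in text.lower():
        status = "brand mentioned"
    else:
        status = "brand missing"
    print(f"{status:15} | {keyword}")
```

Queries that show an overview but never mention you are the ones that need stronger authority signals or a sharper content angle.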

3. Optimise for mentions and backlinks

As always, backlinks from legit, high-authority sites boost your credibility. This clout can influence whether Google’s AI Overviews cite your content. If you’re struggling to get traction, consider a PR push or a guest post strategy to get your name (and URL) out there. The more relevant your content appears, the higher your odds of making it into AI’s top picks.

4. Maintain a technically sound and user-friendly site

Don’t ignore the basics: fast load times, clear site structure, and authoritative pages still matter—a lot. If your site is sluggish or riddled with broken links, no level of AI synergy will save you. Remember, user experience translates directly into how well your pages rank and how likely they are to appear in AI-generated results.
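
A lightweight spot check doesn’t need a full crawler. Here’s a minimal sketch using only Python’s standard library that pings a handful of key pages, flags broken ones, and times the responses (the URLs are placeholders; swap in your own, and treat the numbers as a rough smoke test rather than a proper performance audit):

```python
import time
import urllib.request
from urllib.error import HTTPError, URLError

# Placeholder URLs: replace with the pages you actually care about.
PAGES = [
    "https://example.com/",
    "https://example.com/blog/google-io-2024/",
]

for url in PAGES:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except HTTPError as err:   # page exists but returned an error code
        status = err.code
    except URLError as err:    # DNS failure, timeout, connection refused, etc.
        status = f"error ({err.reason})"
    elapsed = time.perf_counter() - start
    print(f"{url} -> {status} in {elapsed:.2f}s")
```

Anything returning a 4xx/5xx or taking several seconds to respond is worth fixing before you worry about AI-era optimisation tricks.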

5. Focus on detailed, well-researched content

Generic or shallow content is a no-go. With multi-step reasoning, Google’s AI now rewards in-depth, well-structured articles that address user intent thoroughly. If your competitor is serving a “barely-there” blog post, you can outshine them by providing real substance, including data or expert quotes.

6. Stay informed about AI Overviews and updates

Google doesn’t exactly send out engraved invitations for every algorithm tweak, so you’ll want to stay in the loop. Join relevant forums, catch webinars, or bookmark industry blogs. If you’re not plugged in, you risk reacting too late when a new update changes your traffic overnight.


Frequently Asked Questions

How can I stay updated on new developer tools and frameworks introduced at Google I/O?

Google often drops project roadmaps in dedicated sessions or blog posts after the conference. Timelines vary, but keep tabs on GitHub repos for code releases and watch the Google Developers Blog. Occasionally, new libraries and frameworks roll out faster than you’d expect.

How does Google I/O address privacy concerns and data security?

Google usually reaffirms its commitment to user privacy during I/O, revealing improvements to data encryption and new transparency dashboards. Expect expansions of the Privacy Sandbox for Android and the web, plus fresh developer policies that may tighten how apps handle personal info across ecosystems.

Can I participate in Google I/O remotely?

Yes: keynotes and most sessions are livestreamed free of charge, and virtual Q&A threads and community forums usually pop up alongside the official streams. Joining those discussion threads or Slack groups keeps you connected. Sometimes, organizers offer interactive coding challenges or mini hackathons online, so you can still get that collaborative buzz even from afar.

Final Thoughts on Google I/O

This year’s Google I/O signalled a transformative moment for search, AI, and SEO practices, weaving Gemini and fresh algorithm tweaks into nearly every corner of the product line-up. Google’s intention is clear: enhance user journeys with AI-driven snippets and advanced content generation. For businesses and site owners, adaptability is everything. Whether that means sprinkling AI-friendly cues into your pages, upgrading your technical SEO, or just keeping tabs on future improvements, the lesson is the same: stay alert and be ready for the next wave of change.

