Google Flow Gets Built-In Image Generation and Doodle-Based Editing

Google announced a major update to Flow, its AI-powered video creation and editing tool. The redesigned interface brings image generation front and center, adds new editing controls, and consolidates capabilities that were previously scattered across separate Google Labs experiments into a single workspace.
Image Generation Moves Into Flow
The headline change is the integration of image generation directly into Flow. The best capabilities from Google's Whisk and ImageFX image generation experiments are now built into Flow, so users can generate, edit, and animate content without switching between tools. This consolidation matters because creative workflows lose momentum every time a user has to export from one tool and import into another.
The image generation is powered by Nano Banana Pro, Google's newest image model. It brings professional-grade controls including adjustable depth of field, lighting, and color grading. These are features typically found in dedicated photo editing software, not AI generation tools.
Doodle-Based Video Editing
One of the more interesting additions is a doodling capability that lets creators draw edits directly onto video frames. Instead of describing a change in text and hoping the model interprets the prompt correctly, users can sketch what they want on the frame itself. This is a practical solution to one of the persistent frustrations with prompt-based editing: spatial precision.
Telling a model "move the object slightly to the left" is ambiguous. Drawing exactly where you want it is not. This kind of direct manipulation interface bridges the gap between the flexibility of AI generation and the precision of traditional editing tools.
Audio Across All Features
Flow now supports audio across all of its creation and editing features. Combined with Veo 3.1, Google's latest video generation model, this means richer audio output, more narrative control, and enhanced realism in generated video. The audio integration follows a broader industry trend where native audio-video generation is becoming standard rather than optional.
Scale and Adoption
Google shared that since Flow launched, users have created over 1.5 billion images and videos for creative projects spanning films, music videos, and product campaigns. That adoption number is significant. It suggests Flow has moved beyond experimentation into active creative production for a large user base.
Flow is also now available as an additional Google service for Workspace customers, which opens it up to enterprise and team use cases beyond individual creators.
The Unified Workspace Trend
This update reflects a pattern across AI creative tools: consolidation. The early phase of AI generation produced many specialized tools, each handling one modality or one step in the creative process. The current phase is about combining those capabilities into unified workspaces where a creator can go from concept to finished output without leaving the tool.
Google is well positioned for this because it controls the underlying models (Veo for video, Nano Banana for images, Lyria for audio) and can integrate them tightly. The result is a tool where generating an image, animating it into video, and adding a soundtrack are steps in the same workflow rather than separate tasks requiring separate applications.
What This Signals
The addition of professional-grade controls like depth of field and color grading to an AI generation tool is worth noting. It suggests that AI creative tools are moving past the "generate and hope" phase into a more controllable, production-ready phase. The doodle editing feature pushes in the same direction: giving creators precise control over AI output rather than leaving everything to prompt interpretation.
For anyone working in video production, the message is clear. The tools are converging. Image generation, video generation, audio generation, and editing are collapsing into single platforms. The question is no longer whether AI can produce creative content, but how much control creators have over the result. Updates like this one push meaningfully toward more control.


