Wan2.5-Preview arrives with a groundbreaking multimodal foundation that unifies understanding and generation across text, imagery, audio, and video. Its natively multimodal design, trained jointly across diverse data sources, enables tighter cross-modal alignment, more reliable instruction following, and highly coherent audio-visual output. Through reinforcement learning from human feedback, it continually adapts to aesthetic preferences, producing more natural visuals and fluid motion dynamics.

Wan2.5 supports cinematic 1080p video generation with synchronized audio, including multi-speaker dialogue, layered sound effects, and dynamic compositions. Creators can control outputs with text prompts, reference images, or audio cues (sketched in the example below), unlocking a new range of storytelling and production workflows.

For still imagery, the model delivers photorealism, artistic versatility, and strong typography, along with professional-grade chart and design rendering. Its editing tools let users make conversational adjustments, merge concepts, recolor products, modify materials, and refine details with pixel-level precision.

This preview marks a major leap toward fully integrated multimodal creativity powered by AI.
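To make that control surface concrete, here is a minimal sketch of what a text-plus-reference-image video request to a hosted service like this might look like. The endpoint URL, model identifier, payload fields (`resolution`, `audio`, `reference_image`), and response format are all illustrative assumptions for this sketch, not the published Wan2.5 API.

```python
# Hypothetical sketch of a text + reference-image video request.
# Endpoint, credentials, field names, and response shape are assumptions,
# NOT the documented Wan2.5 interface.
import base64
import requests

API_URL = "https://example.com/v1/video/generate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                           # placeholder credential


def generate_video(prompt: str, reference_image_path: str | None = None) -> bytes:
    """Request a 1080p clip with synchronized audio from a hosted model.

    The prompt drives the scene; an optional reference image conditions
    composition and style. All field names below are assumed for illustration.
    """
    payload = {
        "model": "wan2.5-preview",   # assumed model identifier
        "prompt": prompt,
        "resolution": "1920x1080",   # matches the announced 1080p output
        "audio": True,               # ask for a synchronized soundtrack
    }
    if reference_image_path:
        # Inline the reference image as base64, a common pattern for JSON APIs.
        with open(reference_image_path, "rb") as f:
            payload["reference_image"] = base64.b64encode(f.read()).decode("ascii")

    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=600,  # video generation can take minutes
    )
    resp.raise_for_status()
    return resp.content  # assumed: raw MP4 bytes


if __name__ == "__main__":
    video = generate_video(
        "Two speakers debate at a rain-soaked night market, "
        "layered crowd and rain sound effects, slow dolly-in."
    )
    with open("clip.mp4", "wb") as f:
        f.write(video)
```

The same request shape would extend naturally to the other announced controls, for example an `audio_cue` field carrying a voice track to drive lip-synced dialogue; that field, too, is an assumption of this sketch.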