Motion Control is simple in practice: pick one character image, then drive its movement with a motion reference video (or a motion from a library). Because the character and the motion stay fixed, outputs are easy to compare across variants, especially when you're testing pacing, gestures, and facial performance for short-form ads.
For a broader Kling lineup, you can also explore Kling 2.6, KlingAI Avatar 2.0, Kling O1, Kling 2.5, and Kling AI.

Use the motion reference to lock actions and expressions, then use the prompt to steer everything around the character—background elements, scene props, extra movement in the environment, and the overall look. This split of responsibilities makes results feel less random: motion comes from the reference, while the prompt handles scene intent and visual tone.
Tip: keep the prompt focused on scene details (lighting, location, objects, atmosphere). Let the motion reference do the heavy lifting for performance.
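This division of labour can be sketched as a simple configuration. Purely illustrative: the field names (`character_image`, `motion_reference`, `prompt`) and file names are assumptions for clarity, not Kling's actual API.

```python
# Hypothetical job spec -- field names are illustrative, not Kling's real API.
motion_control_job = {
    # The character image fixes identity and framing.
    "character_image": "character_fullbody.png",
    # The motion reference drives actions and expressions (the performance).
    "motion_reference": "dance_loop.mp4",
    # The prompt steers everything around the character: lighting,
    # location, props, atmosphere -- scene intent, not performance.
    "prompt": "neon-lit rooftop at night, light rain, shallow depth of field",
}

# Keep performance cues (waving, pointing, dancing) out of the prompt;
# the motion reference already handles them.
assert "wave" not in motion_control_job["prompt"]
```

The point of the split: if a variant's gestures look wrong, swap the motion reference; if the scene looks wrong, edit only the prompt.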

Get clean, fully synchronised body movement from head to toe—great for dance loops, product demos, and character-led hooks. When your image framing matches the motion reference (full-body to full-body, half-body to half-body), timing looks tighter and the performance reads more “shot” than “generated.”
Best practice: use a motion reference with moderate speed and minimal displacement, and avoid cuts for a steadier result.
Hand gestures are where most motion videos fall apart—pointing, holding, waving, small object interactions. Kling Motion Control handles these micro-actions more reliably when the reference is clear and uninterrupted.
Tip: keep the character’s hands visible in the image reference, and choose a motion reference with stable framing (no camera shake) so the model can track finger movement and contact points cleanly.
Use Kling Motion Control to produce repeatable character performances—drive actions with a motion reference, steer scene details with prompts, and iterate faster on ad-ready variants.