Live Portrait animates static images using a reference driving video through an implicit-keypoint-based framework, bringing a portrait to life with realistic expressions and movements. It identifies keypoints on the face (eyes, nose, mouth) and manipulates them to create expressions and movements.
If you're looking for an API, code examples are available in your preferred programming language; the Python example below shows a complete request.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/live-portrait"

# Request payload
data = {
    "face_image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-input.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "driving_video": "https://segmind-sd-models.s3.amazonaws.com/display_images/liveportrait-video.mp4",
    "live_portrait_dsize": 512,
    "live_portrait_scale": 2.3,
    "video_frame_load_cap": 128,
    "live_portrait_lip_zero": True,
    "live_portrait_relative": True,
    "live_portrait_vx_ratio": 0,
    "live_portrait_vy_ratio": -0.12,
    "live_portrait_stitching": True,
    "video_select_every_n_frames": 1,
    "live_portrait_eye_retargeting": False,
    "live_portrait_lip_retargeting": False,
    "live_portrait_lip_retargeting_multiplier": 1,
    "live_portrait_eyes_retargeting_multiplier": 1
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated video
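Rather than printing the raw response bytes, you will usually want to write them to a video file. A minimal sketch, assuming a successful response whose body is the video itself (the function name and output path below are illustrative, not part of the Segmind client):

```python
def save_response_video(content: bytes, path: str) -> int:
    """Write raw video bytes (e.g. response.content) to disk.

    Returns the number of bytes written.
    """
    with open(path, "wb") as f:
        return f.write(content)

# After a successful call:
# save_response_video(response.content, "live_portrait_output.mp4")
```

Checking `response.status_code == 200` before saving is also a good idea, since error responses carry a JSON body rather than video data.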
Request parameters:
- face_image: An image with a face (base64-encoded)
- driving_video: A video to drive the animation
- live_portrait_dsize: Size of the output image (min: 64, max: 2048)
- live_portrait_scale: Scaling factor for the face (min: 1, max: 4)
- video_frame_load_cap: The maximum number of frames to load from the driving video; set to 0 to use all frames
- live_portrait_lip_zero: Enable lip zero
- live_portrait_relative: Use relative positioning
- live_portrait_vx_ratio: Horizontal shift ratio (min: -1, max: 1)
- live_portrait_vy_ratio: Vertical shift ratio (min: -1, max: 1)
- live_portrait_stitching: Enable stitching
- video_select_every_n_frames: Select every nth frame from the driving video; set to 1 to use all frames
- live_portrait_eye_retargeting: Enable eye retargeting
- live_portrait_lip_retargeting: Enable lip retargeting
- live_portrait_lip_retargeting_multiplier: Multiplier for lip retargeting (min: 0.01, max: 10)
- live_portrait_eyes_retargeting_multiplier: Multiplier for eye retargeting (min: 0.01, max: 10)
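The min/max bounds above can be checked client-side before sending a request, which saves a round trip on obviously invalid input. A minimal sketch; the bounds table and helper below are illustrative, not part of an official Segmind client:

```python
# Numeric parameter bounds mirrored from the parameter list above
# (illustrative helper, not an official client).
PARAM_BOUNDS = {
    "live_portrait_dsize": (64, 2048),
    "live_portrait_scale": (1, 4),
    "live_portrait_vx_ratio": (-1, 1),
    "live_portrait_vy_ratio": (-1, 1),
    "live_portrait_lip_retargeting_multiplier": (0.01, 10),
    "live_portrait_eyes_retargeting_multiplier": (0.01, 10),
}

def validate_payload(data: dict) -> list:
    """Return error messages for any out-of-range numeric parameters."""
    errors = []
    for name, (lo, hi) in PARAM_BOUNDS.items():
        if name in data and not (lo <= data[name] <= hi):
            errors.append(f"{name}={data[name]} outside [{lo}, {hi}]")
    return errors
```

Running `validate_payload(data)` on the example payload above returns an empty list, since every value is in range.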
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
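The header check described above can be done directly on the response object. A minimal sketch, assuming the header carries an integer credit count (returning None when the header is absent is an assumption for robustness, not documented behavior):

```python
def remaining_credits(headers):
    """Read the x-remaining-credits response header as an int.

    Returns None if the header is missing (assumed fallback).
    """
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else None

# After a call: remaining_credits(response.headers)
```

Note that `requests` exposes `response.headers` as a case-insensitive dict, so the lookup works regardless of header casing on the wire.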
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
Live Portrait is an advanced AI-driven portrait animation framework. Unlike mainstream diffusion-based methods, Live Portrait leverages an implicit-keypoint-based framework for creating lifelike animations from single source images.
Efficient Animation: LivePortrait synthesizes lifelike videos from a single source image, using it as an appearance reference. The motion (facial expressions and head pose) is derived from a driving video, audio, text, or generation.
Stitching and Retargeting: Instead of following traditional diffusion-based approaches, LivePortrait explores and extends the potential of implicit-keypoint-based techniques. This approach effectively balances realism and expressiveness.
Bring life to historical figures: Imagine educational content or documentaries featuring animated portraits of historical figures with realistic expressions. Live Portrait allows you to create engaging narratives by adding subtle movements and emotions to portraits.
Create engaging social media content: Stand out from the crowd with captivating animated profile pictures or eye-catching social media posts featuring your own portrait brought to life. Live Portrait lets you personalize your content and grab attention with dynamic visuals.
Enhance e-learning experiences: Make educational content more interactive and engaging for learners of all ages. Animate portraits of educators or characters to explain concepts in a lively and memorable way.
Personalize avatars and characters: Design unique and expressive avatars for games, apps, or virtual reality experiences. Live Portrait allows you to create avatars with realistic facial movements that enhance user interaction.
Best-in-class clothing virtual try on in the wild
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.