Alibaba EMO Video Model: Competition in the field of artificial intelligence continues to intensify. To compete with OpenAI's Sora model, the Chinese company Alibaba has introduced a new video AI model called EMO. Alibaba's Institute for Intelligent Computing recently introduced the model, which specializes in creating audio-driven portrait videos.
Alibaba's EMO video model is comparable to OpenAI's Sora. EMO stands for Emote Portrait Alive, and it creates a short video from a single photo and an audio file. The video can be up to 1 minute 30 seconds long, during which the portrait can sing, speak, and move its head.
For example, with the help of EMO, Leonardo da Vinci's famous painting Mona Lisa can talk. Not only that, the Mona Lisa can also sing songs and look around.
How does EMO work?
One of the most impressive aspects of EMO is that it can change the facial expressions of the person in the photo. In addition, the subject's lips can be synced with real audio, making the result look like genuine footage. This works with photographs, paintings, and even anime-style cartoons.
OpenAI's Sora model, by comparison, generates an entire video from a text prompt and can produce HD output. However, it is not yet available to everyone; the company has released it only to selected users working in research. OpenAI CEO Sam Altman shared a post on the social media platform X in which the company demonstrated how the model works through a video post.