Researchers successfully develop an AI model that can translate language into physical motion
AI researchers from Carnegie Mellon University in Pennsylvania, United States, recently developed an AI model capable of translating language (text or voice) into physical motion and gestures with relatively high accuracy.
The model, named Joint Language-to-Pose (JL2P), is described as a method that effectively combines natural language with 3D pose simulation, and it is expected to see practical applications in the near future.
This AI model is named Joint Language-to-Pose (JL2P)
JL2P's ability to analyze and simulate poses and gestures in three dimensions is trained end to end through a curriculum-style approach, a powerful and effective training method in which learning is broken down into a sequence of individual stages: the AI model must complete short, simple tasks before being allowed to move on to more complex goals.
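To make the end-to-end idea concrete, here is a minimal sketch (in PyTorch) of what a language-to-pose model of this general kind might look like: a sentence encoder whose output conditions a recurrent pose decoder. The layer types, sizes, and the 21-joint 3D pose representation are illustrative assumptions, not details taken from the JL2P paper.

```python
# A minimal sketch of an end-to-end language-to-pose model, for illustration only.
# The layer sizes, tokenization, and pose representation are assumptions and are
# not taken from the JL2P paper.
import torch
import torch.nn as nn

class LanguageToPose(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256, num_joints=21):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.pose_decoder = nn.GRU(num_joints * 3, hidden_dim, batch_first=True)
        self.to_pose = nn.Linear(hidden_dim, num_joints * 3)  # 3D coordinates per joint

    def forward(self, token_ids, prev_poses):
        # Encode the whole sentence into a single hidden state.
        _, sentence_state = self.text_encoder(self.embed(token_ids))
        # Decode a pose sequence conditioned on that sentence state.
        out, _ = self.pose_decoder(prev_poses, sentence_state)
        return self.to_pose(out)

model = LanguageToPose()
tokens = torch.randint(0, 5000, (8, 10))   # batch of 8 sentences, 10 tokens each
poses = torch.zeros(8, 4, 21 * 3)          # 4 previous pose frames per sample
predicted = model(tokens, poses)           # -> (8, 4, 63) next-frame predictions
```

In a sketch like this, the whole pipeline from word embeddings to joint coordinates is differentiable, which is what makes end-to-end training possible.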
Currently, JL2P can only render its animations as rudimentary stick figures made up of simple lines, but its ability to simulate human-like movements from language is relatively accurate and intuitive. The team believes that models like JL2P could one day help robots perform real-world physical tasks the way humans do, or support the creation of animated characters for video games and movies.
JL2P's ability to simulate animation is limited to simple, crude images
The idea of developing an AI model that can translate language into physical motion is actually not new. Before Carnegie Mellon University introduced JL2P, Microsoft developed a model called ObjGAN that specializes in sketching images and storyboards from language captions. A Disney AI model is also widely known for its ability to use the words in a script to create a storyboard. Better known still is Nvidia's GauGAN model, which can turn doodles drawn with a trackpad or Microsoft Paint into aesthetically pleasing digital images.
Returning to JL2P, the model can now quite accurately simulate a range of simple to relatively complex movements, such as walking or running, playing musical instruments (such as guitar or violin), following directional instructions (left or right), and controlling speed (fast or slow).
JL2P can now accurately simulate a number of simple to relatively complex movements
'First, we optimize the model to predict two time steps conditioned on the complete sentence. This simple task helps the AI model learn to simulate very short pose sequences, such as leg movements when walking, hand movements when waving, or body posture when bending. Once JL2P has learned to simulate such gestures with high precision, we move on to the next stage in the curriculum, where the model is given twice as many time steps to predict at once,' the Carnegie Mellon University research team said.
Simulating the running motion of an ordinary person
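To illustrate the curriculum described in the quote above, the sketch below reuses the hypothetical LanguageToPose model from earlier: training starts by predicting only two pose frames per sentence and doubles the prediction horizon once the loss falls below a threshold. The loss threshold, optimizer settings, and training loop are assumptions made for illustration, not the CMU team's implementation.

```python
# A sketch of the curriculum idea described in the quote above: begin by predicting
# only 2 pose time steps per sentence, then double the prediction horizon once the
# model reaches a target accuracy. Thresholds and settings are illustrative.
import torch
import torch.nn.functional as F

def curriculum_train(model, sentences, pose_sequences, max_horizon=32,
                     loss_threshold=0.05, max_epochs=50):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    horizon = 2  # stage 1: very short sequences (a single stride, a wave, a bow)
    while horizon <= max_horizon:
        for _ in range(max_epochs):
            # Predict the first `horizon` frames conditioned on the full sentence.
            inputs = pose_sequences[:, :horizon]
            predictions = model(sentences, inputs)
            loss = F.mse_loss(predictions, pose_sequences[:, 1:horizon + 1])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:
                break
        horizon *= 2  # next stage: twice as many time steps to predict

# Hypothetical usage with the earlier LanguageToPose sketch and random data:
# curriculum_train(LanguageToPose(), torch.randint(0, 5000, (8, 10)),
#                  torch.randn(8, 33, 63))
```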
Details of how JL2P works, along with sample outputs, were first presented in a scientific paper published July 2 on arXiv.org, and are expected to be presented by the paper's author, Chaitanya Ahuja, a researcher at CMU's Language Technologies Institute, on September 19 at the International Conference on 3D Vision taking place in Quebec, Canada.
The team asserts that JL2P predicts poses and physical movements 9% more accurately than a 'state-of-the-art' AI model developed by SRI International's AI experts in 2018.
JL2P simulates the action of a person pushing up on their hands to stand up
Output produced by JL2P after training on the KIT Motion-Language Dataset.
JL2P simulates jumping over obstacles and running
First introduced in 2016 by the High Performance Humanoid Technologies lab in Germany, this dataset pairs human motion with natural language descriptions, mapping 11 hours of recorded human motion to more than 6,200 English sentences, each about 8 words long.
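As a rough illustration of how one motion-language pair from such a dataset could be prepared for training, the sketch below pairs a short English description with a tensor of joint positions and converts the sentence into padded token IDs. The field names, tokenizer, and tiny vocabulary are hypothetical; the real KIT dataset stores motions in its own file format with richer metadata.

```python
# A minimal sketch of how a motion-language pair from a dataset like the
# KIT Motion-Language Dataset could be represented for training. Field names
# and the whitespace tokenizer are illustrative assumptions.
from dataclasses import dataclass
import torch

@dataclass
class MotionLanguagePair:
    description: str          # e.g. "A person walks forward and waves."
    poses: torch.Tensor       # (num_frames, num_joints * 3) joint positions

def to_training_example(pair, vocab, max_tokens=8):
    # Map each word to an index; the article notes sentences average ~8 words.
    token_ids = [vocab.get(w, 0) for w in pair.description.lower().split()[:max_tokens]]
    token_ids += [0] * (max_tokens - len(token_ids))   # pad short sentences
    return torch.tensor(token_ids), pair.poses

# Hypothetical usage with a tiny vocabulary and random motion data.
vocab = {"a": 1, "person": 2, "walks": 3, "forward": 4}
pair = MotionLanguagePair("A person walks forward", torch.randn(120, 63))
tokens, poses = to_training_example(pair, vocab)
```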