
Generative AI taught a robot dog to scramble around a new environment


Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that data is scarce and expensive to collect. Digital simulations are a rapid, scalable way to teach them new skills, but robots often fail when they’re pulled out of virtual worlds and asked to do the same tasks in the real one.

Now there’s potentially a better option: a new system that uses generative AI models in conjunction with a physics simulator to develop virtual training grounds that more accurately mirror the physical world. In real-world tests, robots trained with this method achieved higher success rates than robots trained with more traditional techniques.

Researchers used the system, called LucidSim, to train a robot dog in parkour, getting it to scramble over a box and climb stairs despite never seeing any real-world data. The approach demonstrates how helpful generative AI could be when it comes to teaching robots to do challenging tasks. It also raises the possibility that we could ultimately train them in entirely virtual worlds. The research was presented at the Conference on Robot Learning (CoRL) last week.

“We’re in the middle of an industrial revolution for robotics,” says Ge Yang, a postdoctoral researcher at MIT CSAIL who worked on the project. “This is our attempt at understanding the impact of these [generative AI] models outside of their original intended purposes, with the hope that it will lead us to the next generation of tools and models.”

LucidSim uses a combination of generative AI models to create the visual training data. First, the researchers generated thousands of prompts for ChatGPT, getting it to produce descriptions of a range of environments representing the conditions the robot would encounter in the real world, including different types of weather, times of day, and lighting conditions. Examples included “an ancient alley lined with tea houses and small, quaint shops, each displaying traditional ornaments and calligraphy” and “the sun illuminates a somewhat unkempt lawn dotted with dry patches.”
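To make this first step concrete, here is a minimal sketch of how such bulk prompt generation might look using the standard OpenAI Python client. The meta-prompt wording, model name, and sampling settings are illustrative assumptions, not the researchers’ actual pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical meta-prompt; the researchers' actual prompts may differ.
META_PROMPT = (
    "In one vivid sentence, describe a scene a quadruped robot might walk "
    "through. Vary the weather, the time of day, and the lighting."
)

def generate_scene_descriptions(n: int = 5) -> list[str]:
    """Request n independent scene descriptions from the chat model."""
    descriptions = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": META_PROMPT}],
            temperature=1.0,  # higher temperature encourages varied scenes
        )
        descriptions.append(response.choices[0].message.content.strip())
    return descriptions

if __name__ == "__main__":
    for scene in generate_scene_descriptions():
        print(scene)
```

Run at scale, a loop like this yields the thousands of varied scene descriptions that feed the next stage of the pipeline.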

These descriptions were fed into a system that maps 3D geometry and physics data onto AI-generated images, creating short videos that trace the trajectory the robot will follow. The robot draws on this information to work out the height, width, and depth of the things it has to navigate, such as a box or a set of stairs.
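The authors’ full system stitches geometry, physics state, and generated imagery into short videos along the robot’s trajectory; the sketch below shows only the core geometry-to-image step, under the assumption that an off-the-shelf depth-conditioned diffusion model (here a ControlNet loaded via Hugging Face’s diffusers library; the model identifiers are illustrative) stands in for their component. The simulator’s depth map constrains where boxes and stairs appear, while the text prompt supplies the visual variety.

```python
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A depth-conditioned ControlNet keeps generated images consistent with the
# simulator's 3D geometry; the base model fills in appearance from the prompt.
# Requires a CUDA GPU. Model IDs are illustrative, not the authors' choice.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

def depth_to_frame(depth_map: np.ndarray, prompt: str) -> Image.Image:
    """Render one photorealistic training frame from a simulator depth map
    and a generated scene description."""
    # Normalize depth to 0-255 and stack to three channels, since the
    # ControlNet expects an RGB "hint" image.
    d = 255 * (depth_map - depth_map.min()) / (np.ptp(depth_map) + 1e-8)
    hint = Image.fromarray(np.stack([d.astype(np.uint8)] * 3, axis=-1))
    return pipe(prompt, image=hint, num_inference_steps=30).images[0]

# Example: a placeholder 512x512 depth map plus one scene description.
frame = depth_to_frame(
    np.random.rand(512, 512).astype(np.float32),
    "an ancient alley lined with tea houses and small, quaint shops",
)
frame.save("training_frame.png")
```

Repeating this for each prompt and each step along a trajectory would produce the kind of short, geometry-consistent videos the robot is trained on.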

The researchers tested LucidSim by instructing a four-legged robot equipped with a webcam to complete several tasks, including locating a traffic cone or a soccer ball, climbing over a box, and walking up and down stairs. The robot consistently performed better than when it ran a system trained on traditional simulations. In 20 trials locating the cone, the LucidSim-trained robot had a 100% success rate, compared with 70% for a system trained on standard simulations. Similarly, across another 20 trials, it reached the soccer ball 85% of the time, compared with just 35% for the other system.

Finally, when the robot was running LucidSim, it successfully completed all 10 stair-climbing trials, compared to just 50% for the other system.

From left to right: Phillip Isola, Ge Yang, and Alan Yu. Courtesy of MIT CSAIL.

These results are likely to improve further if LucidSim draws directly from sophisticated generative video models rather than a rigged-together combination of language, image, and physics models, says Phillip Isola, an associate professor at MIT who worked on the research.

The researchers’ approach to using generative AI is a novel one that will pave the way for more interesting new research, says Mahi Shafiullah, a PhD student at New York University who is using AI models to train robots and was not involved in the project.

“The more interesting direction I see personally is a mix of both real and realistic ‘imagined’ data that can help our current data-hungry methods scale quicker and better,” he says.

The ability to train a robot from scratch purely on AI-generated situations and scenarios is a significant achievement, and one that could extend beyond machines to more generalized AI agents, says Zafeirios Fountas, a senior research scientist at Huawei specializing in brain-inspired AI.

“The term robots here is used very generally; we’re talking about some sort of AI that interacts with the real world,” he says. “I can imagine this being used to control any sort of visual information, from robots and self-driving cars up to controlling your computer screen or smartphone.”

In terms of next steps, the authors are interested in trying to train a humanoid robot using wholly synthetic data, which they acknowledge is an ambitious goal, as bipedal robots are typically less stable than their four-legged counterparts. They’re also turning their attention to a new challenge: using LucidSim to train the kinds of robotic arms that work in factories and kitchens, a task that requires far more dexterity and physical understanding than running around a landscape.

“To actually pick up a cup of coffee and pour it is a very hard, open problem,” says Isola. “If we could take a simulation that’s been augmented with generative AI to create a lot of diversity and train a very robust agent that can operate in a cafe, I think that would be very cool.”


