Meet Text2Reward: A Data-Free Framework that Automates the Generation of Dense Reward Functions Based on Large Language Models



Reward shaping, which seeks to design reward functions that more effectively guide an agent toward desirable behaviors, remains a long-standing challenge in reinforcement learning (RL). It is typically done by hand, crafting rewards from expert intuition and heuristics, a time-consuming process that requires expertise and can yield sub-optimal results. Inverse reinforcement learning (IRL) and preference learning offer alternatives, in which a reward model is learned from human demonstrations or preference-based feedback. Both approaches, however, still require substantial labor or data collection, and their neural-network-based reward models are hard to interpret and struggle to generalize beyond the domains of their training data.
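To make the distinction concrete, here is a minimal, hypothetical illustration of a sparse versus a shaped (dense) reward for a goal-reaching task; the function names and the 0.05 m success threshold are illustrative and not taken from the paper.

```python
import numpy as np

def sparse_reward(obj_pos: np.ndarray, goal_pos: np.ndarray, threshold: float = 0.05) -> float:
    # Non-zero only on success: the agent receives no learning signal until it
    # happens to solve the task, which makes exploration difficult.
    return 1.0 if np.linalg.norm(obj_pos - goal_pos) < threshold else 0.0

def dense_reward(obj_pos: np.ndarray, goal_pos: np.ndarray) -> float:
    # A shaped signal that grows as the object nears the goal, guiding the
    # agent at every step rather than only at the end of the episode.
    return -float(np.linalg.norm(obj_pos - goal_pos))
```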

Figure 1 illustrates the three ingredients of TEXT2REWARD. An expert abstraction provides a hierarchy of Pythonic classes representing the environment. The goal is stated as a user instruction in everyday language. Through user feedback, users can summarize failure modes or their preferences, and this feedback is used to refine the reward code.
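The sketch below shows what such a Pythonic environment abstraction might look like for the chair-pushing example; the class and attribute names are stand-ins chosen for illustration, not the exact hierarchy used in TEXT2REWARD.

```python
from dataclasses import dataclass
import numpy as np

# Illustrative stand-in for the kind of Pythonic abstraction the LLM is
# prompted with; the actual class hierarchy in TEXT2REWARD may differ.
@dataclass
class RigidObject:
    position: np.ndarray      # (3,) world-frame position of the object
    point_cloud: np.ndarray   # (N, 3) points sampled from the object surface

@dataclass
class RobotArm:
    ee_position: np.ndarray   # (3,) end-effector position
    gripper_openness: float   # 0.0 (closed) to 1.0 (open)

@dataclass
class ChairPushEnv:
    robot: RobotArm
    chair: RigidObject
    target_position: np.ndarray  # (3,) marked goal position for the chair
```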

Researchers from The University of Hong Kong, Nanjing University, Carnegie Mellon University, Microsoft Research, and the University of Waterloo introduce TEXT2REWARD, a framework for generating dense reward code from natural-language goal descriptions. Given an RL objective (for example, “push the chair to the marked position”) and a compact, Pythonic description of the environment (Figure 1, left), TEXT2REWARD uses large language models (LLMs) to produce dense reward code (Figure 1, center). An RL algorithm such as PPO or SAC then uses this dense reward code to train a policy (Figure 1, right). In contrast to inverse RL, TEXT2REWARD is data-free and produces symbolic rewards that are interpretable. Unlike recent work that used LLMs to write sparse reward code (where the reward is non-zero only when the episode ends) against hand-designed APIs, the authors’ free-form dense reward code covers a wider range of tasks and can exploit established coding libraries (such as NumPy operations over point clouds and agent positions).
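The following is a hypothetical example of the kind of dense reward code such a pipeline might generate for “push the chair to the marked position”, written against the illustrative abstraction above; the staging, weights, and function name are assumptions for the sake of the example, not the paper’s actual output.

```python
import numpy as np

def compute_dense_reward(env: "ChairPushEnv") -> float:
    # Hypothetical generated reward code; weights and terms are illustrative.
    # Stage 1: encourage the end effector to approach the chair surface,
    # using a NumPy operation over the chair's point cloud.
    dists = np.linalg.norm(env.chair.point_cloud - env.robot.ee_position, axis=1)
    reaching_reward = -float(dists.min())
    # Stage 2: encourage the chair to move toward the marked target position.
    pushing_reward = -float(np.linalg.norm(env.chair.position - env.target_position))
    # Weighted sum of the shaped terms forms the dense reward.
    return 0.3 * reaching_reward + 1.0 * pushing_reward
```

An off-the-shelf algorithm such as PPO or SAC can then be trained against a function of this form, exactly as it would be against a hand-written dense reward.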

Finally, given the sensitivity of RL training and the ambiguity of natural language, the learned policy may fail to achieve the goal, or achieve it in unintended ways. TEXT2REWARD addresses this by executing the learned policy, collecting user feedback, and refining the reward code accordingly. The authors conducted systematic experiments on two robotic manipulation benchmarks, MANISKILL2 and METAWORLD, and on two locomotion environments from MUJOCO. On 13 of 17 manipulation tasks, policies trained with the generated reward code achieve success rates and convergence speeds comparable to or better than those obtained with ground-truth reward code meticulously calibrated by human experts.
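A minimal sketch of this human-in-the-loop refinement loop is shown below; the three callables are placeholders for the generation, training, and feedback-collection steps and are not part of any actual TEXT2REWARD API.

```python
from typing import Any, Callable, Tuple

def refine_reward(
    instruction: str,
    env_abstraction: str,
    generate_reward_code: Callable[[str, str, str], str],  # LLM call (placeholder)
    train_policy: Callable[[str], Any],                     # e.g., PPO or SAC (placeholder)
    collect_feedback: Callable[[Any], str],                 # user watches rollouts (placeholder)
    max_rounds: int = 3,
) -> Tuple[str, Any]:
    # Sketch of the loop: generate reward code, train a policy with it, show
    # rollouts to the user, and fold their feedback into the next prompt.
    feedback, reward_code, policy = "", "", None
    for _ in range(max_rounds):
        reward_code = generate_reward_code(instruction, env_abstraction, feedback)
        policy = train_policy(reward_code)
        feedback = collect_feedback(policy)  # empty string means the user is satisfied
        if not feedback:
            break
    return reward_code, policy
```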

On locomotion, TEXT2REWARD learns six novel behaviors with success rates above 94%. The authors also show that policies trained in simulation can be deployed on a real Franka Panda robot. With human feedback, their approach can iteratively raise the success rate of a learned policy from 0% to nearly 100% and resolve task ambiguity in fewer than three rounds. In summary, the experiments show that TEXT2REWARD can produce interpretable and generalizable dense reward code, enabling a human-in-the-loop pipeline and broad coverage of RL tasks. The authors hope these results will stimulate further research at the interface of reinforcement learning and code generation.

Check out the Paper, Code, and Project. All credit for this research goes to the researchers on this project.


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT) Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.


