News

January 18, 2024

A group of Ph.D. students seeks to improve the effectiveness of natural language

In an article for MIT News, Lauren Hinkel reports on four MIT Ph.D. students interning with the MIT-IBM Watson AI Lab: Athul Paul Jacob, Maohao Shen, Victor Butoi, and Andi Peng. The effectiveness of natural language as a means of communication hinges on the ability to comprehend words and their context, to trust that shared content is offered in good faith, to reason about the shared information, and to apply it to real-life scenarios. Each of the four is working to improve a different part of this process as it plays out in natural language models, with the goal of making AI systems more reliable and accurate for users.

Jacob's research targets the core of existing natural language models, using game theory to refine their outputs. He is driven by a dual purpose: to understand human behavior through the lens of multi-agent systems and language comprehension, and to use those insights to build better AI systems. His work is inspired by the board game Diplomacy, for which his team built a system capable of learning, predicting human behavior, and negotiating strategically to secure the most favorable outcome. Because the game requires players to cultivate trust and communicate effectively in language while competing against six other participants at once, it poses research challenges not encountered in games like poker and Go, which had been the focus of earlier neural-network research.

Working with his research mentors from the MIT Department of Electrical Engineering and Computer Science and the MIT-IBM Watson AI Lab, Jacob reformulated the problem of language generation as a two-player game. The team employed a "generator" model and a "discriminator" model to create a natural language system that both answers questions and evaluates the correctness of those answers. Correct answers earn the system points, encouraging more accurate and reliable responses, and a no-regret learning algorithm drives the process, helping to address the tendency of language models to produce untrustworthy hallucinations.
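To make the setup concrete, here is a minimal toy sketch of treating answer selection as a two-player coordination game solved with a no-regret update rule (multiplicative weights). The candidate answers, scores, and payoff rule below are invented for illustration; this is not the Lab's actual system or algorithm.

```python
import math

# Hypothetical inputs: everything below is made up for illustration.
candidates = ["Paris", "Lyon", "Marseille"]                   # candidate answers to "What is the capital of France?"
gen_score = {"Paris": 2.0, "Lyon": 0.5, "Marseille": 0.3}     # generator's raw preference for each answer
disc_score = {"Paris": 1.8, "Lyon": 0.4, "Marseille": 0.2}    # discriminator's "looks correct" score

def normalize(weights):
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Each player maintains a mixed strategy (a probability distribution) over the answers.
gen_policy = normalize({a: math.exp(s) for a, s in gen_score.items()})
disc_policy = normalize({a: math.exp(s) for a, s in disc_score.items()})

eta = 0.5  # step size for the multiplicative-weights update, a classic no-regret algorithm
for _ in range(200):
    # Coordination payoff: an answer is rewarded when both players place weight on it.
    payoff = {a: gen_policy[a] * disc_policy[a] for a in candidates}
    # No-regret update: exponentially up-weight answers with higher payoff.
    gen_policy = normalize({a: p * math.exp(eta * payoff[a]) for a, p in gen_policy.items()})
    disc_policy = normalize({a: p * math.exp(eta * payoff[a]) for a, p in disc_policy.items()})

# The answer both players converge on is the one the combined system would return.
print(max(candidates, key=lambda a: gen_policy[a] * disc_policy[a]))
```

Running the sketch, the two strategies concentrate on the answer they jointly favor, which is the kind of agreement the scoring scheme in the team's game is designed to encourage.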

Maohao Shen and his team aim to recalibrate language models whose confidence does not match their accuracy, using uncertainty quantification (UQ). By converting the free text generated by a language model into a multiple-choice classification task, they can assess whether the model is over- or under-confident. They then train an auxiliary model on ground-truth data to correct the language model's predictions, realigning its confidence with its actual accuracy.
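As a rough illustration of the idea, not the team's actual UQ method, the sketch below scores a few hypothetical multiple-choice questions, compares the model's average confidence with its observed accuracy, and fits a simple temperature-scaling correction as a crude stand-in for an auxiliary recalibration model. All logits and labels are fabricated.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# Hypothetical per-question logits over answer options, plus whether the model's
# top choice was actually correct (the data an auxiliary corrector would learn from).
calib_logits = [[3.1, 0.2, -1.0], [2.8, 2.6, 0.1], [1.5, 1.4, 1.3], [4.0, -0.5, -0.7]]
was_correct = [1, 0, 0, 1]

def avg_confidence(temperature):
    # Confidence = probability assigned to the model's chosen (top) option.
    return np.mean([softmax(l, temperature).max() for l in calib_logits])

# Pick the temperature whose average confidence best matches observed accuracy.
accuracy = np.mean(was_correct)
temperatures = np.linspace(0.5, 5.0, 46)
best_T = min(temperatures, key=lambda T: abs(avg_confidence(T) - accuracy))

print(f"observed accuracy: {accuracy:.2f}")
print(f"uncalibrated confidence: {avg_confidence(1.0):.2f}")
print(f"recalibrated confidence (T={best_T:.2f}): {avg_confidence(best_T):.2f}")
```

On this toy data the raw model is overconfident, and the fitted temperature pulls its reported confidence down toward its true accuracy, which is the realignment the team is after.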

Victor Butoi focuses on enabling vision-language models to reason better about what they observe and on creating prompts that unlock new learning capabilities. He highlights the importance of compositional reasoning in decision-making, especially in real-world applications. His team developed a model that improves a vision-language model's understanding of compositional relationships, such as spatial directions, using a low-rank adaptation technique and training on targeted datasets.
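The low-rank adaptation idea mentioned here can be pictured in a few lines of code. The sketch below freezes a pretrained weight matrix and learns only a small low-rank correction on top of it; the shapes, rank, and data are arbitrary and illustrative, not the team's model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 512, 512, 8

W = rng.normal(size=(d_out, d_in))                 # frozen pretrained weight matrix
A = rng.normal(scale=0.01, size=(rank, d_in))      # trainable low-rank factor
B = np.zeros((d_out, rank))                        # trainable low-rank factor (starts at zero)

def adapted_forward(x):
    """Apply the frozen weight plus the low-rank update to an input vector."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
print(adapted_forward(x).shape)   # (512,) -- only A and B would be trained

# The appeal: A and B hold (d_in + d_out) * rank parameters instead of d_in * d_out,
# so adapting a large model to a new compositional-reasoning dataset touches
# only a tiny fraction of its weights.
print(W.size, A.size + B.size)
```

Because only the small factors are updated, fine-tuning for a new kind of relationship, such as "left of" versus "right of", stays cheap relative to retraining the full model.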

Lastly, Andi Peng and her mentors are developing embodied AI models in a simulated environment called ThreeDWorld to assist individuals with physical limitations. Their work emphasizes designing AI systems and robots that operate in ways humans can understand, prioritizing human-like interaction and support. Together, the four researchers are pushing the boundaries of natural language processing and AI, aiming to create systems that are not only more efficient and accurate but also capable of understanding and interacting with the world in ways that are meaningful and helpful to humans.




Credits

Lauren Hinkel originally wrote and reported this story for MIT News on January 18, 2024, under the title "Reasoning and reliability in AI: PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage."

Photo: An artist’s illustration of artificial intelligence (AI) by Ariel Lu, representing ethical research into the human involvement in data labeling. Photo © Google DeepMind.