August 29, 2023

Microsoft and Virginia Tech’s Research Reveals New In-Context Learning Strategy for LLMs

In Brief

Microsoft and Virginia Tech researchers have published a paper proposing that large language models be prompted with in-context examples of algorithms.

The researchers claim that this strategy pioneers a new mode of in-context learning and can produce results that surpass the algorithm itself.

The paper suggests that LLMs possess an innate capability to weave their intuition into algorithmic searches, optimizing them for better outcomes.


Microsoft and Virginia Tech researchers recently published a paper exploring a new prompting strategy for large language models (LLMs).

In the paper, titled “Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models,” the researchers propose prompting LLMs with examples of algorithms, calling the method “Algorithm of Thoughts” (AoT).

The paper claims that this strategy pioneers a new mode of in-context learning and produces results that surpass the algorithm itself. It also suggests that, prompted this way, LLMs can integrate their intuition into the search and optimize it for better outcomes.

The paper notes that LLMs have traditionally been prompted with methods such as “Chain-of-Thought,” “Self-consistency,” and “Least-to-Most prompting.”

However, each of these methods has limitations that restrict its overall effectiveness.

The Limitations of Traditional Prompting Methods

The paper explains that the “Chain-of-Thought” method feeds LLMs examples in which a question unfolds through a series of intermediate reasoning steps to reach an answer.

While effective in enhancing thought coherence, this approach occasionally leads to erroneous intermediate steps. In contrast, AoT encourages LLMs to think algorithmically, generating coherent problem-solving paths that are more intuitive and less prone to such errors.
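To make the contrast concrete, here is a minimal sketch of the two styles of in-context exemplar for the Game of 24; the prompt wording is our own illustration, not taken from the paper:

```python
# Illustrative in-context exemplars (our own wording, not the paper's prompts).
# Task: combine all four numbers with +, -, *, / to reach 24.

cot_exemplar = (
    "Q: Use 8, 6, 4, 2 (each once) with +, -, *, / to make 24.\n"
    "A: 8 * 6 = 48; 48 / 4 = 12; 12 * 2 = 24. Answer: (8 * 6 / 4) * 2."
)  # one linear reasoning chain -- no alternatives considered

aot_exemplar = (
    "Q: Use 8, 6, 4, 2 (each once) with +, -, *, / to make 24.\n"
    "A: Try 8 - 6 = 2, leaving (2, 4, 2): best reachable is 2 * 4 * 2 = 16, too small; backtrack.\n"
    "   Try 8 + 6 = 14, leaving (14, 4, 2): 14 * 2 = 28; 28 - 4 = 24. Found it.\n"
    "   Answer: (8 + 6) * 2 - 4."
)  # a depth-first-style search, dead ends and backtracking spelled out in one generation
```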

The “Self-consistency” and “Least-to-Most prompting” approaches provide structured reasoning paths, but their rigidity limits their adaptability to complex problems. “Self-consistency” generates a variety of reasoning paths and selects the final answer by majority vote, which requires many additional generations per question.
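A rough sketch of what “Self-consistency” costs, where `sample_answer` is a hypothetical stand-in for one sampled LLM completion:

```python
import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    """Hypothetical stand-in for one sampled LLM completion (temperature > 0)."""
    return random.choice(["24", "24", "24", "22"])  # placeholder answer distribution

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    # Each sample is a full, independent generation -- the extra cost the
    # article mentions: n reasoning paths for a single question.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # majority vote on final answers

print(self_consistent_answer("Use 8, 6, 4, 2 to make 24."))
```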

“Least-to-Most prompting” decomposes a problem into smaller subproblems and tackles them sequentially. AoT, by contrast, emphasizes exploration and adaptability, enabling the LLM to consider a range of options for each subproblem and arrive at more comprehensive and creative solutions.
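The control-flow difference can be sketched in a few lines; the `ask_llm` helper below is hypothetical, standing in for a single model call:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a single LLM call."""
    return "<answer to: " + prompt.splitlines()[-1] + ">"

# Least-to-Most: commit to one fixed decomposition and solve it in order,
# feeding each sub-answer into the next call.
def least_to_most(question: str, subproblems: list[str]) -> str:
    context = question
    for sub in subproblems:
        context += "\n" + ask_llm(context + "\nSubproblem: " + sub)
    return ask_llm(context + "\nFinal answer:")

# AoT, by contrast, puts the whole exploration -- candidate moves per subproblem,
# evaluations, backtracking -- into one prompt and one long generation, so the
# model is free to weigh several options at each step rather than follow one path.
```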

The paper also examines the “Tree of Thoughts” (ToT) method, which attempts to overcome these coverage limitations by exploring decision trees but often requires a large number of LLM queries, hurting efficiency. AoT streamlines this process by generating the complete thought process within a single context, reducing the computational burden and improving efficiency.
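A back-of-envelope comparison shows why the query count matters; the branching factor and depth below are made-up numbers for illustration, not figures from the paper:

```python
# ToT issues roughly one LLM call per node it expands or evaluates;
# AoT emits the entire search as one long generation.
branching, depth = 3, 4  # illustrative assumptions, not from the paper

tot_queries = sum(branching ** d for d in range(1, depth + 1))  # 3 + 9 + 27 + 81
aot_queries = 1  # a single, longer prompt-and-generation

print(f"ToT: ~{tot_queries} queries; AoT: {aot_queries} query")  # ToT: ~120 queries
```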

How Effective is AoT?

Since the proposed strategy is still at the research stage, it remains subject to certain limitations. The Microsoft and Virginia Tech researchers ran tests on GPT-4 to gauge AoT's effectiveness.

They acknowledged that although AoT significantly reduces the number of queries compared to the Tree of Thoughts (ToT) approach, it does require more resources than standard prompting and the Chain-of-Thought (CoT) method.

The heightened resource demand is a consequence of AoT's technique of exploring ideas through token generation.
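In other words, AoT trades query count for output length. A toy comparison of the trade-off; the token counts are invented for illustration, not measurements from the paper:

```python
# Invented, order-of-magnitude numbers to show the trade-off the researchers
# describe -- not figures from the paper.
methods = {
    "standard prompting":    {"queries": 1,   "tokens_per_query": 20},
    "chain-of-thought":      {"queries": 1,   "tokens_per_query": 200},
    "tree-of-thoughts":      {"queries": 120, "tokens_per_query": 50},
    "algorithm-of-thoughts": {"queries": 1,   "tokens_per_query": 1500},
}
for name, m in methods.items():
    total = m["queries"] * m["tokens_per_query"]
    print(f"{name}: {m['queries']} queries, ~{total} tokens generated")
```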

“Crafting token-efficient algorithmic examples is one avenue, but there’s also potential in judiciously tapping into or unlocking the LLM’s ‘tunnel-vision,’” the researchers said, highlighting the limitations of their strategy.

To overcome these limitations, the researchers propose that future work focus on crafting algorithmic examples that are more token-efficient.

They also suggest developing adaptive mechanisms that activate the LLM's “tunnel-vision” more effectively, thereby enhancing the search process. Additionally, they stress the need for a deeper theoretical understanding of this new mode of in-context learning before it can be put into practice.


About The Author

Cindy is a journalist at Metaverse Post, covering topics related to web3, NFT, metaverse and AI, with a focus on interviews with Web3 industry players. She has spoken to over 30 C-level execs and counting, bringing their valuable insights to readers. Originally from Singapore, Cindy is now based in Tbilisi, Georgia. She holds a Bachelor's degree in Communications & Media Studies from the University of South Australia and has a decade of experience in journalism and writing. Get in touch with her via cindy@mpost.io with press pitches, announcements and interview opportunities.
