
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models

Current literature proposes the Algorithm of Thoughts, a strategy to boost the reasoning capacities of Large Language Models (LLMs) by employing algorithmic examples. This technique expands idea exploration with only a few queries, outperforming earlier single-query methods and standing on par with a recent multi-query strategy. We explore the efficacy and nuances of the Algorithm of Thoughts in practice, and the findings suggest that instructing an LLM with an algorithm can yield performance surpassing that of the algorithm itself.

Recent developments in LLMs have demonstrated their efficacy in general problem solving, code generation, and instruction following. However, existing strategies that halt, modify, and resume the generation process to enhance LLMs' reasoning capacities incur increased cost, memory, and computational overhead. The Algorithm of Thoughts addresses this issue by propelling LLMs along algorithmic reasoning pathways through in-context learning. By exploiting the innate recurrence dynamics of LLMs, this strategy reduces the number of query requests while achieving results comparable to a multi-query strategy with extensive tree search.
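To make the single-query idea concrete, here is a minimal sketch of how an Algorithm-of-Thoughts-style prompt might be assembled: one request embeds a worked, DFS-like exploration trace (with explicit pruning and backtracking) as an in-context example, so the model continues the search itself instead of being re-queried at every branch. The task (Game of 24), the exemplar trace, and the function name `build_aot_prompt` are illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical Algorithm-of-Thoughts prompt construction (illustrative only).
# The exemplar below mimics a depth-first search trace: try an operation,
# prune dead ends, backtrack, and finish with a verified answer.

ALGORITHMIC_EXAMPLE = """\
Task: use 8 6 4 4 and + - * / to reach 24.
Trying 8 + 6 = 14 (left: 14 4 4) -> 14 * 4 = 56, too large; backtrack.
Trying 6 - 4 = 2 (left: 8 2 4) -> 8 * 2 = 16, 16 + 4 = 20; backtrack.
Trying 8 / 4 = 2 (left: 2 6 4) -> 2 * 6 = 12, 12 * 4 = 48; backtrack.
Trying 4 + 4 = 8 (left: 8 8 6) -> 8 * 6 = 48, 48 - 8 = 40; backtrack.
Trying 8 - 6 = 2 (left: 2 4 4) -> 2 + 4 = 6, 6 * 4 = 24. Found it.
Answer: ((8 - 6) + 4) * 4 = 24
"""

def build_aot_prompt(numbers: list[int]) -> str:
    """Return a single prompt carrying the whole search recipe in-context."""
    task = " ".join(str(n) for n in numbers)
    return (
        "Solve the puzzle by exploring operations depth-first, "
        "pruning dead ends and backtracking, as in the example.\n\n"
        f"{ALGORITHMIC_EXAMPLE}\n"
        f"Task: use {task} and + - * / to reach 24.\n"
        "Trying"
    )

prompt = build_aot_prompt([5, 5, 5, 9])
print(prompt)
```

The point of the sketch is that the entire tree exploration lives inside one generation: the model sees how branches are opened, judged, and abandoned, and continues that pattern for the new task, rather than the controller issuing a fresh query per node as in multi-query tree-search methods.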