
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thought prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) for both few-shot and zero-shot setups. In both settings, PoT shows an average performance gain of around 12% over CoT across all the evaluated datasets. By combining PoT with self-consistency decoding, we can achieve SoTA performance on all math problem datasets and near-SoTA performance on financial datasets. All of our data and code are released on GitHub at https://github.com/wenhuchen/Program-of-Thoughts
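To make the division of labor concrete, here is a minimal sketch of the PoT loop: the model's job ends at emitting a program, and a Python interpreter derives the answer. The example problem and the `generated_program` string are hypothetical stand-ins for an actual LLM completion, not material from the paper.

```python
# Minimal sketch of the Program-of-Thoughts idea: the LLM writes a
# Python program, and an external interpreter does the computation.
# `generated_program` stands in for a hypothetical Codex/LLM completion.

generated_program = """
# Q: A store sells pens at $3 each. Tom buys 4 pens and pays with a
#    $20 bill. How much change does he get?
cost_per_pen = 3
num_pens = 4
paid = 20
ans = paid - cost_per_pen * num_pens
"""

def execute_program(program: str) -> object:
    """Run the generated program and read back the `ans` variable,
    so all arithmetic happens in the interpreter, not the LLM."""
    namespace: dict = {}
    exec(program, namespace)  # caution: only execute sandboxed/trusted code
    return namespace.get("ans")

print(execute_program(generated_program))  # -> 8
```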
Addendum 11
Making Large Language Models Better Reasoners w. Alignment
https://arxiv.org/abs/2309.02144
* reasoning: cognitive process of using evidence to reach sound conclusions
* fine-tuning LLMs on chain-of-thought (CoT) data significantly enhances reasoning
* however, fine-tuned LLMs frequently assign higher scores to subpar CoTs ("Assessment Misalignment")
* Alignment Fine-Tuning (AFT), 3 steps: fine-tune on CoT data; generate multiple CoT responses, categorized as correct/incorrect; calibrate scores with a constraint alignment loss
#LLM #LargeLanguageModels #ChainOfThought #ProgramOfThought #reasoning


Making Large Language Models Better Reasoners with Alignment
Reasoning is a cognitive process of using evidence to reach a sound conclusion. The reasoning capability is essential for large language models (LLMs) to serve as the brain of an artificial general intelligence agent. Recent studies reveal that fine-tuning LLMs on data with the chain-of-thought (CoT) reasoning process can significantly enhance their reasoning capabilities. However, we find that the fine-tuned LLMs suffer from an Assessment Misalignment problem, i.e., they frequently assign higher scores to subpar CoTs, leading to potential limitations in their reasoning abilities. To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with CoT training data; 2) generating multiple CoT responses for each question and categorizing them into positive and negative ones based on whether they reach the correct answer; 3) calibrating the scores of positive and negative responses given by the LLM with a novel constraint alignment loss. Specifically, the constraint alignment loss has two objectives: a) Alignment, which guarantees that positive scores surpass negative scores to encourage answers with high-quality CoTs; b) Constraint, which keeps the negative scores confined to a reasonable range to prevent model degradation. Beyond binary positive and negative feedback, the constraint alignment loss can be seamlessly adapted to ranking situations when ranking feedback is accessible. Furthermore, we delve into recent ranking-based alignment methods, such as DPO, RRHF, and PRO, and discover that the constraint, which has been overlooked by these approaches, is also crucial for their performance. Extensive experiments on four reasoning benchmarks with both binary and ranking feedback demonstrate the effectiveness of AFT.
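The abstract names the two objectives but not the exact formula, so here is a heavily hedged sketch of what a loss of this shape could look like. The score definition (e.g. a length-normalized log-likelihood), the pairwise hinge form, and the `align_margin`/`floor` values are all illustrative assumptions, not the paper's formulation.

```python
import torch

def constraint_alignment_loss(pos_scores: torch.Tensor,
                              neg_scores: torch.Tensor,
                              align_margin: float = 0.1,
                              floor: float = -2.0) -> torch.Tensor:
    """Illustrative sketch of a constraint alignment loss (not the paper's
    exact formulation). `pos_scores`/`neg_scores` are model scores for
    correct/incorrect CoT responses.

    Alignment term: every positive score should exceed every negative
    score by a margin. Constraint term: negative scores must not be
    pushed arbitrarily low, which would degrade the underlying LM.
    """
    # Alignment: hinge loss over all positive/negative pairs.
    gaps = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)   # shape [P, N]
    alignment = torch.clamp(align_margin - gaps, min=0).mean()
    # Constraint: penalize negative scores that fall below a floor.
    constraint = torch.clamp(floor - neg_scores, min=0).mean()
    return alignment + constraint

pos = torch.tensor([-0.4, -0.6])   # scores of correct CoTs (hypothetical)
neg = torch.tensor([-0.5, -3.1])   # scores of incorrect CoTs (hypothetical)
print(constraint_alignment_loss(pos, neg))
```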
Addendum 10
When Do Program-of-Thoughts Work for Reasoning?
https://arxiv.org/abs/2308.15452
https://github.com/zjunlp/EasyInstruct
* reasoning capabilities of large language models are pivotal in embodied AI
* program-of-thought prompting has LLMs use a programming language to tackle complex reasoning
* but the specific impact of code data on reasoning capabilities is under-explored
* proposes CIRS (complexity-impacted reasoning score); applied to instruction generation for mathematical reasoning and to code data filtering
#LLM #LargeLanguageModels #ChainOfThought #ProgramOfThought #reasoning #EasyInstruct

When Do Program-of-Thoughts Work for Reasoning?
In the realm of embodied artificial intelligence, the reasoning capabilities of Large Language Models (LLMs) play a pivotal role. Although there are effective methods like program-of-thought prompting for LLMs, which use a programming language to tackle complex reasoning tasks, the specific impact of code data on the improvement of reasoning capabilities remains under-explored. To address this gap, we propose the complexity-impacted reasoning score (CIRS), which combines structural and logical attributes to measure the correlation between code and reasoning abilities. Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity by considering the difficulty and the cyclomatic complexity. Through an empirical analysis, we find that not all code data of varying complexity can be learned or understood by LLMs; an optimal level of complexity is critical to the improvement of reasoning abilities by program-aided prompting. We then design an auto-synthesizing and stratifying algorithm and apply it to instruction generation for mathematical reasoning and to code data filtering for code generation tasks. Extensive results demonstrate the effectiveness of our proposed approach. Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
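As a rough illustration of the two ingredients the abstract names (AST-based structural information plus cyclomatic complexity), here is a toy Python scorer. The combination formula, the weighting, and the omission of the paper's difficulty term are assumptions made for the sketch; this is not the actual CIRS definition.

```python
import ast

# Toy sketch inspired by CIRS: score code by combining a structural
# signal (AST size) with a logical signal (approximate cyclomatic
# complexity). The formula below is invented for illustration.

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.With, ast.Assert, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(tree: ast.AST) -> int:
    """Roughly: 1 + number of branch points; `and`/`or` chains
    add one per extra operand."""
    score = 1
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1
    return score

def structural_size(tree: ast.AST) -> int:
    """Structural attribute: total number of AST nodes."""
    return sum(1 for _ in ast.walk(tree))

def toy_reasoning_score(code: str) -> float:
    tree = ast.parse(code)
    # Illustrative combination; CIRS's actual difficulty term is omitted.
    return structural_size(tree) * cyclomatic_complexity(tree) ** 0.5

snippet = "total = 0\nfor x in range(10):\n    if x % 2 == 0:\n        total += x\n"
print(toy_reasoning_score(snippet))
```

A filter in the spirit of the paper would then keep only code samples whose score falls inside some mid-range band, since the abstract reports that an optimal (neither trivial nor extreme) complexity level is what helps reasoning.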