Everybody talks about how powerful LLMs are, but neither Bard nor ChatGPT seems to come close to solving the following (easy) problem:

Given a stock portfolio with an annual return of r=5%, and the choice of either paying 30% tax on the portfolio's gains each year, or paying 24% tax on the capital-gains component of whatever you withdraw after 10 years, what is the effective annual after-tax return of each portfolio after 10 years?
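For reference, the arithmetic being asked for can be sketched in a few lines of Python. This assumes the yearly tax in the first scenario is paid out of the portfolio itself (so it compounds at the after-tax rate), and it ignores real-world details like loss carryforwards or tax on the initial principal:

```python
r = 0.05      # annual portfolio return
years = 10

# Scenario A: pay 30% tax on each year's gains, so the
# portfolio compounds at r * (1 - 0.30) = 3.5% per year.
final_a = (1 + r * (1 - 0.30)) ** years

# Scenario B: let gains compound untaxed, then pay 24%
# capital-gains tax on the total gain at withdrawal.
gross_b = (1 + r) ** years
final_b = 1 + (gross_b - 1) * (1 - 0.24)

# Effective annualized after-tax return of each strategy.
annual_a = final_a ** (1 / years) - 1   # exactly 3.5%
annual_b = final_b ** (1 / years) - 1   # roughly 3.98%
```

Under these assumptions, deferring the tax wins: the deferred-tax portfolio annualizes at about 3.98% versus a flat 3.5% for the tax-every-year portfolio, because the untaxed gains keep compounding for the full decade.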

@HalvarFlake what's the answer? Need to test it with one of them.
@HalvarFlake It seems impossible to explain to most folks that LLMs are literally incapable of math.

The "big breakthrough" for the next OpenAI model seems to be that they were able to slice in a parser for math to detect and perform basic arithmetic.
@HalvarFlake i wonder if bloomberggpt could do it https://arxiv.org/abs/2303.17564
BloombergGPT: A Large Language Model for Finance

The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg's extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology. We release Training Chronicles (Appendix C) detailing our experience in training BloombergGPT.

@HalvarFlake I pity anyone using the results of any of these models to compute the future value of Series I Savings Bonds ("I Bonds"). All of the models hallucinate fantastically wrong answers that sound plausibly correct to the laity.