
Reading Notes: “I am a strange loop” by D. Hofstadter
The recent breakthroughs in AI exemplified by ChatGPT have rekindled my interest in the philosophy of mind. Many well-known thinkers are looking at state-of-the-art large language models (LLMs) such…
Hear me out: Ensemble of LLMs, where the aggregation process is a debate followed by a popular vote.
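The idea above can be sketched in a few lines. This is a toy, not a real multi-LLM system: the `models` are hypothetical callables standing in for actual LLM API calls, and the debate is simulated by letting each model see the others' previous answers.

```python
from collections import Counter

def debate_then_vote(question, models, rounds=2):
    """Toy ensemble aggregation: each model answers, then revises its
    answer over a few 'debate' rounds while seeing the others' answers,
    and the most common final answer wins the popular vote.
    `models` are hypothetical callables (question, context) -> answer."""
    answers = [m(question, []) for m in models]
    for _ in range(rounds):
        answers = [m(question, answers) for m in models]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

# Stub "models" standing in for real LLM calls.
stubborn = lambda ans: (lambda q, ctx: ans)        # never changes its mind
swayed = lambda q, ctx: Counter(ctx).most_common(1)[0][0] if ctx else "B"

print(debate_then_vote("2+2?", [stubborn("4"), stubborn("4"), swayed]))
# The 'swayed' model converges to the majority answer "4" during debate.
```

The interesting design question is how much the debate rounds should be allowed to homogenize the answers before the vote; too much and the ensemble collapses to a single opinion.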
Do LLMs put the "Chinese room" (or whatever-language room) argument to rest, favoring the notion that "understanding is just symbol manipulation"? At first glance, it may seem so. But one quickly realizes that the grounding problem is the crux of the matter.
https://en.wikipedia.org/wiki/Chinese_room
Building apps with LLMs today is like building a house with a new, shiny playdough-like material. People marvel at the appearance and ease of construction, without realizing the house won't stand for long unless it is reinforced inside with less glamorous but tougher conventional materials.

Mini Book Review: “The Shape of a Life” by S.-T. Yau and S. Nadis
This book is the autobiography of Prof. Shing-Tung Yau, a world-famous mathematician known for his contribution to geometry, particularly the branch of geometry called geometric analysis, which he …
Astonishing new results from an invasive brain-computer interface for restoring text entry and speech output for users who are unable to move or speak due to motor impairments. Willett et al. (2023) showed in a paper on bioRxiv that a speech output rate of 60+ words per minute was achieved by decoding single-neuron-level signals from the left BA6v (premotor cortex) and BA44 (part of Broca's area) using an RNN, in a single individual with ALS. Word error rate analyses are also reported in the paper.
https://www.biorxiv.org/content/10.1101/2023.01.21.524489v1.full.pdf
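To illustrate the shape of the decoding step (a recurrent network mapping a sequence of neural-feature frames to a sequence of labels), here is a minimal toy RNN in pure Python. It is not the paper's model: the sizes, weights, and "phoneme classes" are made-up placeholders, and greedy per-frame argmax stands in for the real decoder.

```python
import math
import random

random.seed(0)

def rnn_decode(frames, W_h, W_x, W_out, n_hidden):
    """Toy single-layer tanh RNN: maps a sequence of neural-feature
    frames to one class label per frame via greedy argmax.
    Illustrative only; all weights here are random placeholders."""
    h = [0.0] * n_hidden
    labels = []
    for x in frames:
        h = [math.tanh(sum(W_h[i][j] * h[j] for j in range(n_hidden)) +
                       sum(W_x[i][k] * x[k] for k in range(len(x))))
             for i in range(n_hidden)]
        scores = [sum(W_out[c][i] * h[i] for i in range(n_hidden))
                  for c in range(len(W_out))]
        labels.append(max(range(len(scores)), key=scores.__getitem__))
    return labels

# Made-up sizes: 4 neural features per frame, 8 hidden units, 3 classes.
n_in, n_hidden, n_classes = 4, 8, 3
rnd = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
W_h, W_x, W_out = rnd(n_hidden, n_hidden), rnd(n_hidden, n_in), rnd(n_classes, n_hidden)

frames = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(5)]
print(rnn_decode(frames, W_h, W_x, W_out, n_hidden))
```

The real system's difficulty is, of course, not the RNN itself but getting stable, informative single-neuron signals to feed it.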
Is the cognitive burden of deciding whether information is correct, plus the cost of following up on wrong answers, amortized over many queries, larger or smaller than the reduction in cognitive burden from having answers summarized in a few sentences?
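The question above is an expected-cost comparison, which can be made concrete with a back-of-the-envelope formula. All the parameter values here are made-up assumptions in arbitrary "effort" units, purely to show the trade-off.

```python
def net_benefit(p_wrong, verify_cost, followup_cost, summary_savings):
    """Expected per-query change in cognitive burden from using an LLM
    summary instead of search results: positive means the summary helps
    on balance. Units are arbitrary effort points; all inputs are
    made-up assumptions for illustration."""
    expected_burden = verify_cost + p_wrong * followup_cost
    return summary_savings - expected_burden

# E.g. if verifying an answer costs 1 point, chasing down a wrong answer
# costs 10 points, and a good summary saves 5 points of reading:
print(net_benefit(p_wrong=0.1, verify_cost=1.0, followup_cost=10.0,
                  summary_savings=5.0))  # 5 - (1 + 0.1*10) = 3.0
```

Under these toy numbers the summary wins, but the answer flips once the error rate or the follow-up cost gets large enough (here, at p_wrong = 0.4 the benefit reaches zero).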
In the next few months, many grade-school math teachers will gradually perfect the "technique" of turning word problems into problems with graphs and diagrams in order to make it harder for students to use AI chatbots to solve those problems.
After using ChatGPT and similar LLM chatbots for a while, I believe I have formed a decent mental model for when to trust them versus when to trust a search engine.
New Year's Eve: I'm studying Clifford Algebra while my kids watch the Clifford movie.