A Brief History of Artificial Intelligence
Like any complex technology, Artificial Intelligence has its roots in a number of fields. From philosophy to computer science, mathematics to linguistics, tracing the history of AI and automation is a difficult business. The field was officially named in the 1950s, but ideas about automated machines have existed since long before then. This is a history of the development of Artificial Intelligence from some of its earliest philosophical and theoretical inceptions through to modern day […]
https://leonfurze.com/2023/02/11/a-brief-history-of-artificial-intelligence/
"We argue that the expansion of the token budget does not resolve a deeper constraint: under structural uncertainty, the decisive variable is not how many questions can be answered but which questions are worth asking -- a problem of agency and direction that computation alone cannot solve."
https://arxiv.org/abs/2603.06630
#LLM #physics #informationTheory #thermodynamics #entropy #economics

Debates about artificial intelligence capabilities and risks are often conducted without quantitative grounding. This paper applies the methodology of MacKay (2009) -- who reframed energy policy as arithmetic -- to the economy of AI computation. We define the token, the elementary unit of large language model input and output, as a physical quantity with measurable thermodynamic cost. Using Landauer's principle, Shannon's channel capacity, and current infrastructure data, we construct a supply-and-demand balance sheet for global token production. We then derive a finite question budget: the number of meaningful queries humanity can direct at AI systems under physical, information-theoretic, and economic constraints. We apply Coase's theory of the firm and the durable-goods monopoly problem to the AI value chain -- from photon to atom to chip to power to token to question -- to identify where economic value concentrates and where regulatory intervention is warranted. We argue that the expansion of the token budget does not resolve a deeper constraint: under structural uncertainty, the decisive variable is not how many questions can be answered but which questions are worth asking -- a problem of agency and direction that computation alone cannot solve. We connect limits of measurement in the token economy to a structural parallel between Goodhart's law and the Heisenberg uncertainty principle, and to Arrow's impossibility result for efficient information pricing. The framework yields order-of-magnitude estimates that discipline policy discussion: at current efficiency, the projected 2028 US AI energy allocation of 326~TWh could support roughly $6.5 \times 10^{17}$ tokens per year, or 225,000 tokens per person per day -- more than three orders of magnitude above estimated mid-2024 utilization.
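The abstract's closing estimate can be checked with back-of-envelope arithmetic. The sketch below reproduces it from the two quoted figures (326 TWh and $6.5 \times 10^{17}$ tokens per year); the world population of ~8.0 billion is an assumption not stated in the abstract, and the implied per-token energy cost is derived, not quoted.

```python
# Back-of-envelope check of the token-budget arithmetic quoted above.
# Assumption: world population ~8.0e9 (not given in the abstract).
ENERGY_TWH = 326            # projected 2028 US AI energy allocation
TOKENS_PER_YEAR = 6.5e17    # quoted annual token supply at current efficiency
POPULATION = 8.0e9          # assumed world population

joules = ENERGY_TWH * 1e12 * 3600          # TWh -> joules
j_per_token = joules / TOKENS_PER_YEAR     # implied energy cost per token
per_person_per_day = TOKENS_PER_YEAR / (POPULATION * 365)

print(f"{j_per_token:.2f} J per token")
print(f"{per_person_per_day:,.0f} tokens per person per day")
```

With these inputs the result lands at roughly 1.8 J per token and ~220,000 tokens per person per day, consistent with the abstract's "225,000 tokens per person per day" figure.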


Meaning may be "irrelevant" to C. E. Shannon's "Mathematical Theory of Communication", but context is not: the transmission of a message begins with an "information source" and arrives at a "destination." The distance between these two contexts, the source and the destination, creates the possibility of interference in the transmission.
Cheat-Sheet: Information theory and coding
Here is the cheat sheet for Information theory and coding. You can also choose to use the PDF version. Please inform me if you spot errors. Use the cheat-sheet at your own risk. There are no guarantees in life! ITC Cheatsheet (download):
https://bharadwajls.space/tech/cheat-sheet-information-theory-and-coding/
I'm not saying it's aliens: https://doi.org/10.5281/zenodo.18452427 #3IATLAS #InterstellarObject #OHSpectroscopy #MeerKAT #SETI #astrophysics #ComplexSystems #InformationTheory #StateMachines @setiinstitute @Astrodon
Consider: OH lines in 3I/ATLAS appear to show non‑Markovian behavior.
A turbulent coma shouldn’t remember an asymmetry for 12 days, invert it, gate a hyperfine line, and terminate at alignment.
Systems without memory don’t produce sequences.
Between 24 October and 18 December 2025, Interstellar Object 3I/ATLAS was observed repeatedly in the OH 1665/1667 MHz lines by MeerKAT, with a final L‑band technosignature search performed by the Green Bank Telescope. Although each observation was published independently, the combined sequence forms a discrete, bounded episode: an initial asymmetric detection, a clean non‑detection, a rapid inversion of hyperfine line widths, a selective suppression event, a brief return to dual‑line emission, and a final null result. This paper examines the entire timeline through a protocol lens: a speculative exercise asking whether the published data are consistent with a finite‑state transmission sequence. No claim of artificiality is made; instead, the analysis highlights how the ordered structure of the observations resembles a preamble → guard interval → channel test → symbol state → termination pattern familiar from engineered communication systems.
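The "protocol lens" described above can be made concrete as a small finite-state model. This is an illustrative sketch only: the state names follow the post's own labels, and the event-to-phase mapping is one hypothetical reading of the published sequence, not a claim about 3I/ATLAS itself.

```python
# Finite-state sketch of the observation timeline described in the post.
# Phase names follow the post's preamble -> guard interval -> channel test
# -> symbol state -> termination pattern; the mapping is hypothetical.
from enum import Enum

class Phase(Enum):
    PREAMBLE = 1        # initial asymmetric OH detection
    GUARD_INTERVAL = 2  # clean non-detection
    CHANNEL_TEST = 3    # rapid inversion of hyperfine line widths
    SYMBOL_STATE = 4    # selective suppression, brief dual-line return
    TERMINATION = 5     # final L-band null result (Green Bank Telescope)

# Ordered events as summarized in the post (24 Oct - 18 Dec 2025)
TIMELINE = [
    ("asymmetric OH 1665/1667 MHz detection", Phase.PREAMBLE),
    ("clean non-detection", Phase.GUARD_INTERVAL),
    ("hyperfine line-width inversion", Phase.CHANNEL_TEST),
    ("selective suppression, then dual-line emission", Phase.SYMBOL_STATE),
    ("final GBT technosignature null result", Phase.TERMINATION),
]

# A bounded, non-revisiting episode: phases appear in strictly
# increasing order, as an engineered transmission sequence would.
phases = [phase for _, phase in TIMELINE]
assert phases == sorted(phases, key=lambda p: p.value)
```

The assertion encodes the post's central observation: the episode is discrete and bounded, with no state revisited once left. A turbulent coma has no mechanism to enforce that ordering, which is what motivates the speculative comparison.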