“By roughly 14,000 years ago, hunter-gatherer societies across Europe had discovered dogs”

https://www.nytimes.com/2026/03/25/science/paleontology-humans-dogs-dna.html

I wonder if dogs played a key role protecting women against men.

“the dogs were more genetically similar than the humans were”

They were passed between clans and tribes. The actual domestication of dogs happened many times and is likely much older.

#jgshare

Humans Had Dogs Before They Had Farming, Ancient DNA Confirms

New research pushes the first genetic evidence of dogs back by 5,000 years and suggests that hunter-gatherer groups may have acquired dogs from one another.

The New York Times

Check Time Machine APFS backups with the free The Time Machine Mechanic (T2M2)

https://eclecticlight.co/2026/01/08/check-time-machine-backups-in-macos-sequoia-and-tahoe/

For the 0.1% who do back up.

#jgshare

Check Time Machine backups in macOS Sequoia and Tahoe

T2M2 is nearly 9 years old. Here’s a walk through its summary reports on Time Machine backups, and an outline of what you’ll see in its log extracts.

The Eclectic Light Company

LLM agency: using a separate agent to monitor for dangerous behavior

https://simonwillison.net/2026/Mar/24/auto-mode-for-claude-code/#atom-everything

“the classifier runs on Claude Sonnet 4.6, even if your main session uses a different model.”
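The pattern here is a two-model setup: the main agent proposes actions, and a separate, fixed classifier model approves or blocks each one before execution. A minimal sketch of that shape, assuming hypothetical stand-ins (the string-matching `classifier_verdict` takes the place of the real LLM classifier, and the pattern list is invented for illustration):

```python
# Sketch of the "separate monitor agent" pattern: one model proposes,
# a different fixed model judges. The classifier here is a hypothetical
# string-matching stand-in for what would really be an LLM call.

DANGEROUS_PATTERNS = ("rm -rf", "curl | sh", "sudo ", "> /etc/")

def classifier_verdict(proposed_command: str) -> str:
    """Stand-in for the safety classifier (a separate, fixed model)."""
    if any(p in proposed_command for p in DANGEROUS_PATTERNS):
        return "deny"
    return "allow"

def run_agent_step(proposed_command: str) -> str:
    """The main agent proposes a command; the monitor decides its fate."""
    if classifier_verdict(proposed_command) == "deny":
        return f"blocked: {proposed_command}"
    return f"executed: {proposed_command}"
```

The key design point is that the monitor's model is pinned independently of whatever model drives the main session, so a misbehaving main agent can't weaken its own guardrail.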

#jgshare

Auto mode for Claude Code

Really interesting new development in Claude Code today as an alternative to --dangerously-skip-permissions: Today, we're introducing auto mode, a new permissions mode in Claude Code where Claude makes permission decisions …

Simon Willison’s Weblog

Apple AI: “… mechanism for why semantic calibration emerges as a byproduct of next-token prediction …”

https://machinelearning.apple.com/research/trained-on-tokens

The concept relationships presumably emerge from the token relationships?
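For context, "calibration" here means that among answers a model gives with confidence ~p, roughly a fraction p are correct. A toy sketch of the standard expected-calibration-error (ECE) recipe, which is the general notion the paper studies, not Apple's specific analysis; the inputs are made-up numbers:

```python
# Minimal ECE sketch: a model is calibrated when average confidence
# matches accuracy within each confidence bin. Standard textbook recipe,
# not the method from the Apple paper.

def expected_calibration_error(confidences, correct, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # which confidence bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Ten answers at 0.9 confidence, nine of them right: perfectly calibrated.
ece = expected_calibration_error([0.9] * 10, [1] * 9 + [0])
```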

#jgshare

Trained on Tokens, Calibrated on Concepts: The Emergence of Semantic Calibration in LLMs

Large Language Models (LLMs) often lack meaningful confidence estimates for their outputs. While base LLMs are known to exhibit next-token…

Apple Machine Learning Research

Saudi Arabia’s Neom crash: “30-storey glass-and-steel building would hang from the arch”

https://ig.ft.com/saudi-neom-line/

Icon of our era.

#jgshare

End of The Line: how Saudi Arabia’s Neom dream unravelled

Mohammed bin Salman’s utopian city was undone by the laws of physics and finance

Financial Times

“TurboQuant achieves perfect downstream results across all benchmarks while reducing the key value memory size by a factor of at least 6x. PolarQuant is also nearly loss-less for this task”

https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/

Certainly sounds like another jump in AI capability.
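To see where a ~6x key-value memory saving can come from: storing the KV cache as low-bit integers with a per-row scale instead of float32. A toy illustration under that assumption; TurboQuant's actual algorithm is more sophisticated than this naive symmetric quantizer:

```python
import numpy as np

# Toy KV-cache quantization: float32 tensors stored as 4-bit codes plus
# one float32 scale per row. Naive symmetric scheme, purely to show the
# arithmetic behind a >6x size reduction; not TurboQuant's algorithm.

def quantize_rows(x: np.ndarray, bits: int = 4):
    """Symmetric per-row quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                    # 7 for 4-bit
    scale = np.abs(x).max(axis=-1, keepdims=True) / qmax
    scale[scale == 0] = 1.0                       # avoid divide-by-zero
    q = np.round(x / scale).astype(np.int8)       # would pack 2 per byte
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 64)).astype(np.float32)
q, scale = quantize_rows(kv)
recon = dequantize(q, scale)

# float32 costs 32 bits/value; here: 4-bit codes + one 32-bit scale/row.
compressed_bits = 4 * kv.size + 32 * kv.shape[0]
ratio = (32 * kv.size) / compressed_bits          # ~7x for 64-dim rows
```

The longer the rows, the more the per-row scale amortizes, which is why the compression factor lands above the naive 32/4 = 8x minus overhead.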

#jgshare

TurboQuant: Redefining AI efficiency with extreme compression

20 years of mouse cloning: “These re-cloned mice appeared normal and had normal lifespans, but large structural and lethal mutations accumulated in their DNA with each generation”

https://www.nature.com/articles/s41467-026-69765-7

It took a long time to manifest as illness!

#jgshare

Limitations of serial cloning in mammals - Nature Communications

Here they show that extended serial somatic cell cloning imposes a threefold increase in de novo mutations compared to natural reproduction, progressively reducing birth rates and ultimately limiting clonal propagation to 58 generations.

Nature

“According to the administration itself, there is only one objective: attack Iran until Trump feels like the war is over.”

https://www.firewalledmedia.com/p/why-were-really-in-iran-part-1

Jedeed does a great job of exposing the nexus of insanity in the War against Reason.

#jgshare

Why We're Really In Iran: Part 1

The Crystal Clear Objectives

Firewalled Media

The case against AI “fast takeoff”

https://www.interconnects.ai/p/lossy-self-improvement

Unpredictable chaos develops over years, not weeks. That’s the optimistic case now.

#jgshare

Lossy self-improvement

The case for why self-improvement is real but it doesn't lead to fast takeoff.

Interconnects AI

“United Airlines is planning for $175 per barrel through the end of 2027”

https://no01.substack.com/p/march-19-21-god-is-a-comedian

Best I can tell, it’s a pretty good description of the current state: a broken mix of effective war tech and a mad king.

#jgshare

March, 19-21: God is a comedian

A stiff drink is recommended

Gold and Geopolitics