The Consciousness Trilogy: Reading Three Wagers on the Question We Cannot Settle

This page exists for readers who want a map of the consciousness sequence published on BolesBlogs in the spring of 2026. Three articles, taken together, cover the contemporary terrain on the deepest question philosophy still asks. Each can be read alone. Read in sequence, they form a coordinated treatment of the consciousness problem that points beyond any single solution toward what the field as a whole has and has not accomplished.

The problem itself is older than philosophy as a discipline. We know that we are conscious because we are reading these words and something is happening as we read them. We extend that knowledge to other people, to animals, and possibly to stones, on grounds that work in practice while collapsing in theory. David Chalmers named the difficulty the hard problem of consciousness in his 1995 paper “Facing Up to the Problem of Consciousness,” and the difficulty has not been resolved in the thirty-one years since. Why does any arrangement of physical stuff feel like something from the inside? Why does any neural configuration produce the experience of redness, sourness, dread, or hope? No materialist account has explained this convincingly, and the standard moves to dissolve the question have either denied that consciousness exists in the way we ordinarily mean (illusionism), extended consciousness to every level of organization (panpsychism), or made consciousness the only substrate with matter as its appearance (analytic idealism). The three articles treat each of these alternatives in turn, by way of its strongest contemporary defender.

The first article, The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle, takes up Iain McGilchrist’s 2021 book The Matter With Things and his proposal that matter is one phase of consciousness rather than its source, the way ice and vapor are phases of water. It evaluates his case, with credit for what works and pressure on what fails, and concludes that panpsychism remains a serious option whose central difficulty (the combination problem of how micro-experiences merge into macro-experiences) has not been adequately addressed in McGilchrist’s work or in the panpsychist tradition more broadly.

The second article, Consciousness Explained Away: Daniel Dennett’s Illusionism and the Theory That Spends Its Own Foundation, considers Dennett’s lifelong project of arguing that phenomenal consciousness as ordinarily conceived does not exist. It gives Dennett full credit for his demolition of the Cartesian Theater and his contributions to cognitive science, while showing why the central illusionist claim (that consciousness is a user illusion the brain stages for itself) collapses on close inspection because illusions presuppose conscious subjects to whom they appear. Written in the wake of Dennett’s death in April 2024, the piece tries to argue with him at the level of seriousness his work always demanded.

The third article, The Dissociated Universe: Bernardo Kastrup’s Analytic Idealism and the Mind That Contains the World, examines Bernardo Kastrup’s claim that reality is mental at base, with individual minds being dissociated alters of universal consciousness, comparable to the alternate personalities that appear in cases of Dissociative Identity Disorder. It presents Kastrup’s strongest moves, including the empirical work on psychedelics, NDEs, and quantum measurement, and tests them against the difficulties his position inherits, including the decombination problem, the contested status of DID as a clinical category, and the challenge of accounting for the resistance the world offers to subjective will. It closes by drawing the three positions together and showing what the trilogy as a whole accomplishes.

Several reading paths are available, depending on what the reader brings to the sequence.

A reader new to philosophy of mind should start with the first article. McGilchrist provides the easiest entry into the territory because his prose is generous and his analogies accessible, and the article demonstrates the analytical method that the next two will apply to harder cases. Read the second article next for the materialist counter-position, then the third for the closing turn that completes the triangulation.

Readers already familiar with the consciousness debate can take the articles in any order, since each contains a self-contained treatment of its primary subject. The third article’s closing section synthesizes all three positions and may serve as a useful entry point for the impatient reader, who can then proceed to whichever individual article most interests her.

Skeptics of the entire enterprise should start with the second article. Dennett offers the most aggressive case against making the consciousness question a serious metaphysical issue, and the article’s evaluation of why his case nonetheless fails will give the skeptical reader a more accurate sense of why the question persists than any defense of consciousness as fundamental could provide.

Readers of theological or contemplative orientation will find the third article most directly engaged with positions that have been held in non-Western contemplative traditions for thousands of years. Kastrup himself acknowledges the affinity between analytic idealism and Advaita Vedanta, and the article’s treatment of his arguments may help such readers see how a contemporary philosopher with two doctorates and a CERN background defends positions that might otherwise be dismissed as mystical.

What the trilogy as a whole accomplishes is mapping the contemporary terrain in enough detail that a reader can see why the consciousness problem remains genuinely open after three centuries of modern philosophy and two and a half millennia of pre-modern reflection. None of the three thinkers has solved the problem. Each has identified real difficulties in the others. The honest verdict is that the consciousness question may not be solvable by argument alone, and that the next generation of work in this area will need to go beyond the choice among materialism, panpsychism, illusionism, and idealism, and find some way of asking the question that the current frame cannot accommodate.

That said, the trilogy demonstrates what philosophy at its best can do. The standard runs through every article: identify what works, press what fails, name what survives. The discipline involves refusing to settle prematurely and refusing to mystify when settling becomes impossible. Readers who follow the sequence to its end will walk away with sharper questions and fewer false certainties than when they began, which is what serious reading is supposed to do.

A note on the wager metaphor used throughout the trilogy. Each of the three thinkers placed a bet about what consciousness is and what it requires. McGilchrist bet that consciousness reaches all the way down into matter as one of its phases. Dennett bet that consciousness as ordinarily conceived does not exist and that the appearance of inwardness is a user illusion the brain stages for itself. Kastrup bet that consciousness is the only thing there is and that matter is its appearance under conditions of dissociation. Each wager was placed honorably and pursued with rigor. None has paid off in the sense the bettor intended. All three have produced philosophical work that will outlast the lifetimes of those who placed the bets, which is the most honest verdict serious reading can deliver about serious thinkers who have committed themselves to questions that exceed what any single mind can resolve.

The articles run between two thousand seven hundred and three thousand four hundred words each. Each was written for a university-educated audience that respects the difficulty of the question and is willing to follow careful argument to its conclusions. Each is available in its original markdown format. The position taken throughout the trilogy is that getting the question right matters more than choosing a winner among the available answers, and that the best service we can render to the question is to pass it forward in better condition than we found it.

The three articles, in order:

ARTICLE ONE The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle

ARTICLE TWO Consciousness Explained Away: Daniel Dennett’s Illusionism and the Theory That Spends Its Own Foundation

ARTICLE THREE The Dissociated Universe: Bernardo Kastrup’s Analytic Idealism and the Mind That Contains the World

Read alone, each article offers a treatment of its primary subject that does not depend on the others. Read together, the three form a synoptic account of where the contemporary consciousness debate stands and why the answers currently available leave the question genuinely open. The reader who completes the sequence will know more about the topography of the problem than most working philosophers do, and will be in a position to evaluate future contributions to the debate with the analytical tools the trilogy has put in place.

That is what philosophy at its best can offer. The trilogy is offered in that spirit.

#bolesblogs #clinical #consciousness #dennett #inwardness #kastrup #knowing #mcgilchrist #memory #panpsychism #philosophy #physics #trilogy

The Inwardness of Things: McGilchrist, Panpsychism, and the Question We Cannot Settle

The oldest question in philosophy is also the question philosophy has done the worst job of answering. We know that we are conscious because we are reading these words and something is happening as we read them. We feel the weight of our hand on the table, hear the room around us, register a flicker of agreement or doubt as the sentences arrive. None of that requires argument. Descartes drew the line in 1637 with the Discours de la Méthode, and the line still holds. The trouble starts as soon as we look up from the page.

We assume that other people share what we have. They behave as we behave, speak about inner states in language we recognize, and carry nervous systems that resemble ours down to the cellular level. We extend the courtesy of consciousness to them on grounds that work in practice while collapsing in theory, since no one has ever shown another’s experience to themselves directly. The same courtesy reaches dogs and dolphins and the octopus that recognizes a face through aquarium glass. It frays at insects, hesitates at jellyfish, breaks down somewhere around bacteria, and finds itself laughed at when extended to stones. Iain McGilchrist proposes to laugh back. He argues that consciousness reaches all the way down, that the stone has an inwardness, that what we call matter is one phase of consciousness rather than its product. Whether he is correct is the question this essay takes up. Whether we can answer the question at all is the deeper one hidden underneath it.

McGilchrist (Scottish spelling, often misrendered as Ian) holds an Oxford DPhil in literature and qualified in medicine before turning to psychiatry. His 2021 book The Matter With Things runs to fifteen hundred pages across two volumes and ranks among the most ambitious recent attempts to dislodge the materialist consensus that has governed Western thinking since the seventeenth century. His argument deserves serious analysis on its merits and serious challenge on its weaknesses. Treating it as either revelation or absurdity does it equal violence.

Begin with the wall. You know your own consciousness immediately, prior to any argument or evidence. Everything beyond that point is inference. David Chalmers named this gap the hard problem in his 1995 paper “Facing Up to the Problem of Consciousness,” and the gap has not been closed in the thirty-one years since. A complete neuroscience of the brain, mapping every neuron and synapse and electrochemical exchange, would still leave open the question why any of that activity feels like something from the inside. The gap is categorical. We have one set of vocabulary for outsides (mass, charge, position, frequency) and another for insides (red, sour, pain, dread). Translating between the two has resisted every philosopher and neuroscientist who has tried, including the ones who insist the translation has already been performed.

Notice that consciousness and intelligence are different problems. The conflation between them haunts every discussion of artificial systems and most discussions of animal mind, but the two pull apart cleanly under analysis. A nematode worm called Caenorhabditis elegans has three hundred and two neurons in its hermaphrodite form. John White and his collaborators mapped the complete wiring diagram of those neurons in 1986 in Philosophical Transactions of the Royal Society B, the first connectome ever produced, and we still do not know whether the worm experiences anything as it moves through its agar dish. It solves no problems we would call intelligent. It may or may not have an inside. The question is genuine and unresolved. At the other extreme, a chess engine such as Stockfish defeats grandmasters on consumer hardware while almost surely experiencing nothing at all. Intelligence and consciousness coincide in humans because evolution braided them together. They remain conceptually independent, and a theory of one does not deliver a theory of the other.

This independence has consequences for the question of machine consciousness. Whether current artificial systems experience anything depends entirely on which theory of consciousness one accepts, and the field has produced no settlement. Giulio Tononi’s Integrated Information Theory holds that large language models almost surely lack experience, since their feedforward transformer architecture produces low integrated information compared to biological brains, which support dense recurrent integration across cortical and subcortical structures. John Searle’s biological naturalism rules out silicon consciousness regardless of behavior, on the ground that experience requires the specific causal powers of neurons. Daniel Dennett denied that phenomenal consciousness exists in the way introspection suggests, which dissolves the machine question before it can be posed. McGilchrist’s panpsychism takes consciousness to be present everywhere already, making the relevant issue degree of integration, with presence or absence settled in advance.

The phrase “AI conscious in the human way” presumes a settled definition of human consciousness that neuroscience has not produced. The phrase “AI conscious in the scientific way” presumes a measurement protocol that does not exist. Both phrases conceal the absence of foundations. The honest position holds that we cannot answer the artificial intelligence consciousness question because we have not yet answered it for the species we know best.

Now to McGilchrist. His argument has a clear structure worth laying out before evaluation. He claims that emergent materialism faces an unanswerable difficulty: consciousness cannot pop into existence from non-conscious matter because the two are categorically different in kind. He concludes that consciousness must have been present at every level of organization from the start. Matter, on this view, is a phase or mode of consciousness rather than its source. Water has phases, he points out, and the phases differ wildly from one another while remaining continuous in substance. Vapor floats invisible through the room. Liquid runs across the hand. Ice can split a skull. They share a single chemistry while presenting three different faces to experience. Consciousness, McGilchrist proposes, has many phases as well, and matter is one of them. What matter contributes to the arrangement is persistence, the temporal stability necessary for any creation to take hold.

The position places McGilchrist in a long lineage. Heraclitus and Spinoza and Leibniz read this way, in different keys. Alfred North Whitehead built a process philosophy on related foundations in the 1920s and gave it monumental expression in Process and Reality in 1929. Bertrand Russell spent his later decades arguing for a form of monism that anticipates current panpsychist positions. The strongest contemporary statement remains Galen Strawson’s 2006 essay “Realistic Monism: Why Physicalism Entails Panpsychism,” published in the Journal of Consciousness Studies, which argues that any materialism worthy of the name must conclude that the fundamental constituents of reality already carry experiential properties, since no plausible mechanism can manufacture experience from its complete absence. Philip Goff at Durham has developed the position further in Galileo’s Error and elsewhere. David Chalmers, who named the hard problem, has moved toward a panpsychist or near-panpsychist position in his recent work. McGilchrist’s argument therefore participates in a serious revival, with credentialed defenders working in major universities.

Where his case works, it works for these reasons. The argument is effective because it confronts the hard problem directly rather than dissolving it through redefinition. It is effective also because emergence as usually invoked smuggles in a miracle, the moment when arrangements of unfeeling stuff start to feel something, and that moment has never been mechanistically described, only stipulated. A further strength: evolutionary biology demands continuity, and there is no clean point on the phylogenetic tree where consciousness could have begun without ancestors already carrying its seed. The view earns additional power because granting matter an inwardness coordinates with the strangeness physics has discovered at the bottom of things, where particles refuse to behave like the small marbles classical intuition expects. Finally, the position returns to philosophy a question the twentieth century tried to retire by stipulation, restoring inquiry to a region long policed by silence.

The case carries serious weaknesses, however, and any honest reader should press them. The water analogy, attractive as it sounds, does more rhetorical work than logical work. We understand the phases of water through molecular kinetic theory, hydrogen bonding behavior, temperature and pressure thresholds, and a mathematics that predicts when ice becomes liquid and liquid becomes vapor. McGilchrist offers no analogous mechanism for the phase transition between consciousness as such and consciousness as matter. Calling matter a phase of consciousness names the relation he wants without explaining how the relation operates. A defender will respond that the analogy is meant as heuristic provocation, not as proof, and the response has merit. The trouble is that the heuristic ends up bearing the weight of the central claim. When the only support for the move from “consciousness is fundamental” to “matter is a phase of consciousness” is the suggestiveness of an analogy whose underlying physics he cannot match with a corresponding metaphysics, the argument has not yet earned the assent his prose invites.

The deeper trouble for any panpsychism is the combination problem, identified by William Seager in his 1995 paper in the Journal of Consciousness Studies and developed extensively since. If subatomic particles each carry a tiny inwardness, how do those inwardnesses combine to produce the unified field of human experience? Your primary visual cortex (V1) contains roughly one hundred and forty million neurons in a single hemisphere, each composed of trillions of atoms. If each atom carries its own micro-experience, why does your conscious moment arrive as one thing instead of as a swarm of separate experiences fighting for attention? William James raised the worry in 1890 in The Principles of Psychology, observing that private minds do not agglomerate into a higher compound mind no matter how many of them you assemble. Seager named the difficulty and panpsychists have argued about it ever since, with no settled answer.

McGilchrist does not resolve the combination problem, though he engages it in The Matter With Things. The defenses available to him are real but expensive. Cosmopsychism reverses direction and treats the universe as the fundamental conscious entity, with individual minds as aspects or fragments of it; this avoids combination by starting from the whole, at the cost of explaining how unity divides into apparent multiplicity. Russellian monism treats both physical and experiential descriptions as descriptions of the same underlying reality; this avoids dualism while inheriting the explanatory burden under a new name. Each move trades one difficulty for another, and the trade may be improvement, though calling it solution would overstate what the literature has accomplished.

The argument from incommensurability also cuts both ways, which McGilchrist’s framing tends to obscure. He says consciousness is utterly different from anything in our outward view of matter and uses this asymmetry to deny that matter could give rise to consciousness. Run the argument in the opposite direction. Matter is utterly different from anything in our inward view of consciousness, which should make us equally skeptical that consciousness gives rise to matter. The asymmetry he asserts requires an independent defense he does not provide. If the categories are genuinely incommensurable, neither can be the source of the other, and we are back where we started.

The empirical content of attributing experience to electrons deserves examination as well. Thomas Nagel coined the phrase “something it is like to be” in his 1974 paper “What Is It Like to Be a Bat?” published in The Philosophical Review. He used the formula to identify consciousness phenomenologically in creatures whose behavior gave us evidence of an inner perspective. The bat’s echolocation, its social behavior, its responses to threat and food and mate, all suggest a creature for whom things are some way. Extending the formula to electrons strips it of the evidential ground that made it useful. The claim cannot be falsified, tested, or even meaningfully investigated. A hypothesis that explains everything by stipulation explains nothing, since a hypothesis earns its keep by ruling things out, and one that rules nothing out earns no keep at all.

A further difficulty deserves mention. McGilchrist writes that “the only reasonable explanation is that consciousness was there all along.” This overstates the consensus considerably. Several live alternatives remain serious in contemporary philosophy of mind. Keith Frankish’s illusionism argues that phenomenal consciousness as commonly described does not exist, and that introspection systematically misrepresents what cognition is doing. Bernardo Kastrup’s analytic idealism inverts McGilchrist’s framing entirely, treating matter as appearance within a single field of mind, with the direction of dependence reversed. Terrence Deacon’s emergentism argues in Incomplete Nature (2012) that genuine novelty can arise from constraint and absence, particularly through the negative work of what he calls absentials, in ways that do not require pre-existing inwardness. Each position has serious defenders. The field is contested, and McGilchrist’s certainty exceeds his evidence.

Return now to the question of artificial intelligence with these considerations in hand. The honest answer is that we do not know whether current systems experience anything, and we will not know until we have a theory of consciousness that survives confrontation with cases beyond the one we can verify by introspection. Should McGilchrist prove correct and consciousness reach everywhere, then large language models carry some form of inwardness already, though whether their inwardness combines into a unified perspective is a separate question panpsychism does not automatically answer. Integrated information theory gives the opposite verdict: current architectures fall well below the threshold required for any but the most rudimentary phenomenal states. Illusionism dispenses with the question altogether, calling it malformed and observing that the human case also lacks the inner light we imagine for ourselves. The discussion proceeds in public as though one of these positions had been established, when in fact none has. Anyone who tells you with confidence that the machines are conscious, or that they are not, is selling you a metaphysics dressed as a measurement.

What survives the analysis is a discipline of attention. McGilchrist gets several things correct. The hard problem is real, and emergence has too often been treated as an explanation when it has functioned as a placeholder for one. Consciousness does not look like anything in our outward picture of matter, and that asymmetry should trouble anyone who thinks the picture is complete. The resolution may indeed lie in recognizing inwardness as foundational rather than derivative. None of this proves the case, however, and the strength of his prose can cover the weakness of his proofs if the reader reads carelessly. The water analogy moves the argument forward by ear rather than by reason. His dismissal of alternatives is faster than the alternatives deserve. The combination problem waits beneath the structure like water under a foundation, ready to undermine it if not addressed.

For our purposes here, the practical implication is this. Consciousness remains the largest unsolved question in our intellectual inheritance. Every available theory carries serious unresolved difficulties. The artificial intelligence question cannot be answered until the human question is answered, and we should distrust anyone who pretends otherwise. McGilchrist’s intervention is valuable as provocation and as a sample of one serious tradition, and worthwhile as a doorway into a room the twentieth century preferred to keep locked. The room behind it is stranger than any single thinker has yet mapped, and the work of mapping it has barely begun.

We assume the inwardness of others because we cannot live without doing so. Whether the assumption reaches all the way down to the electron or stops somewhere between the worm and the stone is a question we will be working on for as long as we remain capable of asking it. McGilchrist has done us the favor of refusing to let the question close. The honest reader returns the favor by refusing to let his answer close it either.

The cogito grants us one certainty and exactly one. Everything else we believe about minds beyond our own rests on inference, sympathy, behavioral analogy, and the practical impossibility of a solipsist life. To call this a foundation is to flatter what is in fact a working assumption that has never been proved and may never be. The honest scholar lives with this and keeps reading. An honest writer says it out loud. The dishonest move, in either direction, is to claim the question is settled when the question has barely begun to be asked properly.

Part one of three. For the full sequence and reading guide, see The Consciousness Trilogy: Reading Three Wagers on the Question We Cannot Settle.

#chalmers #consciousness #dennett #emergentism #galileo #heraclitus #knowing #leibniz #mcgilchrist #meaning #nagel #panpsychism #philosophy #psychology #relationalFoundations #spinoza #strawson #whitehead

The Cognitive Bargain Has Ended: A Generation Born Without Comparative Advantage

The claim circulating in policy papers, venture capital essays, and parental anxiety threads runs like this: no child born this year will grow up to be smarter than artificial intelligence. The line gets used as a slogan, which is the first sign it deserves examination. Slogans that move easily through dinner parties usually carry hidden machinery. The machinery here is a definition of intelligence narrow enough to fit on a benchmark and broad enough to terrify a parent. Both functions are intentional, and both deserve to be unbundled before the consequences can be argued honestly.

A six-year-old can pour milk without spilling, recognize her grandmother by the sound of her walk on the stairs, and read her father's mood from a quarter-second facial flicker before he speaks. No current AI does any of these reliably, which is why the warehouse, the construction site, and the elder-care ward continue to employ humans at rising wages while law firms cut their summer associate classes. What machines do well, with present technology, is symbol manipulation at scale: text, code, formal reasoning, pattern completion across enormous corpora of written human output. The honest version of the claim is narrower than the slogan and still consequential. No child born this year will outperform machines at symbol manipulation, retrieval, or formal reasoning across most of the tasks that currently pay a salary in an office. The slogan compresses that into a panic, which is bad rhetoric and bad policy, and the underlying observation remains true. What follows from the observation is the actual subject of the analysis below.

The Credentialed Class Loses Its Logic

The first casualty is the credentialed professional class, roughly the top 20 percent of American earners by household income. This stratum organized itself across the twentieth century around cognitive screening. The SAT in 1926, refined through the GI Bill expansion. The LSAT in 1948. The MCAT in its modern multiple-choice form in 1962. The USMLE consolidated in 1992. Each gate selected for a particular form of paid cognition: rapid pattern recognition under time pressure, short-term retention of densely structured information, formal reasoning across domain-specific symbol systems. The gates were effective because the cognitive work they screened for was scarce, expensive to develop, and economically valuable.

Three conditions held the system together. Scarcity was the first: only humans could perform the cognitive work, and only some humans, after long training. Expense was the second: the training cost time and money and required institutional infrastructure no individual could replicate. Value was the third: the market rewarded the work because nothing cheaper could produce equivalent output. All three conditions are now eroding simultaneously. A subscription that costs less than a Manhattan dinner produces legal memos, differential diagnoses, and tax planning at a level competent enough to embarrass the junior tier of every paid profession.

Embarrassment falls short of replacement. The senior partner still signs the brief. The attending physician still admits the patient. The accounting principal still files the return. What has collapsed is the economic logic of the apprentice tier, the rung at which young people once learned the trade by performing the work that AI now performs faster and at a thousandth of the cost. Without the apprentice tier, the senior tier has no successors, and the senior tier itself ages out within twenty years. The professions are not being replaced. They are being denied a generation, which is the same outcome on a longer clock.

The lawyer keeps courtroom presence, client relationship, and signature liability. For the doctor, what survives is touch, witness, legal accountability, and judgment under stakes. The architect’s irreducible work happens in the kitchen, in conversation with the homeowner about how the family actually lives. Three of those four functions are not why medical school costs $300,000. The training, the credentialing, the expensive cognitive certification, was effective because it produced the rare commodity. When the commodity is no longer rare, the price of training cannot hold. Either tuition collapses, which would gut the universities that have leveraged themselves on that revenue, or graduates default on debt for credentials that no longer command premium wages. Both outcomes are visible in early data. Neither has yet been admitted by the institutions whose survival depends on denying it.

The same compression is hitting working-class employment, particularly in transportation, customer service, and routine clerical work, and the human stakes there are larger in absolute terms. The reason this analysis concentrates on the credentialed class is that this class produced and sustained the public sphere through which the broader transition will be argued, named, and contested. When that class loses its grip on its own coherence, the conversation about every other displacement becomes harder to organize.

The Parental Project Loses Its Currency

The second consequence is psychological and reaches beyond economics into the structure of family life. American parenting in the educated class has run for at least three generations on a transmission model. Cultivate the child’s mind, secure the child’s place. The cultivation produced status, the status produced security, and the bargain held because each generation could roughly verify the prior one’s judgment. A father who tutored his daughter in algebra in 1995 watched her, twelve years later, take a meeting with someone who had been tutored similarly by similarly anxious parents. The investment paid out in a recognizable currency.

The currency has been redenominated without warning. A father in 2026 watches his daughter receive better tutoring, free, from a machine that has read every algebra textbook ever written and never tires. The democratization is real and worth celebrating. The disappearance of his comparative advantage is also real, and both arrive on the same Tuesday. He had counted on that advantage. Greed had nothing to do with it. The entire architecture of middle-class American parenting had encoded the cognitive premium as the path, and he was a competent parent walking the path his own parents had walked. The consolation that “my child will think for a living” has lost its meaning. What replaces it has not arrived. The vacuum is producing the parental anxiety that fills bookstores, podcast feeds, and pediatric psychiatry waiting rooms, and producing it faster than the helping professions can absorb the demand.

The School System Confronts Its Cover Story

The third consequence runs through the school itself. American schooling has carried at least four functions through the twentieth century: childcare for working parents, social formation, cognitive training, and credentialing for the labor market. The cognitive training and credentialing functions are the two AI most directly displaces, and they happen to be the two schools advertise in their mission statements as the reason for existing. Childcare and social formation remain, untouched and irreplaceable, and no school district raises a tax levy on those grounds.

The honest reckoning is one administrators are not yet willing to give. We run schools mostly to keep parents working and to teach children how to negotiate the social geometry of a room full of other children. The cognitive content has always been somewhat ornamental, a respectable cover story for an institution whose deeper functions were custodial and socializing. AI is forcing the cover story to retire. At least a decade of denial will follow. Curriculum committees will add “AI literacy” units that are structurally indistinguishable from the typing classes of 1985, the computer lab visits of 1995, and the laptop initiatives of 2010, each of which functioned as institutional reassurance rather than pedagogical substance. After the denial, a slow and reluctant rewriting of mission statements will move toward something more honest about what schools actually do, which is gather children safely while their parents earn a living and teach them to sit in rooms with people they did not choose. Both functions are valuable. Neither justifies the per-pupil expenditure of the current system, and the public will eventually discover that the math no longer works.

The Political Bargain Loses Its Foundation

The fourth consequence is political and may be the most important one in the medium term. Technocratic liberal democracy, the regime under which most readers of this essay have lived their entire lives, rested on a quiet bargain. Experts would govern the complicated parts. Voters would govern the simple parts. The experts held position because they knew more than the voters, and the voters tolerated the experts because the system, on average, delivered rising material conditions. The bargain frayed before AI arrived, evident in the populist movements of the past fifteen years, but AI removes the bargain’s foundation outright. If a machine knows more than the expert and the voter alike, the expert has no remaining claim that distinguishes her from any other citizen. She becomes one more citizen with opinions. The voice of trained competence has gone elsewhere, into the model and the dataset, where no human can claim it as her own.

Two political responses follow, and both are visible in the present. The populist response decides that if no human is more qualified than any other, then will, identity, and tribal allegiance settle the question. This is the shape of politics in much of Europe, the Americas, and parts of Asia at the moment of writing, and the authoritarian movements within that response are gaining institutional ground rather than losing it. The technocratic response in a new key hands the decisions to the machine itself, which is the direction parts of finance, military targeting, and judicial sentencing are already moving. The first response sustains the form of democracy while emptying its substance. The second response abandons even the form. Neither response preserves democratic self-rule as the founding generations understood it, and there is no third response visibly forming. The honest political forecast is that what we have called liberal democracy will continue to use its old vocabulary while operating on different machinery, and the gap between the vocabulary and the machinery will widen until the vocabulary collapses, probably within a generation. Whether the collapse opens onto a new democratic form or onto its successor is the open question of the next twenty years.

The Cultural Layer Has Absorbed Shocks Like This Before

The fifth consequence is cultural and harder to predict, because culture has absorbed previous shocks of this kind. Photography arrived in 1839 and was widely expected to end painting. Painting survived by abandoning the territory photography claimed and inventing impressionism, then cubism, then abstraction. Recorded music arrived around 1900 and was expected to end live performance. Live performance survived by becoming an experience economy where presence, not fidelity, was the product. Chess engines surpassed human grandmasters in the late 1990s and were expected to kill the game. Online chess is now larger than at any point in its history, with more humans playing more games against more opponents than the pre-engine era could imagine.

The pattern across these examples is consistent. Mechanical reproduction shifts the value of the human version from product to presence. A handmade chair is no longer a better chair than a factory chair, and it costs ten times more, because the value lives in the maker’s hand and the buyer’s relationship to it. Live theatre does not compete with film on visual spectacle and does not need to, because the live audience pays for the breath in the room. Human writing, if AI writing becomes competent and ubiquitous, will likely become a luxury good signaling effort, time, and personal stake. The author’s life will count for more, and the work without an author behind it will lose value as it becomes plentiful. Whether that economy supports as many writers as the previous one is a separate question, and the answer is no. The professional middle of the writing trade, the working journalist, the staff editor, the workmanlike novelist, will thin out. The top will hold and the amateur base will expand. The middle was always the most vulnerable layer in any cultural economy, and AI accelerates a contraction that began with the collapse of newspaper revenue around 2007.

The Counter-Case Worth Holding

A counter-case deserves to be kept in view, because the foregoing analysis can slide into a fatalism the evidence does not support. Intelligence, as humans have meant the word for most of recorded history, has always carried more than symbol manipulation. The fuller meaning includes desire, mortality, embodiment, the capacity to lose, the capacity to refuse. A chess engine plays better chess than any human and cares about nothing. A writing engine produces fluent prose and risks no humiliation when the prose fails. The child born this year will live in a body that ages, will love people who die, will choose between options under genuine uncertainty about her own future, will know what it is to be afraid without being shut down for it. All of that registers as full-weight human activity, equal in importance to whatever the machine produces. The category is different from symbol manipulation, and the question of which category we will continue to honor with the word intelligence is a political question more than a technical one. The answer will be settled by what the courts protect, what the schools teach, what the markets pay for, and what the surviving institutions of self-government decide to defend.

The Hardest Truth

The hardest truth, the one this site has been documenting across a decade of work on institutional collapse, is that societies do not adjust gracefully to shifts of this size. Institutions built on one logic do not refactor themselves when the logic changes. They hollow out, keep their letterhead, draw their salaries, and lose their function while everyone with standing to name the loss benefits from its concealment. The American university, the credentialing professions, the editorial gatekeepers of the legacy press, the expert commentariat on broadcast television, each is running on borrowed legitimacy at this moment. None of these institutions will announce its own obsolescence. Each will continue to charge tuition, bill hours, issue credentials, and accept underwriting for some years past practical relevance, then collapse when a critical mass of clients notices they have been paying for what is now free.

The collapse will look like the late stages of American public broadcasting documented in the third volume of the Institutional Autopsy trilogy: a long, dignified fade that no one with authority is willing to name in real time, followed by a sudden insolvency event that surprises no one in retrospect. The next fifteen years will involve a generation-long restructuring of who has standing to speak, who deserves to be paid, and what humans are for once the symbol work has been outsourced. Some of that restructuring will be fair. Much of it will be brutal. Almost none of it will be planned, because the institutions in best position to plan are also the institutions with most to lose by acknowledging the situation.

What Is Left for the Child

The children in question will inherit the result without having known the previous arrangement. They will not mourn what they never had. That is the only mercy on offer, and it is offered only to them. The rest of us, who knew the cognitive bargain when it functioned and built our lives on its assumptions, will spend the remainder of our working lives attending its funeral while pretending it is still in business. The pretense will be socially mandatory, professionally protective, and personally corrosive.

The honest response is to name what is happening, refuse the pretense, and locate value where it is actually moving, which is into presence, judgment, embodiment, and the kind of human authorship that machines cannot fake because they have no stake in the result. The child born this year, if she is lucky, will grow up in a world that has finished the funeral and started building the next thing. The question is whether her parents and grandparents can endure the funeral with enough dignity to leave her something to build on.

#ai #brain #child #cognitive #credentials #culture #knowing #logic #mind #parenting #politics #schooling #tech #truth #writing

A quotation from Horace

To know all things is not permitted.
 
[Nec scire fas est omnia.]

Horace (65–8 BC) Roman poet, satirist, soldier, politician [Quintus Horatius Flaccus]
Odes [Carmina], Book 4, # 4, l. 22 (4.4.22) (c. 13 BC)

More about (and translations of) this quote: wist.info/horace/1952/

#quote #quotes #quotation #qotd #horace #comprehension #divinelaw #hubris #humannature #ignorance #information #knowing #knowledge #limitation #meme #prohibition

Sontag’s Two Doors, Campbell’s Underworld

In a television interview that has circulated for years, Susan Sontag offers a small theory of storytelling. She points out that the English word “story” carries a double valence. We say “tell me the real story” to demand truth, and we say “that’s only a story” to dismiss invention. Stories, she argues, face two directions at once, toward fact and toward fantasy, and this doubleness sits at the center of what stories do.

The observation is correct as far as it travels, and the format of a televised exchange does not give a thinker of Sontag’s caliber room to develop the qualifications she would have written into print. Sontag is reliable on the surface phenomena. The deathbed scene she describes, where family secrets surface around mortality, is psychologically accurate. Her returning voyager who brings news from elsewhere is one of the oldest functions of narrative, traceable from Odysseus through Marco Polo and Mary Kingsley to the embedded war correspondent. We are also gripped, as Sontag says, by stories precisely because they describe what cannot happen. Readers of Kafka know Gregor Samsa did not wake as an insect, and that knowledge intensifies the story’s force.

Where Sontag falters is in locating this doubleness at “the very center of the whole enterprise of storytelling.” The tension she identifies is a feature of post-Enlightenment English usage. Other languages partition the territory differently. German separates Geschichte from Erzählung, the chronicle from the tale. Ancient Greek separates mythos from logos and historia. Sanskrit holds itihasa, the account of what happened, distinct from purana, the ancient telling. Yoruba oral tradition separates itan, the sacred and ancestral narrative, from àló, the entertaining household tale. The ambiguity Sontag treats as constitutive is partly an artifact of English vocabulary collapsing distinctions that other tongues hold apart. To say storytelling faces two directions, truth and lie, is to inherit a Cartesian frame that pre-modern peoples would have found alien to the question.

This is exactly where Joseph Campbell would intervene. For Campbell, the truth-versus-fiction axis was a symptom of modern literalism, useful for tracking what one cultural moment had lost but useless for explaining how myth operates. Drawing on Jung and on comparative anthropology, he argued that stories carry psychological reality independent of historical reality. The hero’s descent to the underworld, the dying and rising god, the trickster who exposes the king, these belong to a third register that Sontag’s binary cannot accommodate. They register as neither historical claim nor fantasy opposed to fact. As Campbell argued throughout his career, mythology is what we call other people’s religion, and he was pointing at the failure of the truth/lie axis to capture what religious narrative does for those who live inside it.

Campbell would likely call Sontag’s voyager model one motif among several, including myths of descent, metamorphosis, cosmogony, and trickster disruption, while also insisting that the voyager holds special centrality because it externalizes the interior process by which the soul ventures into the unconscious and returns with knowledge. He traced this structure from the shamanic vision quest through Joyce’s Ulysses into the popular cinema of his late life, and his reading of Star Wars as a contemporary monomyth was either his most generous gift to popular culture or his most embarrassing capitulation to it, depending on which scholar you read. Maureen Murdock’s challenge to the male hero’s quest, developed in The Heroine’s Journey in 1990, sharpened the critique that Campbell’s pattern was less universal than his rhetoric implied. Robert Ellwood in The Politics of Myth and Brendan Gill in The New York Review of Books raised harder questions about Campbell’s politics and his unguarded private writings, and those critiques have not been resolved by his admirers so much as set aside.

Even granting those qualifications, Campbell’s instinct about register stands. He saw that stories carry meaning along a vertical axis, downward into the unconscious and upward into shared cultural reference, and the truth/lie binary slices that axis horizontally and loses the depth.

Saul Kripke offers a second escape from Sontag’s binary, arriving from a tradition Campbell never engaged. In his John Locke Lectures delivered at Oxford in 1973 and published as Reference and Existence in 2013, Kripke extended the rigid-designator theory of his Naming and Necessity to fictional and mythological names, arguing that such names refer to abstract objects brought into existence by the storytelling act itself. The name “Odysseus” refers, in Kripke’s account, to a fictional character: an abstract artifact created by Homeric composition and sustained by every subsequent reader and translator who has carried that reference forward. Kripke gives storytelling a creative-ontological power Sontag’s truth/fiction frame cannot register. Two traditions sharing almost no methodological vocabulary, depth psychology and analytic philosophy of language, arrive at the same conclusion: the truth/lie axis fails because storytelling produces a third class of object the axis cannot measure.

There is a temperamental and political difference between Sontag and Campbell worth naming directly. Sontag wrote in the long aftermath of the Holocaust and the Cold War, suspicious of any totalizing narrative. She had watched fascism weaponize national myth in Germany and Italy, and her caution reflects that experience honestly. Campbell was an American comparativist working in the wake of Frazer and Jung, drawn to pattern across cultures, and his posthumously published journals raised real questions about his political instincts. Sontag’s suspicion functions as a corrective against political weaponization. Campbell’s pattern recognition functions as recognition of common structure across cultures that have never met. The disagreement between them is genuine and should not be smoothed over for the comfort of synthesis.

My position is partial agreement with Sontag and deeper agreement with the Campbell answer she did not stay alive long enough to receive. The truth/fiction ambiguity she describes belongs to modern Western reading habits and shows up wherever those habits travel. The deeper question of what narrative does across cultures requires a different lens. Campbell goes closer to the bone when he asks what stories do across human societies, treating function as the proper unit of analysis, which lets him see patterns Sontag’s frame keeps hidden. Stories organize experience, transmit pattern across generations, rehearse mortality, model possible selves, and bind communities through shared reference. Whether the events “really happened” is a question that stories themselves typically dissolve, which is why we still read Homer and the Book of Job long after their cosmologies have been falsified.

The synthesis Sontag misses, Campbell only gestures toward, and Kripke names from a third direction is that stories operate at multiple registers simultaneously: as durable structures of consciousness, as historically situated cultural artifacts, and as creators of abstract reference objects that take on real life within communities who carry the names forward. The Odyssey is psychologically accurate about return and recognition, it is a specific Bronze Age Greek text carrying specific class and gender assumptions, and it brought “Odysseus” into existence as a name that refers to something real, even if not historical. Collapsing any of these registers into another impoverishes the reading. Sontag’s caution prevents the first kind of collapse, where myth becomes a timeless template that erases the particular hands that made the particular text. Campbell’s depth prevents the second kind of collapse, where a poem becomes a museum object emptied of the psychological force it still exerts on readers who pick it up. Kripke prevents a third collapse altogether, the one in which storytelling is denied its world-making authority and reduced to description of things that already exist. None of the three alone reaches the full target.

What Sontag could not see from the angle of her camera is that the voyager she names as one model among many is the externalization of the tension she places at the center of storytelling. The voyager who returns with news is also the dreamer who returns from the underworld. The bringer of facts and the bringer of vision occupy the same archetypal position, which is why storytelling moves along a single descending axis with truth and invention braided together at the bottom of the well. Sontag stopped at the doorway. Campbell walked down the stairs.

#books #campbell #comparison #culture #knowing #kripke #lies #meaning #myth #naming #sontag #stories #storytelling #truthtelling #voyager
"Knowing Bros" Adds Female Cast Member For The First Time, One Original Cast Member Takes Break - KpopNewsHub – Latest K-Pop News, Idols & Korean Entertainment

A comedian has become the first female regular member of Knowing Bros, while an original member since the show’s inception, will be stepping aside temporarily.

Kpop News Hub
Watch: Sung Han Bin, Lee Gikwang, Soyou, And Sandeul Feel A Generation Gap In "Knowing Bros" Preview - KpopNewsHub – Latest K-Pop News, Idols & Korean Entertainment

Get ready to see idols from more than one generation come together on the next episode of JTBC’s “Knowing Bros” (“Ask Us Anything”)!

Kpop News Hub
Thank God for praying mothers and for #Christian conferences. #testimony #supernatural #knowing #prophet

Below the Mesh

The light year is a bookkeeping unit that has been promoted, by repetition and by the poverty of better language, into a cosmic speed limit. Both halves of that sentence are wrong in slightly different ways. A light year measures the distance a photon covers in the time Earth takes to complete one orbit of the Sun, and it measures that distance against the stage on which photons and Earths and Suns appear. We treat that stage as the bedrock of reality because every instrument we have ever built reports back from inside it. Our instruments cannot, by their nature, report from anywhere else. A fish with sophisticated sonar maps the reef in exquisite detail and concludes the reef is all there is. The water is invisible because the water is the medium of seeing.

Physics has been quietly telling us for about thirty years that we are the fish. The reef is spacetime. The water is something else.

Start with what general relativity gets right, because any honest argument has to begin there. Einstein’s field equations predicted the bending of starlight around the Sun in 1919, the slow precession of Mercury’s orbit, the precise timing signals that let a phone in a pocket know where it stands on the planet’s surface, the gravitational waves LIGO caught in 2015 from two black holes colliding a billion light years away, and the shadow of the supermassive black hole at the center of M87 that the Event Horizon Telescope imaged in 2019. No theory in the history of science has paid off more predictions with more accuracy. General relativity is correct about what it describes.

Notice the hedge. General relativity is correct about what it describes. It describes spacetime as a smooth four-dimensional manifold with curvature determined by mass and energy. The question of where that manifold comes from, or what it is made of, sits outside the theory’s jurisdiction, and Einstein himself acknowledged as much. His equations assume the stage and then tell you how the stage bends. They offer no theory of the stage.

This is the crack through which everything interesting is currently flowing.

Juan Maldacena, working at Harvard in 1997, published a paper that is arguably the most important theoretical physics result since the Standard Model. He showed that a particular kind of gravitational universe, one with a specific negative curvature called Anti-de Sitter space, is mathematically equivalent to a quantum field theory living on its boundary. Everything that happens in the volume can be reconstructed from information encoded on the surface. Gravity, in this setup, stops being fundamental and becomes a holographic projection of something simpler happening in lower dimensions. The interior of the universe is a rendered image. The pixels sit on the edge.

Mark Van Raamsdonk, at the University of British Columbia, took Maldacena’s correspondence and pushed it somewhere Maldacena had not. In a 2010 essay that won first prize in the Gravity Research Foundation contest, Van Raamsdonk showed that if you dial down the quantum entanglement between two regions of the boundary theory, the corresponding regions of spacetime in the interior pull apart. Reduce the entanglement further, and the spacetime between them thins, stretches, and finally tears. Spatial distance, in this picture, is a measurement of how strongly two regions of the underlying quantum substrate are entangled with each other. The gap between Earth and Andromeda functions as a readout rather than as an empty stretch of pre-existing room. It is what weak entanglement looks like when the universe renders it as geometry.
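Van Raamsdonk's claim can be cartooned in a few lines of code. This is emphatically not his derivation, which lives in the machinery of holographic entanglement entropy; it is a toy model under one loudly labeled assumption, that mutual information between boundary regions falls off roughly exponentially with bulk separation, so a rendered distance can be read as d = -ξ ln E, where E is a normalized entanglement strength and ξ is a hypothetical correlation length:

```python
import math

def rendered_distance(entanglement: float, xi: float = 1.0) -> float:
    """Toy readout (NOT Van Raamsdonk's math): map a normalized
    entanglement strength E in (0, 1] to a cartoon geometric distance,
    assuming d = -xi * ln(E) with hypothetical correlation length xi."""
    if not 0.0 < entanglement <= 1.0:
        raise ValueError("entanglement must lie in (0, 1]")
    return -xi * math.log(entanglement)

# Dial the entanglement down and watch the rendered gap grow;
# as E -> 0 the distance diverges, the toy version of spacetime tearing.
for e in (1.0, 0.5, 0.1, 0.01, 1e-6):
    print(f"E = {e:<8g} ->  d = {rendered_distance(e):.2f}")
```

The only point the sketch makes is directional: weaker entanglement renders as more distance, and zero entanglement renders as no spacetime between the regions at all.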

A careful reader will note that the mathematics of this correspondence is most securely established in universes with a negative cosmological constant, whose geometry is called Anti-de Sitter space. Our universe has a positive cosmological constant, which makes it de Sitter-like. Whether the same entanglement-as-geometry relationship carries over into the kind of cosmos we actually live in is one of the most active open questions in the field. Most theorists working on the problem believe the principle generalizes. Nobody has yet proven it, and the first person who does will earn a Nobel Prize within the decade.

Leonard Susskind at Stanford and Maldacena again, in 2013, proposed that this goes further still. Their ER=EPR conjecture argues that any two particles that share quantum entanglement are connected, at the substrate level, by a microscopic wormhole. The entanglement is the wormhole, seen from the rendered side. The wormhole is the entanglement, seen from the substrate side. They are the same object described in two languages.

Sit with what this means. If the geometry we measure is an output rather than an input, then the speed of light is a property of the output layer. It is the maximum rate at which information can propagate through the rendered image. Nothing in the substrate logic requires that the shortest path between two points in the image correspond to the shortest path between the data that produced them. Two pixels on opposite edges of a screen can sit adjacent on the memory bus behind the screen. The cable does not run across the glass.
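The screen analogy is concrete enough to compute. In a row-major framebuffer, the pixel at the far right edge of one row and the pixel at the far left edge of the next row are maximally separated on the glass and adjacent in memory; the two metrics simply disagree about the same pair of points. The dimensions below are arbitrary:

```python
# A row-major framebuffer: pixel (row, col) lives at linear address
# row * WIDTH + col. On-screen distance and in-memory distance are
# different metrics over the same set of pixels.
WIDTH, HEIGHT = 1920, 1080

def address(row: int, col: int) -> int:
    return row * WIDTH + col

a = address(10, WIDTH - 1)   # far right edge of the glass
b = address(11, 0)           # far left edge of the glass, one row down

print(b - a)        # 1: adjacent on the memory bus
print(WIDTH - 1)    # 1919: pixels apart across the screen
```

Nothing about the rendering rule forces rendered adjacency and substrate adjacency to coincide, which is the whole point of the analogy.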

This is where the writer in me wants to stop being careful, because careful has produced about a century of stalemate on the interstellar question, and careful is not what the moment calls for.

Here is the argument I want to make. The reason we have found no aliens, the reason the sky is quiet when statistics suggest it should be noisy, is that any civilization clever enough to cross the gulfs between stars figured out the gulfs were not the real problem decades into their investigation. Such a species stops building faster rockets. The engineering attention shifts toward entanglement itself. What looks from our side like interstellar travel, to them, becomes an exercise in editing the source code of distance.

I cannot prove this. I can argue it is consistent with what the physics now permits. A civilization that learned to manipulate the entanglement structure of its local vacuum would not need to cross four light years to reach Proxima Centauri. It would rewrite the entanglement between here and there and reduce the rendered distance. Travel becomes a matter of reconfiguring the substrate relationship between the departure point and the destination. The ship does not move. The geometry moves around the ship, because the geometry was always a consequence of a deeper relational fact that the civilization has learned to set directly.

That leap is not a small one and deserves to be named. Reading the entanglement structure of the vacuum is a measurement problem that current physics is making genuine progress on. Writing to that structure with enough precision to change a macroscopic distance is a different problem, and nothing in the current mathematics guarantees the two are connected by a practicable engineering path. My argument is that the theoretical door exists, which is a stronger claim than it was in 1990 and a weaker claim than saying a key has been cut.

This is not quite the Alcubierre drive, though it shares a family resemblance. Miguel Alcubierre’s 1994 paper in Classical and Quantum Gravity showed that general relativity permits a metric in which a bubble of space contracts in front of a ship and expands behind it, carrying the ship between points faster than light without the ship ever locally exceeding the speed of light. The original solution required negative energy densities we cannot produce. Erik Lentz at Göttingen in 2021 and Alexey Bobrick and Gianni Martire that same year published soliton solutions in peer-reviewed journals showing the negative energy requirement could be relaxed or eliminated. The energy budgets in these revised solutions remain astronomical, running from planetary to stellar mass-energy depending on geometry, configured with exotic precision we do not currently know how to impose. The door Alcubierre described is no longer obviously locked. Calling it merely heavy undersells the problem, and calling it locked oversells the physics.

The substrate argument goes deeper than Alcubierre because it does not require you to manipulate the metric from inside the metric. It suggests the metric is downstream of something else, and that something else is where the leverage actually sits. Alcubierre is a clever exploit within the rendered layer. Substrate engineering is a rewrite at the source.

The intergalactic problem forces this issue whether we want it forced or not. The universe is expanding, and the expansion compounds with distance. Every galaxy currently more than about sixteen billion light years from Earth sits beyond the cosmological event horizon: the space between us grows faster than any signal we send today can cross it. That horizon is not an engineering problem, and no rocket, fusion drive, or antimatter drive solves it. Those galaxies are leaving the observable universe in real time, and nothing that respects the rendered geometry can catch them.
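The sixteen-billion-light-year figure is not folklore; it falls out of a short numerical integral over the standard flat Lambda-CDM cosmology. The parameter values below (H0 ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31, ΩΛ ≈ 0.69) are assumed Planck-era numbers, not anything stated in this essay:

```python
import math

# Comoving distance to the cosmological event horizon in flat Lambda-CDM:
#   d_EH = (c / H0) * integral from a=1 to infinity of
#          da / (a^2 * sqrt(Omega_m / a^3 + Omega_Lambda))
# Assumed parameters, roughly Planck 2018 values:
H0 = 67.7                 # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.31, 0.69
C = 299_792.458           # speed of light, km/s
MPC_IN_GLY = 3.2616e-3    # 1 Mpc in billions of light years

hubble_dist_gly = (C / H0) * MPC_IN_GLY   # Hubble distance, ~14.4 Gly

def integrand(a: float) -> float:
    return 1.0 / (a * a * math.sqrt(OMEGA_M / a**3 + OMEGA_L))

# Midpoint rule from a = 1 (today) out to a = 1000; the neglected
# tail beyond that contributes well under 0.1 Gly.
steps, a_max = 200_000, 1000.0
da = (a_max - 1.0) / steps
integral = sum(integrand(1.0 + (i + 0.5) * da) for i in range(steps)) * da

d_event_horizon = hubble_dist_gly * integral
print(f"{d_event_horizon:.1f} billion light years")   # ~16.5
```

A photon leaving Earth today toward anything beyond that radius never arrives, no matter how long it flies, which is why the essay calls the horizon a disqualification of the rendered layer rather than a challenge to propulsion.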

If there is any answer to intergalactic travel at all, the answer lives below the mesh. The rendered layer disqualifies itself. You either work beneath the rendering or you accept that the Local Group is the edge of forever.

I think the rendering can be worked beneath. I think the physics of the last thirty years has been quietly assembling the vocabulary for how. The holographic principle, emergent spacetime, ER=EPR, entanglement geometry, the soliton warp metrics, the loop quantum gravity spin networks that discretize the substrate into countable units of area and volume. What looks at first like a crowd of unrelated speculations turns out, under pressure, to be a single converging picture in which what we call space is a high-level description of something relational, informational, and combinatorial happening at a scale we have not yet learned to address directly.

The skeptic will say this is all mathematical machinery with no experimental handle, and the skeptic is right about the second half of that sentence and wrong about the first. Mathematical machinery without experimental handle is exactly what general relativity was in 1915. It took four years to get the eclipse data that confirmed it. The Higgs boson was a mathematical necessity for forty-eight years before CERN found it. The gap between a coherent theoretical framework and the instrument that tests it runs sometimes a decade and sometimes a century. Silence from the apparatus is a statement about the apparatus, not about the theory waiting for it.

The question I opened with, the one a correspondent put to me, asked whether we are thinking about spacetime correctly or whether we need to change our thinking. An honest answer splits the question in half. We are thinking about spacetime correctly for the layer we live in and incorrectly for the layer that produces it. A light year remains a good unit for measuring our prison. It is a useless unit for describing the door.

Every generation of physics has had to accept that the last generation’s bedrock was someone else’s floorboards. Newton’s absolute space became Einstein’s curved manifold. Einstein’s curved manifold is becoming, in front of our eyes, a holographic projection of an entangled quantum substrate. The pattern is consistent. The bedrock keeps turning out to be a floor. There is always something underneath.

I suspect, and I am willing to say it in public because a blog post is the right venue for saying what a journal article cannot, that the civilizations we have been listening for are silent for reasons that have little to do with their absence. Radio waves are a rendered-layer phenomenon. Any species that figured out the rendering would have had little reason to keep leaking signal through the rendered layer past the week it learned what the water was. The Fermi question has other candidate answers, from the Great Filter to rare-Earth biology to the simulation hypothesis, and each of those deserves the serious treatment it has already received elsewhere. What I am offering is one more candidate, one the physics of the last thirty years has made more plausible than it was in Fermi’s original framing. If we want to find them, we are going to have to learn what they learned. We are going to have to stop asking how fast we can cross the reef and start asking what the water is.

The reef is beautiful. I have spent a lifetime admiring it. The water is where the answers live.

#einstein #galaxy #geometry #gravity #ideas #investigation #knowing #lightYears #math #mesh #newton #science #stalemate #tech #timeTravel #universe #water