평양 City Rockers #407 – Réveil Forcé À New York (01-10-2025)

But where is the world going, O Listeners one and all, where is the world going?!? A poor man who gave himself body and soul to his country is being hounded, dragged through the mud, treated like dirt and thrown into a dungeon like a common crook!

It must be said that the crook in question, very busy saving his country from imaginary threats while saving the backsides of very real terrorists, had neglected to read the finest literature, a failing that even his Korean lawyer would reproach him for (our photo, guaranteed unretouched).

At Pyongyang City Rockers, on the other hand, we do read the finest literature, and we have invited it on to chat pleasantly about issue 3 of Réveil Forcé, wrapped in a hardcore punk soundtrack and racket of the finest kind.

That's tonight at 8 p.m. on Campus Lille 106.6 FM and on DAB+ (7C) around Arras-Lens-Lille, or on the web absolutely everywhere thanks to http://www.campuslille.com!

Postscript: And just like that, Pierrock is back on Pyongyang City Rockers to talk about his zine Réveil Forcé #3, devoted entirely to his trip to New York last June… This being radio, you don't get the (many) photos scattered through the zine, and we mostly talk about things that aren't even written in it, so there's no way around it: you'll have to grab a copy! The gentleman's playlist was nothing but noisy noise with a smoked-out sound: Contrast Attitude – The Dark – State Manufactured Terror; we also managed to squeeze in a few gig announcements (Péniche, Hero Dishonest, Battery March, Gimic), GL Henry's crasher review (Dislike), and to finish we dance to the latest Dear Deer… Thanks!

The playlist:
Péniche – Treize À La Douzaine
Hero Dishonest – Sä
Battery March – Arme Blanche
Dislike – Live
Contrast Attitude – Who Can Change The Future
The Dark – No One To Grieve
State Manufactured Terror – No Compromise For Genocidal Ethnostates
Gimic – Irrational Demographic
Dear Deer – Science-fiction

#3 #BatteryMarch #ContrastAttitude #DearDeer #Dislike #Gimic #HeroDishonest #Péniche #RéveilForcé #StateManufacturedTerror #TheDarkUs_

"tiny rabbit is obsessed with his giant girlfriend that's 4x his size" core



The Agile Manifesto: Rearranging Deck Chairs While Five Dragons Burn Everything Down

Why the ‘Sound’ Principles Miss the Dragons That Actually Kill Software Projects

The Agile Manifesto isn’t wrong, per se—it’s addressing the wrong problems entirely. And that makes it tragically inadequate.

For over two decades, ‘progressive’ software teams have been meticulously implementing sprints, standups, and retrospectives whilst the real dragons have been systematically destroying their organisations from within. The manifesto’s principles aren’t incorrect; they’re just rearranging deck chairs on the Titanic whilst it sinks around them.

The four values and twelve principles address surface symptoms of dysfunction whilst completely ignoring the deep systemic diseases that kill software projects. It’s treating a patient’s cough whilst missing the lung cancer—technically sound advice that’s spectacularly missing the point.

The Real Dragons: What Actually Destroys Software Teams

Whilst we’ve been optimising sprint ceremonies and customer feedback loops, five ancient dragons have been spectacularly burning down software development and tech business effectiveness:

Dragon #1: Human Motivation Death Spiral
Dragon #2: Dysfunctional Relationships That Poison Everything
Dragon #3: Shared Delusions and Toxic Assumptions
Dragon #4: The Management Conundrum—Questioning the Entire Edifice
Dragon #5: Opinioneering—The Ethics of Belief Violated

These aren’t process problems or communication hiccups. They’re existential threats that turn the most well-intentioned agile practices into elaborate theatre whilst real work grinds to a halt. And the manifesto? It tiptoes around these dragons like they don’t exist.

Dragon #1: The Motivation Apocalypse

‘Individuals and interactions over processes and tools’ sounds inspiring until you realise that your individuals are fundamentally unmotivated to do good work. The manifesto assumes that people care—but what happens when they don’t?

The real productivity killer isn’t bad processes; it’s developers who have mentally checked out because:

  • They’re working on problems they find meaningless
  • Their contributions are invisible or undervalued
  • They have no autonomy over how they solve problems
  • The work provides no sense of mastery or purpose
  • They’re trapped in roles that don’t match their strengths

You can have the most collaborative, customer-focused, change-responsive team in the world, but if your developers are quietly doing the minimum to avoid getting fired, your velocity will crater regardless of your methodology.

The manifesto talks about valuing individuals but offers zero framework for understanding what actually motivates people to do their best work. It’s like having a sports philosophy that emphasises teamwork whilst ignoring whether the players actually want to win the game. How do you optimise ‘individuals and interactions’ when your people have checked out?

Dragon #2: Relationship Toxicity That Spreads Like Cancer

‘Customer collaboration over contract negotiation’ assumes that collaboration is even possible—but what happens when your team relationships are fundamentally dysfunctional?

The real collaboration killers that the manifesto ignores entirely:

  • Trust deficits: When team members assume bad faith in every interaction
  • Ego warfare: When technical discussions become personal attacks on competence
  • Passive aggression: When surface civility masks deep resentment and sabotage
  • Fear: When people are afraid to admit mistakes or ask questions
  • Status games: When helping others succeed feels like personal failure

You can hold all the retrospectives you want, but if your team dynamics are toxic, every agile practice becomes a new battlefield. Sprint planning turns into blame assignment. Code reviews become character assassination. Customer feedback becomes ammunition for internal warfare.

The manifesto’s collaboration principles are useless when the fundamental relationships are broken. It’s like having marriage counselling techniques for couples who actively hate each other—technically correct advice that misses the deeper poison. How do you collaborate when trust has been destroyed? What good are retrospectives when people are actively sabotaging each other?

Dragon #3: Shared Delusions That Doom Everything

‘Working software over comprehensive documentation’ sounds pragmatic until you realise your team is operating under completely different assumptions about what ‘working’ means, what the software does, and how success is measured. But what happens when your team shares fundamental delusions about reality?

The productivity apocalypse happens when teams share fundamental delusions:

  • Reality distortion: Believing their product is simpler/better/faster than it actually is
  • Capability myths: Assuming they can deliver impossible timelines with current resources
  • Quality blindness: Thinking ‘works on my machine’ equals production-ready
  • User fiction: Building for imaginary users with imaginary needs
  • Technical debt denial: Pretending that cutting corners won’t compound into disaster

These aren’t communication problems that better customer collaboration can solve—they’re shared cognitive failures that make all collaboration worse. When your entire team believes something that’s factually wrong, more interaction just spreads the delusion faster.

The manifesto assumes that teams accurately assess their situation and respond appropriately. But when their shared mental models are fundamentally broken? All the adaptive planning in the world won’t help if you’re adapting based on fiction.

Dragon #4: The Management Conundrum—Why the Entire Edifice Is Suspect

‘Responding to change over following a plan’ sounds flexible, but let’s ask the deeper question: Why do we have management at all?

The manifesto takes management as a given and tries to optimise around it. But what if the entire concept of management—people whose job is to direct other people’s work without doing the work themselves—is a fundamental problem?

Consider what management actually does in most software organisations:

  • Creates artificial hierarchies that slow down decision-making
  • Adds communication layers that distort information as it flows up and down
  • Optimises for command and control rather than effectiveness
  • Makes decisions based on PowerPoint and opinion rather than evidence
  • Treats humans like interchangeable resources to be allocated and reallocated

The devastating realisation is that management in software development is pure overhead that actively impedes the work. Managers who:

  • Haven’t written code in years (or ever) making technical decisions
  • Set timelines based on business commitments rather than reality
  • Reorganise teams mid-project because a consultant recommended ‘matrix management’ or some such
  • Measure productivity by story points rather than needs attended to (or met)
  • Translate clear customer needs into incomprehensible requirements documents

What value does this actually add? Why do we have people who don’t understand the work making decisions about the work? What if every management layer is just expensive interference?

The right number of managers for software teams is zero. The entire edifice of management—the org charts, the performance reviews, the resource allocation meetings—is elaborate theatre that gets in the way of people solving problems.

Productive software teams operate more like research labs or craftsman guilds: self-organising groups of experts who coordinate directly with each other and with the people who use their work. No sprint masters, no product owners, no engineering managers—just competent people working together to solve problems.

The manifesto’s principles assume management exists and try to make it less harmful. But they never question whether it has any value at all.

Dragon #5: Opinioneering—The Ethics of Belief Violated

Here’s the dragon that the manifesto not only ignores but actually enables: the epidemic of strong opinions held without sufficient evidence.

William Kingdon Clifford wrote in 1877 that

‘it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence’
(Clifford, 1877).

In software development, we’ve created an entire culture that violates this ethical principle daily through systematic opinioneering:

Technical Opinioneering: Teams adopting microservices because they’re trendy, not because they solve actual problems. Choosing React over Vue because it ‘feels’ better. Implementing event sourcing because it sounds sophisticated. Strong architectural opinions based on blog posts rather than deep experience with the trade-offs.

Process Opinioneering: Cargo cult agile practices copied from other companies without understanding why they worked there. Daily standups that serve no purpose except ‘that’s what agile teams do.’ Retrospectives that generate the same insights every sprint because the team has strong opinions about process improvement but no evidence about what actually works.

Business Opinioneering: Product decisions based on what the CEO likes rather than what users require. Feature priorities set by whoever argues most passionately rather than data about user behaviour. Strategic technology choices based on industry buzz rather than careful analysis of alternatives.

Cultural Opinioneering: Beliefs about remote work, hiring practices, team structure, and development methodologies based on what sounds right rather than careful observation of results.

The manifesto makes this worse by promoting ‘individuals and interactions over processes and tools’ without any framework for distinguishing between evidence-based insights and opinion-based groupthink. It encourages teams to trust their collective judgement without asking whether that judgement is grounded in sufficient evidence. But what happens when the collective judgement is confidently wrong? How do you distinguish expertise from persuasive ignorance?

When opinioneering dominates, you get teams that are very confident about practices that don’t work, technologies that aren’t suitable, and processes that waste enormous amounts of time. Everyone feels like they’re making thoughtful decisions, but they’re sharing unfounded beliefs dressed up as expertise.

The Deeper Problem: Dysfunctional Shared Assumptions and Beliefs

The five dragons aren’t just symptoms—they’re manifestations of something deeper. Software development organisations operate under shared assumptions and beliefs that make effectiveness impossible, and the Agile Manifesto doesn’t even acknowledge this fundamental layer exists.

My work in Quintessence provides the missing framework for understanding why agile practices fail so consistently. The core insight is that organisational effectiveness is fundamentally a function of collective mindset:

Organisational effectiveness = f(collective mindset)

I demonstrate that every organisation operates within a “memeplex”—a set of interlocking assumptions and beliefs about work, people, and how organisations function. These beliefs reinforce each other so strongly that changing one belief causes the others to tighten their grip to preserve the whole memeplex.

This explains why agile transformations consistently fail. Teams implement new ceremonies whilst maintaining the underlying assumptions that created their problems in the first place. They adopt standups and retrospectives whilst still believing people are motivated, relationships are authentic, management adds value, and software is always the solution.

Consider the dysfunctional assumptions that pervade conventional software development:

About People: Most organisations and their management operate under “Theory X” assumptions—people are naturally lazy, require external motivation, need oversight to be productive, and will shirk responsibility without means to enforce accountability. These beliefs create the very motivation problems they claim to address.

About Relationships: Conventional thinking treats relationships as transactional. Competition drives performance. Hierarchy creates order. Control prevents chaos. Personal connections are “unprofessional.” These assumptions poison the collaboration that agile practices supposedly enable.

About Work: Software is the solution to every problem. Activity indicates value. Utilisation (e.g. of workers) drives productivity. Efficiency trumps effectiveness. Busyness proves contribution. These beliefs create the delusions that make teams confidently ineffective.

About Management: Complex work requires coordination. Coordination requires hierarchy. Hierarchy requires managers. Managers add value through oversight and direction. These assumptions create the parasitic layers that impede the very work they claim to optimise.

About Knowledge: Strong opinions indicate expertise. Confidence signals competence. Popular practices are best practices. Best practices are desirable. Industry trends predict future success. These beliefs create the opinioneering that replaces evidence with folklore.

Quintessence (Marshall, 2021) shows how “quintessential organisations” operate under completely different assumptions:

  • People find joy in meaningful work and naturally collaborate when conditions support it
  • Relationships based on mutual care and shared purpose are the foundation of effectiveness
  • Work is play when aligned with purpose and human flourishing
  • Management is unnecessary parasitism—people doing the work make the decisions about the work
  • Beliefs must be proportioned to evidence and grounded in serving real human needs

The Agile Manifesto can’t solve problems created by fundamental belief systems because it doesn’t even acknowledge these belief systems exist. It treats symptoms whilst leaving the disease untouched. Teams optimise ceremonies whilst operating under assumptions that guarantee continued dysfunction.

This is why the Quintessence approach differs so radically from ‘Agile’ approaches. Instead of implementing new practices, quintessential organisations examine their collective assumptions and beliefs. Instead of optimising processes, they transform their collective mindset. Instead of rearranging deck chairs, they address the fundamental reasons the ship is sinking.

The Manifesto’s Tragic Blindness

Here’s what makes the Agile Manifesto so inadequate: it assumes the Five Dragons don’t exist. It offers principles for teams that are motivated, functional, reality-based, self-managing, and evidence-driven—but most software teams are none of these things.

The manifesto treats symptoms whilst ignoring diseases:

  • It optimises collaboration without addressing what makes collaboration impossible
  • It values individuals without confronting what demotivates them
  • It promotes adaptation without recognising what prevents teams from seeing their shared assumptions and beliefs clearly
  • It assumes management adds value rather than questioning whether management has any value at all
  • It encourages collective decision-making without any framework for leveraging evidence-based beliefs

This isn’t a failure of execution—it’s a failure of diagnosis. The manifesto identified the wrong problems and thus prescribed the wrong solutions.

Tom Gilb’s Devastating Assessment: The Manifesto Is Fundamentally Fuzzy

Software engineering pioneer Tom Gilb delivers the most damning critique of the Agile Manifesto: its principles are

‘so fuzzy that I am sure no two people, and no two manifesto signers, understand any one of them identically’

(Gilb, 2005).

This fuzziness isn’t accidental—it’s structural. The manifesto was created by ‘far too many “coders at heart” who negotiated the Manifesto’ without

‘understanding of the notion of delivering measurable and useful stakeholder value’

(Gilb, 2005).

The result is a manifesto that sounds profound but provides no actionable guidance for success in product development.

Gilb’s critique exposes the manifesto’s fundamental flaw: it optimises for developer comfort rather than stakeholder value. The principles read like a programmer’s wish list—less documentation, more flexibility, fewer constraints—rather than a framework for delivering measurable results to people who actually need the software.

This explains why teams can religiously follow agile practices whilst consistently failing to deliver against folks’ needs. The manifesto’s principles are so vague that any team can claim to be following them whilst doing whatever they want. ‘Working software over comprehensive documentation’ means anything you want it to mean. ‘Responding to change over following a plan’ provides zero guidance on how to respond or what changes matter. (Cf. Quantification)

How do you measure success when the principles themselves are unmeasurable? What happens when everyone can be ‘agile’ whilst accomplishing nothing? How do you argue against a methodology that can’t be proven wrong?

The manifesto’s fuzziness enables the very dragons it claims to solve. Opinioneering thrives when principles are too vague to be proven wrong. Management parasitism flourishes when success metrics are unquantified. Shared delusions multiply when ‘working software’ has no operational definition.

Gilb’s assessment reveals why the manifesto has persisted despite its irrelevance: it’s comfortable nonsense that threatens no one and demands nothing specific. Teams can feel enlightened whilst accomplishing nothing meaningful for stakeholders.

Stakeholder Value vs. All the Needs of All the Folks That Matter™

Gilb’s critique centres on ‘delivering measurable and useful stakeholder value’—but this phrase itself illuminates a deeper problem with how we think about software development success. ‘Stakeholder value’ sounds corporate and abstract, like something you’d find in a business school textbook or an MBA course (MBA – maybe best avoided – Mintzberg).

What we’re really talking about is simpler, less corporate and more human: serving all the needs of all the Folks That Matter™.

The Folks That Matter aren’t abstract ‘stakeholders’—they’re real people trying to get real things done:

  • The nurse trying to access patient records during a medical emergency
  • The small business owner trying to process payroll before Friday
  • The student trying to submit an assignment before the deadline
  • The elderly person trying to video call their grandchildren
  • The developer trying to understand why the build is broken again

When software fails these people, it doesn’t matter how perfectly agile your process was. When the nurse can’t access records, your retrospectives are irrelevant. When the payroll system crashes, your customer collaboration techniques are meaningless. When the build and smoke test takes 30+ minutes, your adaptive planning is useless.

The Agile Manifesto’s developer-centric worldview treats these people as distant abstractions—’users’ and ‘customers’ and ‘stakeholders.’ But they’re not abstractions. They’re the Folks That Matter™, and their needs are the only reason software development exists.

The manifesto’s principles consistently prioritise developer preferences over the requirements of the Folks That Matter™. ‘Working software over comprehensive documentation’ sounds reasonable until the Folks That Matter™ require understanding of how to use the software. ‘Individuals and interactions over processes and tools’ sounds collaborative until the Folks That Matter™ require consistent, reliable results from those interactions.

This isn’t about being anti-developer—it’s about recognising that serving the Folks That Matter™ is the entire point. The manifesto has it backwards: instead of asking ‘How do we make development more comfortable for developers?’ we might ask ‘How do we reliably serve all the requirements of all the Folks That Matter™?’ That question changes everything. It makes motivation obvious—you’re solving real problems for real people. It makes relationship health essential—toxic teams can’t serve others effectively. It makes reality contact mandatory—delusions about quality hurt real people. It makes evidence-based decisions critical—opinions don’t serve the Folks That Matter™; results do.

Most importantly, it makes management’s value proposition clear: Do you help us serve the Folks That Matter™ better, or do you get in the way? If the answer is ‘get in the way,’ then management becomes obviously a dysfunction.

What Actually Addresses the Dragons

If we want to improve software development effectiveness, we address the real dragons:

Address Motivation: Create work that people actually care about. Give developers autonomy, mastery, and purpose. Match people to problems they find meaningful. Make contributions visible and valued.

Heal Toxic Relationships: Build psychological safety where people can be vulnerable about mistakes. Address ego and status games directly. Create systems where helping others succeed feels like personal victory.

Resolve Shared Delusions: Implement feedback loops that invite contact with reality. Measure what actually matters. Create cultures where surfacing uncomfortable truths is rewarded rather than punished.

Transform Management Entirely: Experiment with self-organising teams. Distribute decision-making authority to where expertise actually lives. Eliminate layers between problems and problem-solvers. Measure needs met, not management theatre.

Counter Evidence-Free Beliefs: Institute a culture where strong opinions require strong evidence. Enable and encourage teams to articulate the assumptions behind their practices. Reward changing your mind based on new data. Excise confident ignorance.

These aren’t process improvements or methodology tweaks—they’re organisational transformation efforts that require fundamentally different approaches than the manifesto suggests.

Beyond Agile: Addressing the Real Problems

The future of software development effectiveness isn’t in better sprint planning or more customer feedback. It’s in organisational structures that:

  • Align individual motivation with real needs
  • Create relationships based on trust
  • Enable contact with reality at every level
  • Eliminate management as dysfunctional
  • Ground all beliefs in sufficient evidence

These are the 10x improvements hiding in plain sight—not in our next retrospective, but in our next conversation about why people don’t care about their work. Not in our customer collaboration techniques, but in questioning whether we have managers at all. Not in our planning processes, but in demanding evidence for every strong opinion.

Conclusion: The Problems We Were Addressing All Along

The Agile Manifesto succeeded in solving the surface developer bugbears of 2001: heavyweight processes and excessive documentation. But it completely missed the deeper organisational and human issues that determine whether software development succeeds or fails.

The manifesto’s principles aren’t wrong—they’re just irrelevant to the real challenges. Whilst we’ve been perfecting our agile practices, the dragons of motivation, relationships, shared delusions, management being dysfunctional, and opinioneering have been systematically destroying software development from within.

Is it time to stop optimising team ceremonies and start addressing the real problems? That means creating organisations where people are motivated to do great work, relationships enable rather than sabotage collaboration, shared assumptions are grounded in reality, traditional management no longer exists, and beliefs are proportioned to evidence.

But ask yourself: Does your organisation address any of these fundamental issues? Are you optimising ceremonies whilst your dragons run wild? What would happen if you stopped rearranging deck chairs and started questioning why people don’t care about their work?

Because no amount of process optimisation will save a team where people don’t care, can’t trust each other, believe comfortable lies, are managed by people who add negative value, and make decisions based on opinions rather than evidence.

The dragons are real, and they’re winning. Are we finally ready to address them?

Further Reading

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., … & Thomas, D. (2001). Manifesto for Agile Software Development. Retrieved from https://agilemanifesto.org/

Clifford, W. K. (1877). The ethics of belief. Contemporary Review, 29, 289-309.

Gilb, T. (2005). Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage. Butterworth-Heinemann.

Gilb, T. (2017). How well does the Agile Manifesto align with principles that lead to success in product development? Retrieved from https://www.gilb.com/blog/how-well-does-the-agile-manifesto-align-with-principles-that-lead-to-success-in-product-development

Marshall, R.W. (2021). Quintessence: An Acme for Software Development Organisations. [online] Falling Blossoms (LeanPub). Available at: https://leanpub.com/quintessence/ [Accessed 15 Jun 2022].

#1 #2 #3 #4 #5

From diagnosis to duty: health workers confront their own role in inequity

A thirteen-year-old girl in Nigeria, bitten by a snake, arrived at a hospital with her frantic family. The hospital demanded payment before administering the antivenom. The family could not afford it. The girl died.

This was one of the stark stories shared by health professionals on September 10, 2025, during “Exploration Day,” the third day of The Geneva Learning Foundation’s inaugural peer learning exercise on health equity. The previous day had been about diagnosing the external systems that create such tragedies. But today, the focus shifted.

“Yesterday, we looked at the problem,” said TGLF facilitator Dr María Fernanda Monzón. “Today, we look in the mirror. We move from analyzing the situation to analyzing ourselves, our own role, our own power, and our own assumptions.”

The practitioner’s role

The day’s intensive, small-group workshops challenged participants to move beyond naming a problem to questioning their own connection to it. Groups brought their findings back to the plenary, where the work of exploration continued.

Oyelaja Olayide, a medical laboratory scientist from Nigeria, presented her group’s analysis of a child’s death following a lab misdiagnosis. The group’s root cause analysis pointed to a systemic issue: the lack of a quality management system in the laboratory. But then the facilitator turned the question back to her. “What was your role in this?”

The question hung in the air, shifting the focus from an abstract system to individual responsibility. This pivot is central to the learning process, and the cohort’s diversity is a core element of its design. The majority of participants are frontline health workers—nurses, midwives, doctors, and community health promoters. They work side-by-side as peers with national-level staff and international partners, with government employees making up over 40% of the group. This mix intentionally breaks down traditional hierarchies, creating a space where a policy-maker can learn directly from the lived experience of a clinician in a remote village.

Learn more about the Certificate peer learning programme for equity in research and practice https://www.learning.foundation/bias

After a moment of reflection, Olayide acknowledged her role as a professional with the expertise to see the gap. “My role is to be an advocate,” she concluded, recognizing her duty to push for the implementation of quality control systems that could prevent future tragedies.

From reflection to a plan for action

This deep self-reflection is the foundation for the next stage of the process: creating a viable action plan. For the remainder of the day, participants worked on the third part of their course project, which is due by the end of the week.

The programme’s methodology insists that a good plan is not made for a community, but with a community. Participants were guided to develop action steps that involve listening to the people most affected and ensuring they help lead the change. This requires practitioners to think honestly about their position and power and how they can share it to empower others.

The day’s exploration pushed participants beyond easy answers. It asked them to confront their own biases, acknowledge their power, and accept their professional duty not just to treat patients, but to help fix the broken systems that make them sick. By turning the analytical lens inward, they began to forge the tools they need to build a more equitable future.

About the Certificate peer learning programme for equity in research and practice

The Geneva Learning Foundation is an organization that helps health workers from around the world learn together as equals. It offers the Certificate peer learning programme for equity in research and practice, where health professionals work with each other to make health care more fair for everyone, both in how care is given and in how health is studied. The first course in this program is called EQUITY-001 Equity matters, which introduces a method called HEART. This method helps you turn your experience into a real plan for change. HEART stands for Human Equity, Action, Reflection, and Transformation. This means you will learn to see unfairness in health (Human Equity), create a practical plan to do something about it (Action), think carefully about the problem to find its root cause (Reflection), and make a lasting, positive change for your community (Transformation).

Image: The Geneva Learning Foundation Collection © 2025

#1 #2 #3 #4 #5 #CertificatePeerLearningProgrammeForEquityInResearchAndPractice #experientialLearning #healthEquity #HEART #inequity #peerLearning #TheGenevaLearningFoundation

The practitioner as catalyst: How a global learning community is turning frontline experience into action on health inequity

“In this phase of my life, I want to work directly with the communities to see what I can do,” said Dr. Sambo Godwin Ishaku, a public health leader from Nigeria with over two decades of experience. His words opened the second day of The Geneva Learning Foundation’s first-ever peer learning exercise on health equity. They also spoke to the very origin of the event itself.

The Geneva Learning Foundation’s Certificate peer learning programme for equity in research and practice was created because thousands of health workers like Dr. Ishaku joined a global dialogue about equity and demanded a new kind of learning—one that moved beyond theory to provide practical tools for action.

This inaugural session on 9 September 2025, called “Discovery Day,” was a direct answer to that call. It was not a lecture, but a three-hour, high-intensity workshop where the participants’ own experiences of inequity became the curriculum.

The goal for the day was one step in a carefully designed 16-day process: to help practitioners see a familiar problem in a new way, setting the stage for them to build a viable action plan they can use in their communities.

The anatomy of unfairness

The session began with practitioners sharing true stories of systemic failure. These accounts gave a human pulse to the clinical definition of health inequity: the avoidable and unjust conditions that make it harder for some people to be healthy.

To demonstrate how to move from story to analysis, the entire cohort engaged in a collective diagnosis. They focused on a first case presented by Dr. Elizabeth Oduwole, a retired physician, about a 65-year-old man unable to afford his diabetes medication on a meager pension. Together, in a live plenary, they used a simple analytical tool to excavate the root causes of this single injustice.

The tool, known as the “Five Whys,” is less about power and more about simplicity. Its strength lies in its accessibility, providing a common language for a cohort of remarkable diversity. In this programme, community health workers, doctors, nurses, midwives, and others who work for health on the front lines of service delivery make up the majority of participants. They work side-by-side as peers with national-level staff and international partners. Government staff comprise over 40% of the group.

The group’s collective intelligence peeled back the layers of Dr. Oduwole’s story. The man’s inability to afford medicine was not just about poverty (Why #1), but about a lack of government policy for the elderly (Why #2). This, in turn, was linked to a lack of advocacy (Why #3), which stemmed from biased social norms that devalue the lives of older adults (Why #4). The root cause they uncovered was a deep-seated cultural belief, passed down through generations, that this was simply the natural order of things (Why #5). In minutes, the problem had transformed from a financial issue into a profound cultural challenge.

A crucible for discovery

With this shared experience, the practitioners were plunged into a rapid series of timed, small-group workshops. In these intense breakout sessions, they applied the same methodology to situations each group identified.

The stories that emerged were stark. One group analyzed the experience of a participant from Nigeria whose father died after being denied oxygen at a hospital because the only available tank was being reserved for a doctor’s mother. Their analysis traced this act back to a root cause of systemic decay and a breakdown in the ethics of the health profession. Another group tackled the insidious spread of health misinformation preventing rural girls in a conflict-afflicted area from receiving the HPV vaccine, identifying the root cause as an inadequate national health communication strategy.

A learning community was born in these workshops. They became a crucible where practitioners, often isolated in their daily work, connect with peers who understand their struggles. By unpacking a real-world problem together, they practice the skills needed for their final course project: a practical action plan due at the end of the week, which they will then have peer-reviewed and revised.

The process is designed to generate unexpected insights. Day 2, “Discovery,” is followed by Day 3, “Exploration,” both dedicated to this intensive peer analysis. By the end of the journey, each participant will have an action plan to tackle a local challenge, one that is often radically different from what they might have first envisioned, because it targets a newly discovered root cause.

The session ended, as it began, with the voices of health workers. The chat filled with a sense of energy and purpose. “We are all eager to learn, to know more, and to make an equitable Africa,” wrote Vivian Abara, a pre-hospital emergency services responder. “We’re really, really ready to go the whole nine yards and do everything, help ourselves, hold each other’s hand and move.”

About The Geneva Learning Foundation

The Geneva Learning Foundation is an organization that helps health workers from around the world learn together as equals. It offers the Certificate peer learning programme for equity in research and practice, where health professionals work with each other to make health care more fair for everyone, both in how care is given and in how health is studied. The first course in this programme is called EQUITY-001 Equity matters, which introduces a method called HEART. This method helps you turn your experience into a real plan for change. HEART stands for Human Equity, Action, Reflection, and Transformation. This means you will learn to see inequity in health (Human Equity), create a practical plan to do something about it (Action), think carefully about the problem to find its root cause (Reflection), and make a lasting, positive change for your community (Transformation).

Image: The Geneva Learning Foundation Collection © 2025

#1 #2 #3 #4 #5 #CertificatePeerLearningProgrammeForEquityInResearchAndPractice #experientialLearning #healthEquity #HEART #inequity #peerLearning #TheGenevaLearningFoundation

FAUN – new video released
https://eternal-terror.com/?p=72583

Photo: Iseris Art

FAUN release new album HEX with focus track “Belladonna” 

Album out now via Pagan Folk Records (Distribution: Believe)

“Belladonna” is the final single and will be released simultaneously with FAUN’s 12th studio album “HEX”, a journey into witchcraft.

The band also says about the song: “Belladonna is a poisonous nightshade and witch’s plant. “Belladonna” comes […]

#3

This Friday linkfest can sing all the lyrics to “It’s the End of the World” (UPDATEDx3)

This week: the self-handicap principle, COPE vs. retractions, terrible activism and terrible science in one convenient package, separation theorems, one album wonders (?), and more.

Stephen Heard and Bethann Garramon Merkle’s new book on teaching and mentoring scientific writing now has a cover and blurbs.

New Committee on Publication Ethics (COPE) guidelines for journal retraction processes just dropped. One change from the last update in 2019 is that the guidelines are now clearer that journals don’t necessarily need to wait for the results of institutional investigations of alleged misconduct in order to retract papers. I’ve always found this puzzling, because at the two journals for which I’ve been an editor (Oikos, and now Am Nat), that’s always been the practice, so it’s always seemed weird to me that that practice isn’t universal. It’s entirely up to the journal whether to publish a paper, not the institution employing the author. So why wouldn’t it be entirely up to the journal whether to un-publish it (that is, retract it)? Especially because journals don’t need a misconduct finding, or have to make one themselves, in order to retract. Even in the Pruitt case, where the fabrication was dead obvious as soon as you looked for it, if you look closely at the retraction notices from Am Nat and other EEB journals, they’re all careful not to say that the data were fabricated, much less that Pruitt did the fabricating. Rather, the retraction notices simply describe “unexplained data anomalies” (or similar phrasing), and leave it to readers to draw the obvious inference as to the origin of those anomalies (Jonathan Pruitt’s copy-pasting). The notices were phrased that way not because the journals weren’t sure if Pruitt committed misconduct (believe me, we were sure!), and mostly not even because the journals didn’t want Pruitt to sue them for defamation (although that may have been a consideration at some journals). Rather, the notices were phrased that way simply because the notices only said what they needed to say to explain and justify the retraction. The data could no longer be relied upon; therefore the paper no longer met the standard required for publication and was retracted.

“For the aspiring self-handicapper, the best causes are lost causes.” (UPDATE: link fixed) (UPDATE 2: gah, ok NOW it’s fixed) Adam Mastroianni’s astute psychological diagnosis of how a lot of educated people these days feel about a lot of things (such as, oh, this example that I definitely chose at random). Closely related to Meghan’s old post on whether ecology instructors should try to leave students feeling empowered to tackle climate change. Also related: against purity. If you only read one of the links this week, make it Adam Mastroianni’s post.

A commonly cited “statistic” on massive water use due to AI, from a group at Berkeley, is based almost entirely on…blaming AI firms that use hydropower for natural evaporation of the water behind hydroelectric dams. Yes, really. Because if AI didn’t use hydropower then the water…wouldn’t…evaporate? You can yadda yadda all you want about the undoubted complexities of attribution of responsibility here, but come on. This is clearly a garbage estimate, possibly because the assumptions were deliberately chosen to get the highest estimated water use possible. The thing is, I think it’s garbage as political activism too! Just the worst of both worlds, eating away at the credibility of non-activist science in exchange for zero political upside and non-zero political downside. Though of course I’m obviously not a political activist, so what do I know. And yes, before anyone points it out, I’m aware that this particular report is hardly unique; activists on every side of every political issue are constantly releasing “expert” reports filled with similarly dubious “statistics” (as well as pursuing their causes in ways that don’t involve any statistics, dubious or otherwise). So yes, you are probably right to wonder why I would get so annoyed with this one particular example that I just happened to stumble across. What can I say, I’m human, for “grouchy old guy” values of “human”. My level of annoyance with things is not at all rationally calibrated to either their objective importance, or my ability to affect them.

Here are a couple of sobering quotes from someone who talks to a lot of higher education administrators, elected officeholders, and government officials about higher education policy in Canada:

Literally the worst thing universities can say right now is “universities are crucial, give us more money”. It’s an utterly tone-deaf approach, even if you give it an “elbows up” spin. The sector has been saying it for years and it clearly hasn’t worked, so continuing with this approach is the literal definition of insanity.

You can argue all you want about how basic research is more cost-efficient in terms of driving long-term discovery, but i) the public likes some short-term wins mixed in with the long-term ones and b) nobody outside universities is buying that one story about NSERC funding Geoffrey Hinton’s AI research 30 year ago as a business case for science. Like, nobody. Get over it.

I would be very curious to hear any good pushback to this. By which I mean, pushback grounded in data and first-hand experience related to science funding and policymaking, not just “pushback” in the sense of “I disagree.” By the way, I say that as someone whose own arguments for the value of basic research definitely include the sorts of arguments that, according to the linked quotes, have long been falling on deaf ears in Ottawa and provincial capitals. I would very much like the linked quotes to be wrong, but I don’t have any good reason to think they are. So if you have a good reason, please do share it! (And conversely, if you have good reason to think they’re right, please do share that!)

This news article projects that hundreds of US colleges will close in the next decade due to the combination of a falling college-age population and declining international enrollments.

This feels correct to me, but what do I know?

What is the most currently underrated ’80s song? Not sure why the author needed to compile data on this. Clearly the answer is “Whatever ’80s song that I, Jeremy Fox, personally like best, that people are no longer listening to.” 🙂 Scroll down to learn the identity of that song.

You’ve heard of one-hit wonders, but what about one-album wonders? Turns out that it’s hard to define what a “one-album wonder” even is.

Separation theorems in finance and their relevance to government policy. I know that sounds incredibly obscure and boring, but it’s not, honest.

I’m always interested to read disagreements among people who usually agree with one another, and with me. So here’s Andrew Gelman vs. Ben Recht and Dan Davies on reporting and interpreting polling uncertainty. I think Andrew Gelman wins this one hands down. (UPDATE #3: Dan Davies responds to Gelman. I find it totally unconvincing, which is rarely how I react to Dan Davies posts. It unfairly accuses people who do a good job of visually presenting, and writing about, polling data and its uncertainties of doing the equivalent of intentionally burying important technicalities in footnotes that nobody will ever read. And it proves way too much. Davies’ argument implies that nobody should ever present any quantitative data or analysis of anything the general public cares about to the general public, because the data will inevitably be too imperfect and too many people will inevitably misunderstand it. Davies claims that we just have to bite the bullet and admit that public opinion polling response rates have dropped so low that opinion polling can no longer be done, or analyzed, responsibly. But why is that the bullet we have to bite? I’d say the bullet you should bite is that many members of the general public are going to want (and be provided with, and discover for themselves) all sorts of quantitative information that many of them will often badly misinterpret. You should bite that bullet because the alternative to biting it is totalitarianism. /end update #3)

A single mutation made horses rideable, apparently. The underlying study sounds super-interesting, though note that I have no ability to evaluate it.

Speaking of rare mutations, here’s a bright orange nurse shark.

We found him: the only person who can sing all the lyrics to REM’s “It’s the End of the World”. 🙂

Perfect pop song of the week:

Man, I’ve got an “I’m old and grouchy and haven’t encountered anything new that I like since the 1980s” bit going, don’t I? Oh well, too late to do anything but lean into it:

https://www.youtube.com/watch?v=iPUmE-tne5U

Coming up

Sept. 8: Highlights from recent comment threads

Sept. 10: Who are the top science influencers on YouTube, TikTok, and podcasts? And are any of them scientists? (In which I continue to lean into the “old guy” schtick…)

#3

Owls, blurbs, and a cover – oh my! “Teaching and Mentoring Writers in the Sciences” is getting close

We (that’s Bethann Garramon Merkle and I) are getting very excited about our new book, Teaching and Mentoring Writers in the Sciences: An Evidence-Based Approach. Over the last few months, we’ve be…

Scientist Sees Squirrel

The Real Cost of Vibe Coding: When to Stop Futzing (Part 3 of 3)

In part one of the blog series, I introduced Glyph Lefkowitz’s “Futzing Fraction” and discussed how vibe coding is likely inefficient across all skill levels of development tasks. In part two, I extended the formula to account for real-world factors like developer skill, task complexity, and error costs. The results make vibe coding look even less appealing than with the original formula.

The Story So Far: Both the original and extended futzing fractions consistently show vibe coding as inefficient (FF > 1) versus traditional development. The extended formula reveals a harsher reality: citizen developers can waste up to 6x their time, competent developers can still experience a loss of efficiency ranging from 80% to 240%, and even experts struggle to get AI to produce acceptable code without coaxing, especially on complex or high-stakes tasks. The formula casts a negative light on the “AI replaces developers” narrative by showing that skill, complexity, and error costs matter enormously.

Quick Formula Reference:

Glyph’s Original Futzing Fraction:

My Extended Futzing Fraction:

Where:

  • I = Inference time (waiting for AI)
  • W = Writing prompts
  • C = Checking/fixing AI output
  • H = Human baseline (time to code manually)
  • P = Probability AI gets it right
  • S = Skill factor (your ability to evaluate/fix AI output)
  • L = Learning factor (overhead of figuring out AI workflows)
  • X = Complexity multiplier
  • E = Error cost multiplier

Rule of thumb: FF > 1 means you’re wasting time. FF < 1 means AI is actually helping.
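
Since the two formulas above appear only as labels here (they are shown as images in the original post), the sketch below shows roughly how the calculation might be wired up in Python. It assumes a common reading of Glyph's fraction, FF = (I + W + C) / (P × H), and invents one plausible way the extended variables S, L, X, and E could enter; the function names and the extended form are my own illustration, not the author's exact formula.

```python
# Minimal sketch of a futzing-fraction calculator.
# ASSUMPTIONS: the original fraction is read as (I + W + C) / (P * H);
# the extended form below is a hypothetical illustration, not the author's formula.

def futzing_fraction(i, w, c, p, h):
    """Assumed original form: AI overhead per successful attempt vs. manual coding time."""
    if p <= 0 or h <= 0:
        raise ValueError("p and h must be positive")
    return (i + w + c) / (p * h)

def extended_futzing_fraction(i, w, c, p, h, s, l, x, e):
    """Hypothetical extension: complexity (x) and error cost (e) inflate the checking work,
    skill (s) deflates it, and learning overhead (l) is paid on top."""
    if p <= 0 or h <= 0 or s <= 0:
        raise ValueError("p, h and s must be positive")
    return (i + w + (c * x * e) / s + l) / (p * h)

# Example: 2 min inference, 5 min prompting, 15 min checking/fixing,
# 40% chance the AI gets it right, 20 min to just write it yourself.
ff = futzing_fraction(i=2, w=5, c=15, p=0.4, h=20)
print(f"FF = {ff:.2f} -> {'wasting time' if ff > 1 else 'AI is helping'}")  # FF = 2.75 -> wasting time
```

Even this crude version reproduces the rule of thumb: overhead that exceeds the success-weighted manual time pushes the fraction above 1.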

Now comes the crucial question: what to do with this information?

Practical Framework: How to Stop Futzing Around

After playing with the numbers and working with vibe coding on my AI assistant, here’s what the futzing fraction taught me about using AI effectively.

The “AI Replaces Developers” Story Is Mathematical Nonsense

The standard vendor narrative assumes that coding is coding, that building a to-do app and implementing OAuth have the same complexity, error tolerance, and skill requirements. The improved futzing fraction shows this is, at best, wishful thinking. Even expert developers struggle to break even on moderately complex tasks, and citizen developers are consistently burning 3-6x the time they would save by simply hiring a competent developer.

Set Futzing Budgets, Not Futzing Goals

Based on my experience, here’s what I wish I’d done from the start:

Time-box vibe sessions. Set a hard limit: “I’ll spend 30 minutes trying to get AI to solve this. If FF’ > 1 by then, I code it myself.” I wasted hours on features where I knew by attempt #3 that I should have sucked it up and written the code.

Track your actual success rates. Stop trusting vendor benchmarks and start measuring your P for different types of tasks. My success rate for UI work was maybe 15%, but for authentication flows it was closer to 5%.

Apply the formula as a decision filter. Before using Copilot or Windsurf, estimate your variables. High complexity (X > 2)? High error cost (E > 2)? Low skill for this specific task (S < 1)? Go ahead and code it yourself and save some frustration.
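
To make that filter concrete, here is a small sketch of a pre-flight check using the thresholds mentioned above (X > 2, E > 2, S < 1); the function name, return shape, and cutoff values are illustrative assumptions rather than anything prescribed by the formula.

```python
# Hypothetical "should I even try vibe coding this?" pre-flight filter.
# Thresholds follow the rough cutoffs mentioned in the text; tune them to your own data.

def should_vibe_code(complexity_x, error_cost_e, skill_s, estimated_ff=None):
    """Return (go_ahead, reason); any single red flag means write it yourself."""
    if complexity_x > 2:
        return False, "complexity multiplier too high (X > 2)"
    if error_cost_e > 2:
        return False, "error cost too high (E > 2)"
    if skill_s < 1:
        return False, "low skill factor for this specific task (S < 1)"
    if estimated_ff is not None and estimated_ff > 1:
        return False, "estimated futzing fraction already above 1"
    return True, "worth a time-boxed attempt"

go, reason = should_vibe_code(complexity_x=1.5, error_cost_e=3, skill_s=1.2)
print(go, "-", reason)  # False - error cost too high (E > 2)
```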

Team Applications: Who Should Futz and When

Junior developers: Focus on high-L, low-E tasks where learning matters more than efficiency. Use vibe coding for exploring patterns and understanding concepts, but not for shipping features. Always have a senior developer review the work, because your ability to spot errors may be lower than you think (I know mine was at times).

Senior developers: Use vibe coding selectively for prototyping and exploration where mistakes are cheap. However, when the stakes are high or the complexity is difficult to suss out, trust your skills over those of a chatbot.

Critical features: FF’ < 0.5 or write it yourself. Authentication, payment processing, data handling, anything where E > 3 probably isn’t worth the risk. AI’s tendency to hallucinate APIs and skip error handling makes it fundamentally unsuitable for high-stakes code.

Measure, Don’t Just Feel

The most significant insight from formalizing this in a simple formula is that most of us are terrible at estimating our productivity. The intermittent reinforcement of occasional big wins (when vibing saves you 2 hours) makes us forget the frequent small losses (when it wastes 20 minutes 10 times in a day).

Start tracking your C (checking time), as that’s where the hidden costs live. If you’re going to spend more than five minutes going back and forth with an AI, debugging and trying to fix errors caused by previous AI output, you probably should go ahead and write it yourself.

Track your actual P by task type; you might discover that vibe coding is great for CSS or inserting logging statements, but terrible for async logic, or helpful for boilerplate but dangerous for edge cases. Use that data to inform your futzing decisions.
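
A lightweight way to do that tracking is to log every attempt per task type and let the data, rather than memory, estimate your P and average C. The sketch below is one way it could look; the categories and field names are made up for illustration.

```python
# Tiny per-task-type futzing log: record whether each AI attempt was accepted
# and how long checking/fixing took, then report empirical P and average C.

from collections import defaultdict

attempts = defaultdict(list)  # task_type -> list of (accepted, checking_minutes)

def record(task_type, accepted, checking_minutes):
    attempts[task_type].append((accepted, checking_minutes))

def summarise():
    for task_type, rows in attempts.items():
        p = sum(1 for ok, _ in rows if ok) / len(rows)    # empirical success rate (P)
        avg_c = sum(c for _, c in rows) / len(rows)       # average checking time (C)
        print(f"{task_type:12s} P={p:.0%}  avg C={avg_c:.0f} min  (n={len(rows)})")

record("css", True, 4); record("css", True, 6); record("css", False, 12)
record("async-logic", False, 25); record("async-logic", False, 30)
summarise()
```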

The formula is a reality check for when vibe coding stops being productive and starts causing non-productive water-treading.

The Formula for Honest AI Adoption

Building a project via vibe coding taught me that the futzing fraction – both Glyph’s original and my extended version – aligns perfectly with what I experienced. The formula validates the intuitive sense that something was off, that I was treading water and wasting time for hours arguing with a Markov chain, even when I occasionally hit those satisfying moments where it generated exactly what I needed.

Looking back, there were times when the futzing fraction was under one, meaning it was the right choice.

AI coding assistants genuinely helped with specific tasks, such as generating boilerplate code, adding consistent logging patterns throughout a codebase, scaffolding new modules with standard structures, and refactoring repetitive patterns. These tasks typically had low complexity multipliers (X), minimal error costs (E), and high learning factors (L) since the code was reusable.

The sweet spot seems to be tasks where:

  • The scope is well-defined and narrow
  • Mistakes are obvious and easily fixed
  • The output serves as a starting point rather than the final destination
  • You’re working in familiar territory where your skill factor (S) is high

Suppose you are spending time writing small utility scripts, generating test data, creating configuration templates, or handling routine code transformations. In that case, you may be better off using AI, as those tasks yield futzing fractions closer to or below 1.

Some vendors sell the myth that coding is about to become as easy as writing an email (which, let’s be honest, many still struggle with). Vibe coding can indeed be genuinely helpful for specific, well-defined tasks where the cost of errors is low and the scope is well defined. But for complex features, security-critical code, or anything requiring deep system understanding, good software still requires skill, judgment, and experience that can’t be prompt-engineered away.

I’ve shifted to a more surgical approach based on these calculations. I vibe code selectively for the tasks where my measured success rates are high and the futzing fraction consistently stays below 1: boilerplate generation, adding logging, refactoring patterns, and exploring new libraries in low-stakes environments. But for anything complex or security-related, or anywhere I can’t afford to waste time debugging hallucinated APIs and phantom imports, I crack my knuckles and reach for my keyboard.

If you made it this far, thanks for sticking with me through this slog of a blog series. In other news, I recently took a road trip to North Dakota and captured the following picture of the night sky.

#3 #ai #ArtificialIntelligence #productivity #technology #vibe #vibeCoding

🎥 The NeverEnding Story (Niekończąca się opowieść) – a children's fairy tale that children shouldn't be watching



In this episode we take the cult production apart piece by piece – from its German roots, through the traumatic swamp scene, to the meta-plot about the power of imagination. We'll talk about the dragon-dog, the depressive turtle, the wolf with a mission, and about how this children's fairy tale isn't so innocent after all.
Find out what in the film is an inception, and how "the Nothing" really works.

💰 SUPPORT:
https://tipply.pl/u/retrogralnia
https://patronite.pl/RetroGralnia
https://www.youtube.com/retrogralniapl/join

The NeverEnding Story (German: Die unendliche Geschichte) is an English-language fantasy film, a German production from 1984 directed by Wolfgang Petersen, based on Michael Ende's 1979 fantasy novel The Neverending Story. On the day of its premiere it was the most expensive film production made outside the USA and the Soviet Union. The film's main musical theme is Limahl's song "Never Ending Story". A sequel, The NeverEnding Story II: The Next Chapter, followed in 1990, and a third part, The NeverEnding Story III: Escape from Fantasia, in 1994.

Cast:
– Barret Oliver – Bastian
– Noah Hathaway – Atreyu
– Tami Stronach – the Empress of Fantasia
– Alan Oppenheimer – Falkor (voice), Gmork (voice), Rockbiter (voice), narrator (voice)
– Thomas Hill – Mr. Koreander
– Gerald McRaney – Bastian's father
– Sydney Bromley – Engywook
– Patricia Hayes – Urgl
– Deep Roy – Teeny Weeny
– Frank Lenart – Teeny Weeny (voice)
– Tilo Prückner – Night Hob
– Heidi Brühl – the Oracle (voice)
– Moses Gunn – Cairon
– Darryl Cooksey – Bastian's bully #1
– Drum Garrett – Bastian's bully #2
– Nicholas Gilbert – Bastian's bully #3

🔴 SUBSCRIBE TO OUR CHANNEL!
https://www.youtube.com/retrogralniapl?sub_confirmation=1

🔴 RG DISCORD SERVER

Discord

✅ FACEBOOK:
http://www.FB.com/RetroGralnia

✅ WEBSITE:
https://retrogralnia.pl

✅ MUZEUM GRY I KOMPUTERY MINIONEJ ERY (Games and Computers of a Bygone Era Museum):

Muzeum Gry i Komputery

🎵 Background music:
https://youtube.com/c/momentvm

#RetroSprzęt #RetroGaming #TheGameIsNotOver

#1 #2 #3 #RetroGaming #TheGameIsNotOver