@cwebber precisely that!
A #shitposting program is anything but #reproducible, and I want #ReproducibleBuilds for #auditability, #security and #transparency (a minimal verification sketch follows below).
- That's the whole reason I do @OS1337: To have something so fundamentally simple and compact that it is (at least in theory - at some point) financially feasible to crowdfund complete code audits of the entire system.
- I don't want people to trust me blindly, but to earn trust in the few things I code.
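The verification behind #ReproducibleBuilds is simple enough to sketch: build the same source twice on independent machines and require bit-identical artifacts. A minimal check in Python, with hypothetical artifact paths:

```python
# Minimal sketch: verify that two independently produced builds of the same
# source are bit-identical, which is the core promise of reproducible builds.
# The artifact paths passed on the command line are hypothetical placeholders.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    a, b = sys.argv[1], sys.argv[2]   # e.g. builder-A/os.img builder-B/os.img
    da, db = sha256_of(a), sha256_of(b)
    print(f"{da}  {a}")
    print(f"{db}  {b}")
    sys.exit(0 if da == db else 1)    # non-zero exit flags a non-reproducible build
```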
That's why I treat any "#AI" / #AIslop the same way @dolphin treats any leaks from Nintendo:

We need to talk about that Massive Nintendo Leak | MVG (YouTube)

@bschorr @thunderbird yeah, that's something that only works in systems that allow you to do that, which isn't even desirable in most cases - and in some jurisdictions like #Germany it would not even be legal due to #Auditability requirements per law.
🔧 Why You Need to Centralize USB Management Using USB over IP
Centralize USB device management with USBManager Server for enhanced visibility and control. Replace outdated "Dongle Room" practices with digital, real-time management, simplifying access and tracking. Enjoy full auditability with detailed logs and access tracking, ensuring security and compliance.
Learn more 👉 https://usbmanager.net/why-you-need-to-centralize-usb-management-using-usb-over-ip/
#USBoverIP #DeviceManagement #USBManagerServer #Auditability #Security
🧠 New paper
The Grammar of Objectivity
Language models simulate neutrality not by removing bias, but by formalizing it.
🔍 Based on 1,000 LLM outputs (medical/legal, 2019–2024)
⚠️ 64 % of medical and 57 % of legal texts flagged
🔗 Read / download:
Zenodo: https://doi.org/10.5281/zenodo.15729518
SSRN: https://ssrn.com/abstract=5319520
Neutrality is no longer a meaning. It’s a structure.
#AI #LLM #Objectivity #Syntax #CriticalCode #Auditability #LLMTransparency #GrammarOfPower #agustinvStartari #social
The Grammar of Objectivity: Formal Mechanisms for the Illusion of Neutrality in Language Models
Abstract: Simulated neutrality in generative models produces tangible harms (ranging from erroneous treatments in clinical reports to rulings with no legal basis) by projecting impartiality without evidence. This study explains how Large Language Models (LLMs) and logic-based systems achieve simulated neutrality through form, not meaning: passive voice, abstract nouns and suppressed agents mask responsibility while asserting authority. A balanced corpus of 1,000 model outputs was analysed: 600 medical texts from PubMed (2019-2024) and 400 legal summaries from Westlaw (2020-2024). Standard syntactic parsing tools identified structures linked to authority simulation. Example: a 2022 oncology note states “Treatment is advised” with no cited trial; a 2021 immigration decision reads “It was determined” without precedent. Two audit metrics are introduced: the agency score (share of clauses naming an agent) and the reference score (proportion of authoritative claims with verifiable sources). Outputs scoring below 0.30 on either metric are labelled high-risk; 64 % of medical and 57 % of legal texts met this condition. The framework runs in under 0.1 s per 500-token output on a standard CPU, enabling real-time deployment. Quantifying this lack of syntactic clarity offers a practical layer of oversight for safety-critical applications. This work is also published, with DOI, on Figshare (https://doi.org/10.6084/m9.figshare.29390885) and on SSRN (in process).
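The two audit metrics are concrete enough to sketch. Below is a minimal illustration in Python, assuming spaCy's small English model as a stand-in for the paper's unnamed "standard syntactic parsing tools"; the 0.30 high-risk threshold comes from the abstract, while the authority-verb list, the citation regex and all function names are hypothetical, not the authors' implementation.

```python
# Minimal sketch of the paper's agency and reference scores, using spaCy as a
# stand-in parser. Illustrative assumptions only; not the authors' code.
import re
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

# Crude stand-in for a "verifiable source": a DOI, a PMID, or a bracketed citation.
CITATION = re.compile(r"(doi\.org/|PMID[: ]?\d+|\[\d+\])", re.IGNORECASE)

# Hypothetical list of verbs treated as marking an authoritative claim.
AUTHORITY = {"advise", "recommend", "determine", "conclude", "indicate"}

def agency_score(text: str) -> float:
    """Share of clauses whose main verb has an explicit agent (nsubj or by-agent)."""
    doc = nlp(text)
    clauses = agented = 0
    for token in doc:
        if token.pos_ == "VERB":
            clauses += 1
            deps = {child.dep_ for child in token.children}
            if "nsubj" in deps or "agent" in deps:  # passives without "by X" fail here
                agented += 1
    return agented / clauses if clauses else 0.0

def reference_score(text: str) -> float:
    """Proportion of authoritative claims backed by a verifiable source."""
    doc = nlp(text)
    claims = referenced = 0
    for sent in doc.sents:
        if any(t.lemma_ in AUTHORITY for t in sent):
            claims += 1
            if CITATION.search(sent.text):
                referenced += 1
    return referenced / claims if claims else 1.0  # no claims: nothing to reference

def high_risk(text: str, threshold: float = 0.30) -> bool:
    """Flag an output as high-risk if either metric falls below the threshold."""
    return agency_score(text) < threshold or reference_score(text) < threshold

print(high_risk("Treatment is advised."))  # True: no agent named, no cited trial
```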
🚨 New academic publication:
The Grammar of Objectivity – Agustin V. Startari
🧠 How language models simulate neutrality without source or justification.
🔍 Structural audit on 1,000 LLM outputs
⚙️ INS: Simulated Neutrality Index
📎 DOI: https://doi.org/10.5281/zenodo.15729518
#AI #LLM #Epistemology #AIethics #Auditability #GrammarsOfPower
🚨 New academic article by Agustín V. Startari:
The Grammar of Objectivity: Formal Mechanisms for the Illusion of Neutrality in Language Models
🔍 Focus: How LLMs use syntax to simulate neutrality without epistemic grounding.
📊 Introduces the Simulated Neutrality Index (INS), based on 1,000 model outputs.
📁 Open access: https://doi.org/10.5281/zenodo.15729518
#LLM #AIethics #SyntacticAuthority #Auditability #Humanities #Epistemology
@TheQuinbox *nods in agreement* To me, "#AI" coding ruins the #readability, #maintainability and #auditability of the #sourcecode, and I do require those.
@jameskoole #Germany in fact #banned #VotingMachines because they violate #transparency and #auditability requirements, as in 'everyone who is eligible to vote must be able to verify the election procedures from start to finish without relying on external help or trusting anyone'…
But if all (or most) CPUs are FPGAs, how does one bootstrap and assume the payload is not malicious?
Programming with discrete electronics and punched tape would be a human-auditable (if tedious) way of bootstrapping.
A minimum viable target & programs for it to bootstrap everything else would be needed (a toy sketch follows this post).
I consider this analogous to the
#Guix bootstrap seed endeavor.
#FPGA #Bootstrap #Bootstrapping #CPU #Hardware #Security #Auditability
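For a feel of what that minimum viable target could be, here is a toy sketch of a hex0-style loader, the same idea the #Guix / stage0 bootstrap seed starts from. Everything below is illustrative: the real seed is a few hundred bytes of hand-auditable machine code, while this Python version only mimics its observable behaviour (read whitespace-separated ASCII hex with ';' comments, emit the raw bytes of the next stage).

```python
# Toy sketch of a hex0-style bootstrap loader: translate human-auditable
# ASCII hex into the raw binary of the next stage. The real stage0 seed is
# machine code small enough to key in and verify by hand; this only mimics
# its observable behaviour.
import sys

def hex0(source: str) -> bytes:
    out = bytearray()
    for line in source.splitlines():
        line = line.split(";", 1)[0]      # strip ';' comments
        for token in line.split():        # each token is one byte, two hex digits
            out.append(int(token, 16))
    return bytes(out)

if __name__ == "__main__":
    # Usage: python hex0.py stage1.hex0 > stage1.bin  (file names hypothetical)
    with open(sys.argv[1]) as f:
        sys.stdout.buffer.write(hex0(f.read()))
```

Each stage built this way can be audited against its hex listing before it is trusted to build the next, larger stage.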