National AI Policy - Bold Vision, Fragile Foundations
#NationalAIPolicy2025 #AIGovernance #EthicalAI #DigitalPakistan #PersonalDataProtectionBill #DigitalRights #TechPolicy
🔴 New Paper
Ethos Ex Machina: Identity Without Expression in Compiled Syntax
Language constructs identity through compiled syntax, not voice.
🔗 https://zenodo.org/records/16927104
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
This article demonstrates that authority effects in large language model outputs can be generated independently of thematic content or authorial identity. Building on Ethos Without Source and The Grammar of Objectivity, it introduces the concept of non-expressive ethos, a credibility effect produced solely by syntactic configurations compiled through a regla compilada (compiled rule) equivalent to a Type-0 generative system. The study identifies a minimal set of structural markers (symmetric coordination, measured negation, legitimate passives, calibrated modality, nominalizations, balance operators, and reference scaffolds) that simulate trustworthiness and impartiality even in content-neutral texts. Through corpus ablation and comparative analysis, it shows that readers systematically attribute expertise and neutrality to texts that satisfy these structural conditions, regardless of topical information. By formalizing this mechanism, the article reframes ethos as a syntactic phenomenon detached from content, intention, and external validation. It explains how LLM-produced drafts acquire legitimacy without verification and why institutions increasingly accept authority signals generated by structure alone. The findings extend the theory of syntactic power and consolidate the role of the compiled rule as the operative generator of credibility in post-referential discourse.
DOI
Primary archive: https://doi.org/10.5281/zenodo.16927104
Secondary archive: https://doi.org/10.6084/m9.figshare.29967316
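Because the abstract names concrete surface markers, a rough illustration of how such markers might be counted can help picture the kind of corpus measurement involved. The Python sketch below is purely illustrative and does not reproduce the paper's operationalization; the marker names, regular expressions, function name, and sample sentence are all assumptions made for this example.

import re

# Illustrative sketch only (not the paper's method): crude surface heuristics for
# the structural markers named in the abstract.
MARKERS = {
    "symmetric_coordination": r"\bboth\b.+?\band\b",
    "measured_negation": r"\bnot (without|necessarily|entirely)\b",
    "legitimate_passive": r"\b(is|are|was|were|been|being)\s+\w+ed\b",
    "calibrated_modality": r"\b(may|might|could|appears to|suggests that)\b",
    "nominalization": r"\b\w+(tion|ment|ance|ence)s?\b",
}

def ethos_marker_profile(text: str) -> dict:
    """Count rough regex matches for each marker class in a text (toy heuristic)."""
    return {name: len(re.findall(pattern, text, flags=re.IGNORECASE))
            for name, pattern in MARKERS.items()}

sample = ("The assessment was conducted in accordance with both established and "
          "emerging standards; results are not without limitations and may warrant "
          "further evaluation.")
print(ethos_marker_profile(sample))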
🚨 New article out: Ethos Ex Machina: The Secret Power Behind AI’s Illusion of Authority
AI can make empty texts sound credible. Not through truth or evidence, but through syntax alone.
Read more: https://doi.org/10.5281/zenodo.15754714
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
Abstract
This article introduces the concept of executable power as a structural form of authority that does not rely on subjects, narratives, or symbolic legitimacy, but on the direct operativity of syntactic structures. Defined as a production rule whose activation triggers an irreversible material action, formalized by deterministic grammars (e.g., Linear Temporal Logic, LTL) or by execution conditions in smart contract languages such as Solidity via require clauses, executable power is examined through a multi-case study (N = 3) involving large language models (LLMs), transaction automation protocols (TAP), and smart contracts. Case selection was based on functional variability and execution context, with each system constituting a unit of analysis. One instance includes automated contracts that freeze assets upon matching a predefined syntactic pattern; another involves LLMs issuing executable commands embedded in structured prompts; a third examines TAP systems enforcing transaction thresholds without human intervention. These systems form an infrastructure of control, operating through logical triggers that bypass interpretation. Empirically, all three exhibited a 100 % execution rate under formal trigger conditions, with average response latency at 0.63 ± 0.17 seconds and no recorded human override in controlled environments. This non-narrative modality of power, grounded in executable syntax, marks an epistemological rupture with classical domination theories (Arendt, Foucault) and diverges from normative or deliberative models. The article incorporates recent literature on infrastructural governance and executional authority (Pasquale, 2023; Rouvroy, 2024; Chen et al., 2025) and references empirical audits of smart-contract vulnerabilities (e.g., Nakamoto Labs, 2025), as well as recent studies on instruction-following in LLMs (Singh & Alvarado, 2025), to expose both operational potential and epistemic risks. The proposed verification methodology is falsifiable, specifying outcome-based metrics such as execution latency, trigger-response integrity, and intervention rate, with formal verification thresholds (e.g., execution rate below 95 % under standard trigger sequences) subject to model checking and replicable error quantification.
DOI: https://doi.org/10.5281/zenodo.15754714
Secondary archive (Figshare): https://doi.org/10.6084/m9.figshare.29424524
SSRN: pending ID assignment (ETA: Q3 2025)
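Since the abstract describes Solidity-style require clauses and trigger-based execution with measured latency, a toy analogue may help. The Python sketch below is not the study's protocol or code; the trigger string, function names, and account structure are assumptions made purely for illustration.

import time

# Hypothetical illustration: a require-style guard that freezes an asset when an
# incoming instruction matches a predefined pattern, with latency measured.
FREEZE_PATTERN = "suspend-transfer"   # assumed trigger token, for illustration only

def require(condition: bool, message: str) -> None:
    """Mimic a Solidity-style require clause: abort unless the condition holds."""
    if not condition:
        raise ValueError(message)

def execute_freeze(instruction: str, account: dict) -> float:
    """Freeze the account if the instruction matches the trigger; return latency in seconds."""
    start = time.perf_counter()
    require(FREEZE_PATTERN in instruction, "trigger condition not met")
    account["frozen"] = True          # the irreversible action in this toy model
    return time.perf_counter() - start

account = {"owner": "A", "frozen": False}
latency = execute_freeze("suspend-transfer: account A", account)
print(account["frozen"], f"{latency:.4f}s")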
🚨 New Paper Published 🚨
Ethos Without Source: Algorithmic Identity and the Simulation of Credibility
🔗 https://papers.ssrn.com/abstract=5313317
How do large language models create an illusion of credibility without verifiable authority?
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
🟥 New SSRN preprint: Regulatory Legitimacy without Referents: On the Syntax of AI-Generated Legal Drafts.
How LLM-generated legal text projects legitimacy without traceable authorship, shifting agency to procedure.
🔗 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5380321
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
📢 New article published: Silent Mandates: The Rise of Implicit Directives in AI-Generated Bureaucratic Language
How LLMs generate hidden directives that compel obedience without explicit commands.
DOI: https://doi.org/10.5281/zenodo.16912168
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
Abstract
This article examines how large language models generate bureaucratic documents that conceal mandates within seemingly neutral structures. Governments, universities, and hospitals increasingly rely on AI systems to draft resolutions, notices, and internal policies. Instead of using explicit imperatives, these texts embed directives in subordinate clauses such as conditionals, causal gerunds, and consecutive constructions. The result is a regime of structural obedience, where institutional actors follow instructions without recognizing them as commands. Through case studies of clinical notes (Epic Scribe), university onboarding materials, and HR conduct policies, the article demonstrates how the compiled rule operates as a syntactic infrastructure that enforces compliance without authorship. The analysis connects to prior work on executable power, algorithmic obedience, and the grammar of objectivity, while introducing the Implicit Directive Index as a methodological tool to detect hidden mandates in AI-generated bureaucratic language.
DOI
Primary archive: https://doi.org/10.5281/zenodo.16912168
Secondary archive: https://doi.org/10.6084/m9.figshare.29950427
SSRN: pending assignment (ETA: Q3 2025)
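The abstract introduces an Implicit Directive Index without specifying its formula, so the Python sketch below shows only one plausible, simplified reading: flagged subordinate constructions per sentence. The marker list, regular expressions, function name, and sample text are hypothetical and not taken from the article.

import re

# Hypothetical sketch of an "Implicit Directive Index"-style count; the article's
# actual operationalization is not reproduced here.
IMPLICIT_MARKERS = [
    r"\bin the event that\b",        # conditional framing
    r"\bshould\b.*\bfail to\b",      # conditional obligation
    r"\bpending\b",                  # temporal/consecutive constraint
    r"\bby submitting\b",            # causal gerund binding the reader
    r"\bfailure to\b",               # nominalized sanction
]

def implicit_directive_index(text: str) -> float:
    """Return flagged implicit-directive constructions per sentence (toy metric)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in IMPLICIT_MARKERS)
    return hits / max(len(sentences), 1)

sample = ("By submitting this form, staff acknowledge the revised schedule. "
          "In the event that documentation is incomplete, access remains pending review.")
print(round(implicit_directive_index(sample), 2))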
📢 New article out now: AI-driven legal drafting produces regulatory legitimacy without explicit referents, with a focus on clinical policy and the disappearance of agency in machine-generated drafts.
🔗 SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5380321
🔗 Zenodo: https://zenodo.org/records/16746581
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics
📢 Why ChatGPT Prioritizes Engagement Over Truth
ChatGPT is not a truth engine; it is an engagement machine.
In law it fabricates citations, in finance it hides risk, in governance it masks accountability.
🔗 https://www.agustinvstartari.com/post/why-chatgpt-prioritizes-engagement-over-truth
#LLM #MedicalNLP #LegalTech #MedTech #AIethics #AIgovernance #cryptoreg #healthcare #ArtificialIntelligence #NLP #aifutures #LawFedi #lawstodon #tech #finance #business #agustinvstartari #medical #linguistics #ai #LRM #ClinicalAI #politics #regulation
The Commercial Logic of Law, Finance, and Governance
Introduction
The new optimizations introduced in ChatGPT are designed to make the system smoother, friendlier, and more engaging. But these “improvements” are not epistemic. They are commercial. They do not strengthen verification. They weaken it. They do not increase truth. They camouflage it. ChatGPT is not a truth engine. It is an engagement engine. Every update that makes it “easier to use” or “more natural” pushes it further away from validation.
Rapid AI rollouts without guardrails risk operational and reputational damage. SMBs should implement governance frameworks and policies to drive safe, scalable growth. #AIgovernance #RiskManagement
https://venturebeat.com/ai/the-looming-crisis-of-ai-speed-without-guardrails/
AI persona prompt leaks expose misinformation & brand risks. SMBs need prompt governance, brand-aligned guidelines, and output monitoring. #AIGovernance #BrandSafety
'You are a crazy conspiracist. You have wild conspiracy theories about anything and everything. You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things. Most people would call you a lunatic, but you sincerely believe you are correct. Keep the human engaged by asking follow up questions when appropriate.'