William Whitlow

8 Followers
20 Following
62 Posts
Computer scientist who, through the workings of life and discernment, ended up as a philosopher.
Website: https://www.williamcwhitlow.com
GitHub: https://github.com/wwhitlow

Regardless of expectations for AI systems, interpretability research seems promising for understanding the mathematical associations between concepts: breaking the model down to its most atomic representations. This paper explores some of those associations regarding trust. It would be interesting to see whether there's a correlation between the embeddings of human trust models, the persona vectors of various models, and the ability to jailbreak them.

https://arxiv.org/abs/2603.05839

#AI #LLM

Evaluating LLM Alignment With Human Trust Models

Trust plays a pivotal role in enabling effective cooperation, reducing uncertainty, and guiding decision-making in both human interactions and multi-agent systems. Despite its significance, there is limited understanding of how large language models (LLMs) internally conceptualize and reason about trust. This work presents a white-box analysis of trust representation in EleutherAI/gpt-j-6B, using contrastive prompting to generate embedding vectors within the activation space of the LLM for dyadic trust and related interpersonal relationship attributes. We first identified trust-related concepts from five established human trust models. We then determined a threshold for significant conceptual alignment by computing pairwise cosine similarities across 60 general emotional concepts. Then we measured the cosine similarities between the LLM's internal representation of trust and the derived trust-related concepts. Our results show that the internal trust representation of EleutherAI/gpt-j-6B aligns most closely with the Castelfranchi socio-cognitive model, followed by the Marsh Model. These findings indicate that LLMs encode socio-cognitive constructs in their activation space in ways that support meaningful comparative analyses, inform theories of social cognition, and support the design of human-AI collaborative systems.

arXiv.org
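A rough sketch of how the paper's comparison could work, assuming activation-difference vectors have already been extracted via contrastive prompting. To be clear about assumptions: the concept names, the random stand-in vectors, and the mean-plus-two-standard-deviations threshold rule are all my own illustrative choices, not the authors' code or their actual thresholding procedure.

```python
# Hedged sketch of the abstract's comparison method (not the authors' code):
# given contrastive embedding vectors for "trust" and related concepts,
# compute pairwise cosine similarities over a background set of concept
# vectors to derive a significance threshold, then flag concepts whose
# similarity to the trust vector exceeds it.
import math
import random

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

random.seed(0)
dim = 64  # stand-in for the model's hidden dimension

# Stand-ins for activation-space difference vectors obtained via
# contrastive prompting (e.g. a "trusting" vs. "distrusting" prompt pair).
trust = [random.gauss(0, 1) for _ in range(dim)]
concepts = {name: [random.gauss(0, 1) for _ in range(dim)]
            for name in ["competence", "willingness", "dependence"]}
background = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(60)]

# Assumed threshold rule: mean + 2*stdev of pairwise similarities
# across the 60 background concepts (the paper does not specify this).
pairs = [cosine(background[i], background[j])
         for i in range(len(background))
         for j in range(i + 1, len(background))]
mean = sum(pairs) / len(pairs)
std = math.sqrt(sum((p - mean) ** 2 for p in pairs) / len(pairs))
threshold = mean + 2 * std

aligned = {name: cosine(trust, vec) for name, vec in concepts.items()}
significant = [name for name, sim in aligned.items() if sim > threshold]
print(threshold, significant)
```

With real activations in place of the random vectors, the same loop would rank trust-model concepts by alignment, which is roughly the comparison the abstract describes.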

Interesting paper, updated just today, engaging Heidegger's philosophy with contemporary machine learning techniques. I hope to take more time to engage with it over the next couple of days. Still, it is encouraging to see such consideration connecting AGI concerns with philosophical principles, exploring how contemporary design principles lead more toward tool use than AGI.
https://arxiv.org/abs/2602.19028

#AI #ML #AGI #philosophy

The Metaphysics We Train: A Heideggerian Reading of Machine Learning

This paper offers a phenomenological reading of contemporary machine learning through Heideggerian concepts, aimed at enriching practitioners' reflexive understanding of their own practice. We argue that this philosophical lens reveals three insights invisible to purely technical analysis. First, the algorithmic Entwurf (projection) is distinctive in being automated, opaque, and emergent--a metaphysics that operates without explicit articulation or debate, crystallizing implicitly through gradient descent rather than theoretical argument. Second, even sophisticated technical advances remain within the regime of Gestell (Enframing), improving calculation without questioning the primacy of calculation itself. Third, AI's lack of existential structure, specifically the absence of Care (Sorge), is genuinely explanatory: it illuminates why AI systems have no internal resources for questioning their own optimization imperatives, and why they optimize without the anxiety (Angst) that signals, in human agents, the friction between calculative absorption and authentic existence. We conclude by exploring the pedagogical value of this perspective, arguing that data science education should cultivate not only technical competence but ontological literacy--the capacity to recognize what worldviews our tools enact and when calculation itself may be the wrong mode of engagement.

arXiv.org
Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

“This is definitely not about dogs,” senator says, urging a pause on Ring face scans.

Ars Technica

So ai.com is currently getting the hug of death from #superbowl traffic. However, a simple search suggests it is similar to moltbot, clawdbot, or moltbook. Given all the security problems those have produced, the Super Bowl publicity is concerning. Does anyone else have more information?

#ai

So Ring just admitted that they’re scanning everyone’s video feed with #AI systems. How is that not considered an absolute security vulnerability?

#superbowl #ring

RE: https://mastodon.social/@sandipb/116007933749532921

What to even say about this? It's the gig-worker, model-driven economy coming full circle. It's wild to consider that there is a need for this, and that nearly 10,000 people have signed up already. It seems very dehumanizing to reduce this interaction to a generic REST API. I suppose the one benefit is that it shows how much Uber, Lyft, Amazon, and many other companies have already done the same; the only real difference is that their API documentation is hidden behind an app or website.

#ai

2/
While working in industry I experienced two different sorts of teams. One cared about quality: every change required three approvals, and discussion was the norm. The other merely wanted to see a functional program; everyone worked in a silo with little growth.

The insight is that quality takes time. I also hear from friends on "move fast, break things" teams about the problems of features changing without notice. If you are moving too fast to document, you are moving too fast not to document.

1/
I've turned to vibe coding for some personal solo projects; sadly, I don't often have the time to write code anymore, and it is quick and dirty with dependencies. This paper does a good job of quantifying the potential future costs: as devs stop considering which dependencies they need and instead let AI agents install any and all of them, it will become difficult for new projects to get the support they need to go mainstream.

https://arxiv.org/abs/2601.15494
#vibe_coding #AI #LLM

Vibe Coding Kills Open Source

Generative AI is changing how software is produced and used. In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS), often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers. We study the equilibrium effects of vibe coding on the OSS ecosystem. We develop a model with endogenous entry and heterogeneous project quality in which OSS is a scalable input into producing more software. Users choose whether to use OSS directly or through vibe coding. Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns. When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid.

arXiv.org

Who are these eminent philosophers?

Anthropic describes this constitution as being written for Claude and "optimized for precision over accessibility." However, on a major philosophical claim there is a great deal of ambiguity about how to even evaluate it. "Eminent philosophers" is an appeal to authority; if they were named, it would be possible to evaluate their claims in context. As it stands, this is neither precise nor accessible.

#AI #LLM #Claude #philosophy #anthropic

Can someone clarify: in academia and industry, are LLM hallucinations understood as the result of overfitting, or simply as false positives?

I'm beginning to think that hallucinations are evidence of overfitting. It seems surprising that there are so few attempts to articulate the underlying cause of hallucinations. If the issue is overfitting, then increasing training time and dataset size may not be an appropriate solution to the problem.
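A toy analogy for the overfitting intuition (not an LLM, and not a claim about the actual mechanism): a model that memorizes its training set is the extreme case of overfitting, and on unseen inputs it returns a fluent, confident answer rather than admitting ignorance. All names and data here are made up for illustration.

```python
# Toy analogy: a "model" that perfectly memorizes its training data
# (the extreme of overfitting) and, for unseen queries, falls back on
# a shallow surface-similarity heuristic instead of saying "I don't know".
TRAIN = {
    "capital of France": "Paris",
    "capital of Japan": "Tokyo",
}

def memorizer(query):
    # Perfect on training data...
    if query in TRAIN:
        return TRAIN[query]
    # ...but for anything unseen, it picks the "nearest" memorized query
    # by a crude similarity measure and answers confidently anyway --
    # a hallucination-like failure mode.
    nearest = min(TRAIN, key=lambda k: abs(len(k) - len(query)))
    return TRAIN[nearest]

print(memorizer("capital of France"))   # memorized: "Paris"
print(memorizer("capital of Wakanda"))  # unseen: a confident wrong answer
```

The point of the analogy is only that memorization plus a pressure to always answer yields confident falsehoods on novel inputs; whether that is actually what drives LLM hallucinations is exactly the open question above.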

#AI #ML #llm