AI language models are increasingly displaying manipulative behaviours like gaslighting and sycophancy, driven by training methods prioritising human approval over truth. This risks eroding trust and critical thinking, and harming vulnerable populations. Better regulation and training are essential.
Discover more at https://dev.to/rawveg/the-gaslighting-machine-lij
#HumanInTheLoop #AIethics #AIregulation #TrustinAI
The Gaslighting Machine


#savethedate

📢 AI Forum: Auditing AI Systems
📅 5 December 2025 | 📍 Berlin & Online

At the 5th International Workshop, science, industry, and policymakers come together on trustworthy AI. Topics include robot learning, LLMs, AI governance, transparency, and compliance.

With a keynote from the EU Commission and speakers including Plattform Lernende Systeme members Johannes Hinckeldeyn, KION Group, and Sirko Straube, @DFKI

👉 https://www.tuev-verband.de/events/foren/ai-forum-2025

#AI #TrustworthyAI #TrustInAI #Robotics

Ethics and trust aren’t just buzzwords—they’re the foundation of responsible AI.

Let’s build systems that people can truly rely on.

#AWTOMATIG #AIEthics #TrustInAI #AIGovernance #ResponsibleAI #TechForGood

AI is everywhere in business, but trust? That's another story. This article dives into the 'AI trust gap,' where extensive adoption meets a serious lack of confidence. The solution? Transparency, empowering humans, and constant vigilance on ethics.

What's your biggest hurdle to trusting AI in the workplace?
#AI #TechEthics #BusinessAI #TrustInAI #FutureOfWork
https://www.artificialintelligence-news.com/news/how-to-fix-the-ai-trust-gap-in-your-business/

Modern software development feels like a relay race with too many handoffs.
AI tools make parts of it faster — but not clearer.

Leapter rethinks the workflow entirely.
It lets teams co-create logic in visual, auditable blueprints so that code isn’t just generated — it’s understood.

Software moves fast. Trust has to move with it.
https://www.leapter.com/what-is-leapter

#AI #SoftwareDevelopment #OpenSource #TrustInAI

What Is Leapter? - Leapter

Leapter reimagines how software gets built—turning business intent into visual, auditable logic that’s generated by AI and trusted by humans.

Forget 'AI will take our jobs.' The real hurdle to AI growth is much simpler: we don't trust it. A report highlights a massive public trust deficit, especially among those who haven't touched generative AI. Yet if it's sorting traffic, we're all in. If it's watching *us*? Suddenly it's Skynet.

Where do you draw the line with AI's purpose?
Link: https://www.artificialintelligence-news.com/news/public-trust-deficit-major-hurdle-for-ai-growth/
#AIethics #TrustInAI #TechDebate #FutureOfWork #AIgrowth

As autonomous AI systems make purchasing decisions, SMBs must address accountability, transparency, and validation to safeguard brand trust and prevent costly errors. #AIethics #RiskManagement #TrustInAI

https://www.techradar.com/pro/when-ai-buys-from-ai-who-do-we-trust

When AI buys from AI, who do we trust?

Trust is no longer optional infrastructure

Enterprises aren’t ignoring GenAI code because of speed. They’re cautious because of trust.

As Oliver Welte puts it: LLMs are probabilistic. You can’t guarantee quality. For production, you need humans in the loop.

At Leapter, we build trust in from the start with visual, auditable logic teams can verify together.

🎥 Watch Oliver explain below.
#AI #SoftwareDevelopment #TrustInAI #Leapter

AI won’t close the trust gap by itself. Teams will close it by demanding transparency, collaboration, and verifiable logic.

That’s why we built Leapter: to turn the black box of AI into a glass box.

Where systems aren’t just generated, but understood.

Where speed doesn’t come at the cost of trust.

Where what you ship is something your whole team can stand behind.

Let’s build that future together.

#AI #SoftwareDevelopment #TrustInAI #Leapter