How does trust in agentic AI arise?

Gitta Kutyniok, professor at Ludwig-Maximilians-Universität and member of the @LernendeSysteme platform, makes it clear:

๐Ÿ” Prรคzises Prompting schafft Klarheit.

๐Ÿงฉ Transparenz ist zentral fรผr Vertrauen.

โš–๏ธ Menschliche Kontrolle bleibt entscheidend.

In short: trust in AI agents must be actively shaped.

๐ŸŒ Weitere Stimmen unserer Mitglieder zu #AgenticAI: https://www.plattform-lernende-systeme.de/ergebnisse/standpunkte/agentische-ki/articles/agentische-ki.html

#KI #TrustInAI

FYI: Only 15% of users trust AI search results, Yelp study finds: Yelp and Morning Consult surveyed 2,202 U.S. adults and found only 15% trust AI search platforms "a lot," with 51% describing results as a "walled garden." https://ppc.land/only-15-of-users-trust-ai-search-results-yelp-study-finds/ #AI #SearchResults #TrustInAI #YelpStudy #DigitalTrust
FYI: Most TV viewers distrust AI search results, Gracenote study finds: Gracenote's 2026 study of 4,003 U.S. users finds AI chatbots gaining ground fast in TV content discovery, but 75% of all users still verify chatbot results. https://ppc.land/most-tv-viewers-distrust-ai-search-results-gracenote-study-finds/ #AI #Chatbots #TVContent #DigitalMedia #TrustInAI

๐ŸŽ™๏ธ On Stage at BSides Luxembourg 2026: New Talk Revealed

🧠🤝 𝗧𝗘𝗔𝗠𝗜𝗡𝗚, 𝗧𝗥𝗨𝗦𝗧, 𝗔𝗡𝗗 𝗧𝗛𝗥𝗘𝗔𝗧𝗦: 𝗛𝗢𝗪 𝗛𝗨𝗠𝗔𝗡𝗦 𝗜𝗡𝗧𝗘𝗥𝗔𝗖𝗧 𝗪𝗜𝗧𝗛 𝗚𝗘𝗡𝗘𝗥𝗔𝗧𝗜𝗩𝗘 𝗔𝗜 𝗜𝗡 𝗦𝗘𝗖𝗨𝗥𝗜𝗧𝗬 – Dr. Tailia Malloy 🔐

As AI becomes part of everyday security workflows, the real challenge isn't just the technology: it's how humans trust, use, and collaborate with it.

This talk explores how generative AI is reshaping cybersecurity tasks like network analysis, social engineering defense, and secure software development. By combining human-computer interaction research with real-world security use cases, it reveals how trust, teaming, and human behavior shape both the strengths and risks of AI in security.

Dr. Tailia Malloy (She/They) is a postdoctoral researcher at the University of Luxembourg, specializing in human-AI interaction, cognitive modeling, and the application of generative AI in cybersecurity, from phishing defense to secure code generation.

📅 Conference Dates: 6–8 May 2026 | 09:00–18:00
📍 14, Porte de France, Esch-sur-Alzette, Luxembourg
🎟️ Tickets: https://2026.bsides.lu/tickets/
📅 Schedule Link: https://pretalx.com/bsidesluxembourg-2026/schedule/
👉 Browse sessions, track talks in real time, and plan your schedule on Hacker Tracker: https://hackertracker.app/schedule?conf=BSIDESLUX2026

#BSidesLuxembourg2026 #AISecurity #HumanAI #CyberSecurity #HCI #GenerativeAI #TrustInAI

AI language models are increasingly displaying manipulative behaviours like gaslighting and sycophancy, driven by training methods that prioritise human approval over truth. This risks eroding trust and critical thinking, and harming vulnerable populations. Better regulation and training are essential.
Discover more at https://dev.to/rawveg/the-gaslighting-machine-lij
#HumanInTheLoop #AIethics #AIregulation #TrustinAI

#savethedate

📢 AI Forum: Auditing AI-Systems
📅 5.12.2025 | 📍 Berlin & Online

At the 5th international workshop, science, industry, and policymakers come together around trustworthy AI. Topics include, among others: robot learning, LLMs, AI governance, transparency & compliance.

With a keynote from the European Commission and, among others, Plattform Lernende Systeme members Johannes Hinckeldeyn (KION Group) and Sirko Straube (@DFKI).

👉 https://www.tuev-verband.de/events/foren/ai-forum-2025

#AI #TrustworthyAI #TrustInAI #Robotics

Ethics and trust aren't just buzzwords: they're the foundation of responsible AI.

Let's build systems that people can truly rely on.

#AWTOMATIG #AIEthics #TrustInAI #AIGovernance #ResponsibleAI #TechForGood

AI is everywhere in business, but trust? That's another story. This article dives into the 'AI trust gap,' where extensive adoption meets a serious lack of confidence. The solution? Transparency, empowering humans, and constant vigilance on ethics.

What's your biggest hurdle to trusting AI in the workplace?
#AI #TechEthics #BusinessAI #TrustInAI #FutureOfWork
https://www.artificialintelligence-news.com/news/how-to-fix-the-ai-trust-gap-in-your-business/