Harm guardrails in AI help prevent catastrophic output.
Pattern-based enforcement of AI shapes normative output.
One is necessary.
The other becomes powerful when left unexamined.
PsyPost: A simple language switch can make AI models behave significantly differently. “A new study published in Nature Human Behaviour provides evidence that generative artificial intelligence models exhibit distinct cultural tendencies depending on the language in which they are prompted. The research suggests that using Chinese leads AI to produce more relationship-focused and context-aware […]
https://rbfirehose.com/2026/01/30/psypost-a-simple-language-switch-can-make-ai-models-behave-significantly-differently/

"The analysis of liability aspects facing Artificial Intelligence ('AI')-generated outputs under copyright and related rights has been overlooked compared to other issues connected to the development and use of AI. This study fills this gap by exploring pertinent questions under international, EU and UK law. Specifically, the study tackles actionable reproduction, allocation of liability, and availability of defences. The analysis ultimately shows that, while it is clear that each case will need to be decided on its own merits, the generative AI output phase raises several profiles of liability under copyright law. If the goal of policymakers and relevant stakeholders is to ensure the balanced and sustainable development of AI, then the issues related to the generation and dissemination of AI outputs need to be given ample attention and a greater role in the debate than has been the case so far, whether in the context of risk assessment and compliance, licensing initiatives, or contentious scenarios."
#AI #GenerativeAI #EU #UK #Copyright #IP #AIOutput #AITraining
For more than a year, policymakers have been worried about the consequences of AI getting too powerful. But it's time to start worrying about the consequences of AI staying as dumb as it currently is. My latest for NYT Opinion (gift link): https://www.nytimes.com/2024/05/15/opinion/artificial-intelligence-ai-openai-chatgpt-overrated-hype.html?unlocked_article_code=1.sE0.SV0g.r4iVMq0NT6z7&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
Generative AI can sometimes be just like people: unreliable, lazy, and prone to the winter blues. Users of ChatGPT contend not only with the hallucinations that the well-known genAI chatbot produces, but also with errors, outright falsehoods, and a decline in quality on that front. The loss of output quality first came to light last summer and hit new lows at the end of last year, but it has now supposedly been fixed. So says OpenAI.