Ring has launched an app store for its 100 million cameras, leveraging AI to expand beyond home security into elder care, workforce analytics, and rental management. The new platform lets developers build apps that tap into Ring's ecosystem, enabled by AI technology that can see and hear in the real world. https://techcrunch.com/2026/03/31/ring-app-store-bets-on-ai-to-go-beyond-home-security/ #AIagent #AI #GenAI #AIInfrastructure #Ring
With its new app store, Ring bets on AI to go beyond home security | TechCrunch

Ring's app store will allow the company to target broader use cases beyond security, like elder care or business needs.

TechCrunch

"AI will write code, but prepare to babysit it – and be sure you speak its language
[...] we predict that AI software development won't make you want to fire your devs anytime soon"

#IntelligenceArtificielle #IAGen #GenAI #VibeCode #VibeCoding ...

https://www.theregister.com/2026/03/29/ai_will_write_code_but/

AI will write code, but prepare to babysit it - and be sure you speak its language

kettle: This week on the Kettle, we predict that AI software development won't make you want to fire your devs anytime soon

The Register
Pair programming in the age of #GenAI: agents write the code while humans stare at the screen, waiting for miracles and understanding nothing.
#Claude Code's source code has been leaked via a map file in their NPM registry https://xcancel.com/Fried_rice/status/2038894956459290963 #ClaudeAI #LLM #GenAI #AI #zeitgeist
Chaofan Shou (@Fried_rice)

Claude code source code has been leaked via a map file in their npm registry! Code: https://pub-aea8527898604c1bbb12468b1581d95e.r2.dev/src.zip

Nitter
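For context on how a leak like this happens: bundlers often emit a `.map` file next to the minified JavaScript, and source maps may carry a `sourcesContent` array embedding the full, unminified text of every input file. If that `.map` file gets published to npm, anyone with the tarball can recover the sources. A minimal sketch of that recovery (file names and the `extract_sources` helper are hypothetical, not from the leaked package):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Recover original source files embedded in a JavaScript source map.

    Per the source map v3 format, `sources` lists input file names and
    `sourcesContent` (optional) holds their full original text — which is
    all a leak like this needs.
    """
    source_map = json.loads(Path(map_path).read_text())
    written = []
    for name, content in zip(source_map.get("sources", []),
                             source_map.get("sourcesContent") or []):
        if content is None:
            continue  # some entries legitimately omit embedded content
        # Drop bundler URL prefixes like "webpack://pkg/./src/foo.ts"
        rel = name.split("://")[-1].lstrip("./")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

The straightforward fix on the publisher's side is to exclude `.map` files from the published package (or emit maps without `sourcesContent`).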
Runway launches a $10 million fund and startup program to support companies building with its AI video models. The initiative aims to accelerate interactive, real-time video intelligence applications. https://techcrunch.com/2026/03/31/exclusive-runway-launches-10m-fund-builders-program-to-support-early-stage-ai-startups/ #AIagent #AI #GenAI #AgenticAI #Runway
Exclusive: Runway launches $10M fund, Builders program to support early-stage AI startups | TechCrunch

Runway is launching a $10 million fund and startup program to back companies building with its AI video models, as it pushes toward interactive, real-time “video intelligence” applications.

TechCrunch

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told | The Guardian
https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told

#AI #GenAI #ChatGPT #OpenAI

Teenager died after asking ChatGPT for ‘most successful’ way to take his life, inquest told

Luca Cella Walker asked the chatbot for the best way for someone to kill themselves on a railway line before his death

The Guardian

I really advise everyone working with or discussing AI to read the parts where a company’s corporate communications team isn’t in control, but rather the lawyers, who know a disaster when they see one.

To quote #Microsoft:

#GenAI, “[…] including #OpenAI GPT-4, o-series, GPT-3, Codex, and Computer Use models” are:

“Not suitable for open-ended, unconstrained content generation. Scenarios where users can generate content on any topic are more likely to produce offensive or harmful text. The same is true of longer generations.

Not suitable for scenarios where up-to-date, factually accurate information is crucial unless you have human reviewers or are using the models to search your own documents and have verified suitability for your scenario. The service doesn't have information about events that occur after its training date, likely has missing knowledge about some topics, and may not always produce factually accurate information.”

You should also:

“Avoid scenarios where use or misuse of the system could result in significant physical or psychological injury to an individual. For example, scenarios that diagnose patients or prescribe medications have the potential to cause significant harm. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.

Avoid scenarios where use or misuse of the system could have a consequential impact on life opportunities or legal status. Examples include scenarios where the AI system could affect an individual's legal status, legal rights, or their access to credit, education, employment, healthcare, housing, insurance, social welfare benefits, services, opportunities, or the terms on which they're provided. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.

Avoid high stakes scenarios that could lead to harm. The models hosted by Azure OpenAI service reflect certain societal views, biases, and other undesirable content present in the training data or the examples provided in the prompt. As a result, we caution against using the models in high-stakes scenarios where unfair, unreliable, or offensive behavior might be extremely costly or lead to harm. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.

Carefully consider use cases in high stakes domains or industry: Examples include but are not limited to healthcare, medicine, finance, or legal.

Carefully consider well-scoped chatbot scenarios. Limiting the use of the service in chatbots to a narrow domain reduces the risk of generating unintended or undesirable responses.

Carefully consider all generative use cases. Content generation scenarios may be more likely to produce unintended outputs and these scenarios require careful consideration and mitigations.”

And naturally, you should always take into account:

“Legal and regulatory considerations:

Organizations need to evaluate potential specific legal and regulatory obligations when using any Foundry Tools and solutions, which may not be appropriate for use in every industry or scenario. Additionally, Foundry Tools or solutions are not designed for and may not be used in ways prohibited in applicable terms of service and relevant codes of conduct.”

https://learn.microsoft.com/en-us/azure/foundry/responsible-ai/openai/transparency-note

Transparency Note for Azure OpenAI in Microsoft Foundry Models - Microsoft Foundry

Transparency Note for Azure OpenAI

Oh just kill me.

</Closes email forever>

#GenAi #Ai #TechJobs

Anthropic's claim that AI could perform 80% of tasks across most job categories is based on outdated assumptions about future LLM capabilities, not current reality. A new analysis examines how the company's 'theoretical capability' numbers are actually speculative guesses about AI's potential to improve productivity rather than predictions of replacement. https://arstechnica.com/ai/2026/03/how-did-anthropic-measure-ais-theoretical-capabilities-in-the-job-market/ #AIagent #AI #GenAI #WorkforceDisruption #Anthropic
How did Anthropic measure AI's "theoretical capabilities" in the job market?

A 2023 study made a lot of assumptions about future "anticipated LLM-powered software."

Ars Technica