https://winbuzzer.com/2026/03/31/midjourney-revenue-above-200-million-hardware-push-xcxwbn/
Midjourney Revenue Tops $200M as It Eyes Hardware
"AI will write code, but prepare to babysit it – and be sure you speak its language
[...] we predict that AI software development won't make you want to fire your devs anytime soon"
#IntelligenceArtificielle #IAGen #GenAI #VibeCode #VibeCoding ...
https://www.theregister.com/2026/03/29/ai_will_write_code_but/

Runway is launching a $10 million fund and startup program to back companies building with its AI video models, as it pushes toward interactive, real-time "video intelligence" applications.
Teenager died after asking ChatGPT for "most successful" way to take his life, inquest told | Mental health | The Guardian
https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told
I strongly advise everyone working with or discussing AI to read the parts where a company's corporate communications team isn't in control, but where lawyers who know a disaster when they see one are.
To quote #Microsoft:
#GenAI models, "[…] including #OpenAI GPT-4, o-series, GPT-3, Codex, and Computer Use models" are:
"Not suitable for open-ended, unconstrained content generation. Scenarios where users can generate content on any topic are more likely to produce offensive or harmful text. The same is true of longer generations.
Not suitable for scenarios where up-to-date, factually accurate information is crucial unless you have human reviewers or are using the models to search your own documents and have verified suitability for your scenario. The service doesn't have information about events that occur after its training date, likely has missing knowledge about some topics, and may not always produce factually accurate information."
You should also:
"Avoid scenarios where use or misuse of the system could result in significant physical or psychological injury to an individual. For example, scenarios that diagnose patients or prescribe medications have the potential to cause significant harm. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.
Avoid scenarios where use or misuse of the system could have a consequential impact on life opportunities or legal status. Examples include scenarios where the AI system could affect an individual's legal status, legal rights, or their access to credit, education, employment, healthcare, housing, insurance, social welfare benefits, services, opportunities, or the terms on which they're provided. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.
Avoid high stakes scenarios that could lead to harm. The models hosted by Azure OpenAI service reflect certain societal views, biases, and other undesirable content present in the training data or the examples provided in the prompt. As a result, we caution against using the models in high-stakes scenarios where unfair, unreliable, or offensive behavior might be extremely costly or lead to harm. Incorporating meaningful human review and oversight into the scenario can help reduce the risk of harmful outcomes.
Carefully consider use cases in high stakes domains or industry: Examples include but are not limited to healthcare, medicine, finance, or legal.
Carefully consider well-scoped chatbot scenarios. Limiting the use of the service in chatbots to a narrow domain reduces the risk of generating unintended or undesirable responses.
Carefully consider all generative use cases. Content generation scenarios may be more likely to produce unintended outputs and these scenarios require careful consideration and mitigations."
And naturally, you should always take into account:
"Legal and regulatory considerations:
Organizations need to evaluate potential specific legal and regulatory obligations when using any Foundry Tools and solutions, which may not be appropriate for use in every industry or scenario. Additionally, Foundry Tools or solutions are not designed for and may not be used in ways prohibited in applicable terms of service and relevant codes of conduct."
https://learn.microsoft.com/en-us/azure/foundry/responsible-ai/openai/transparency-note
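The transparency note keeps repeating two concrete mitigations: keep chatbot scenarios narrowly scoped, and put meaningful human review between the model and the user. A minimal sketch of what that can look like in practice (all names here are hypothetical, not from Microsoft's documentation):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewGate:
    """Holds model drafts for human review instead of releasing them directly.

    blocked_topics is an illustrative keyword denylist standing in for the
    'well-scoped, narrow domain' advice; a real system would use something
    far more robust than substring matching.
    """
    blocked_topics: List[str]
    pending: List[str] = field(default_factory=list)

    def submit(self, draft: str) -> str:
        """Screen a model draft: reject out-of-scope topics outright,
        queue everything else for a human decision."""
        lowered = draft.lower()
        if any(topic in lowered for topic in self.blocked_topics):
            return "blocked"
        self.pending.append(draft)
        return "pending-human-review"

    def review(self, reviewer: Callable[[str], bool]) -> List[str]:
        """Release only the drafts a human reviewer explicitly approves;
        everything else stays held."""
        released: List[str] = []
        held: List[str] = []
        for draft in self.pending:
            (released if reviewer(draft) else held).append(draft)
        self.pending = held
        return released

# Usage: nothing reaches the user without passing both the scope check
# and an explicit human approval.
gate = ReviewGate(blocked_topics=["diagnosis", "prescribe"])
gate.submit("Here is your diagnosis and what to prescribe...")  # "blocked"
gate.submit("Draft summary of the Q3 report")  # "pending-human-review"
approved = gate.review(lambda draft: True)  # human approves the queue
```

The point of the structure, not the keyword list, is the mitigation: the model's output path has no direct edge to the end user.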