Employees Are Leaving OpenAI

Yesterday Mira Murati, OpenAI's chief technology officer, announced she is leaving the company after six and a half years, noting that the decision was difficult but came at the right time.

Murati's exit follows other significant departures at OpenAI, including co-founder Ilya Sutskever and former safety lead Jan Leike.

Today, Chief Research Officer Bob McGrew and VP of Research Barret Zoph also left the company, following the announcement of CTO Mira Murati's departure.

Abrupt departures have also been seen among other high-profile employees, including Andrej Karpathy and co-founder John Schulman.

CEO Sam Altman said the departures were a natural progression and happened on friendly terms, even though each was an independent decision.

src: https://www.cnbc.com/amp/2024/09/25/openai-cto-mira-murati-announces-shes-leaving-the-company.html
https://techcrunch.com/2024/09/25/openais-chief-research-officer-has-left/

#ai #openai #chatgpt #gpt3 #GPT_3 #GPT4 #gpt4o #gpt_4 #gpt35

OpenAI considering restructuring to for-profit, CTO Mira Murati and two top research execs depart

OpenAI's board is considering plans to restructure the firm to a for-profit business. CTO Mira Murati and two top research execs said they are leaving.

CNBC
An In-Depth Guide to Contrastive Learning: Techniques, Models, and Applications

Discover the fundamentals of contrastive learning, including key techniques like SimCLR, MoCo, and CLIP. Learn how contrastive learning improves unsupervised learning and its practical applications.
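The blurb above mentions contrastive objectives such as the ones used in SimCLR and CLIP. As a rough illustration (not code from the article), here is a minimal NumPy sketch of the InfoNCE loss, where each anchor's matching positive sits on the diagonal of a similarity matrix and every other row entry acts as a negative:

```python
import numpy as np

def info_nce_loss(anchors: np.ndarray, positives: np.ndarray,
                  temperature: float = 0.1) -> float:
    """InfoNCE over a batch: row i of `anchors` should match row i of `positives`."""
    # normalize embeddings to unit length so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # similarity matrix scaled by temperature; entry (i, j) = sim(anchor_i, positive_j)
    logits = a @ p.T / temperature
    # numerically stable log-softmax over each row
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy where the correct class for row i is column i (the diagonal)
    idx = np.arange(len(a))
    return float(-log_probs[idx, idx].mean())
```

With well-separated embeddings (e.g. orthogonal vectors) the loss approaches zero; pulling positives together and pushing negatives apart is exactly what minimizing this objective does.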

LangChain vs LlamaIndex: Choose the Best Framework for Your AI Applications

Explore the detailed comparison of Llamaindex vs Langchain to make informed decisions. Discover the strengths of each tool for your project needs.

Nvidia Conquers Latest AI Tests​

GPU maker tops new MLPerf benchmarks on graph neural nets and LLM fine-tuning

IEEE Spectrum
How LLMs Work, Explained Without Math

I'm sure you agree that it has become impossible to ignore Generative AI (GenAI), as we are constantly bombarded with mainstream news about Large Language Models (LLMs). Very likely you have tried…

Microsoft Reportedly Building 'Stargate' to Transport OpenAI Into the Future

Microsoft and OpenAI might be concocting a $100 billion supercomputer to accelerate their artificial intelligence models.

Gizmodo
How to use the GPT-4V API / OpenAI GPT-4V API - Qiita

OpenAI's GPT-4V API has become available, so I tried it right away. Preparing the input image: this time I used the following cat image generated with ChatGPT's DALL·E 3. File name: cat…

Qiita
Teach your LLM to always answer with facts not fiction

A vector database that supports Structured Query Language can store more than vectors. Common data types like timestamps and arrays can be accessed and filtered within the database, which improves the accuracy and efficiency of vector search queries. Accurate results from the database ground the LLM in facts, which reduces hallucination and enhances the quality and credibility of its answers.
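The idea above — apply structured (SQL-style) filters first, then rank the survivors by vector similarity — can be sketched with a toy in-memory store. The schema and field names here are invented for illustration and do not come from any particular database:

```python
import numpy as np
from datetime import datetime

# toy "rows": each document carries a vector plus structured metadata,
# mimicking what a SQL-capable vector database would store (hypothetical schema)
docs = [
    {"id": 1, "ts": datetime(2024, 1, 5), "tags": ["ai"], "vec": np.array([1.0, 0.0])},
    {"id": 2, "ts": datetime(2023, 6, 1), "tags": ["ai"], "vec": np.array([0.9, 0.1])},
    {"id": 3, "ts": datetime(2024, 3, 2), "tags": ["db"], "vec": np.array([0.0, 1.0])},
]

def search(query_vec, after, tag, top_k=1):
    # structured filter first (the "SQL WHERE" part) ...
    candidates = [d for d in docs if d["ts"] >= after and tag in d["tags"]]
    # ... then rank the remaining rows by cosine similarity
    def cos(d):
        v = d["vec"]
        return float(v @ query_vec / (np.linalg.norm(v) * np.linalg.norm(query_vec)))
    return sorted(candidates, key=cos, reverse=True)[:top_k]

hits = search(np.array([1.0, 0.0]), after=datetime(2024, 1, 1), tag="ai")
# doc 2 is closest in vector space but is excluded by the timestamp filter
```

Filtering on exact metadata before similarity ranking is what keeps stale or off-topic passages out of the context an LLM sees, which is the hallucination-reduction mechanism the blurb describes.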

Evaluating Large Language Models Trained on Code

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.

arXiv.org
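The repeated-sampling results in the Codex abstract are reported with the paper's unbiased pass@k estimator: given n generated samples of which c are correct, pass@k = 1 − C(n−c, k)/C(n, k), the probability that at least one of k randomly drawn samples is correct. A direct Python implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: chance that at least one of k samples drawn
    (without replacement) from n total samples, c of them correct, passes."""
    # comb(n - c, k) counts draws containing no correct sample;
    # math.comb returns 0 when k > n - c, so this handles c close to n
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 samples of which 1 is correct, pass@1 is 0.5 and pass@2 is 1.0; averaging this estimator over all HumanEval problems yields the headline numbers quoted in the abstract.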
chat.openai.com – how to – Düsiblog – Matthias Düsi