I think this #FutureLaw 2023 panel on GPT-4 gives a balanced view of the risks and opportunities of using tools like GPT-4. What surprises me is that no one is talking about how soon GPT-4 will be obsolete, replaced by something that improves on it just as significantly. #LegalTech #LawFedi
OpenAI’s CEO confirms the company isn’t training GPT-5 and ‘won’t for some time’

OpenAI’s CEO Sam Altman has confirmed that the company is not currently training GPT-5 — the successor to its language model GPT-4, released this March. Altman was discussing fears about AI safety.

The Verge
@ltmccarty That article just says you can't assume that new versions are better than earlier ones to any predictable degree. Granted. But no one is assuming that. The models are objectively getting significantly better. That they haven't started "training" GPT-5 yet doesn't mean anything. I don't see any evidence of a plateau, and I see lots of evidence to the contrary.

@lexpedite @ltmccarty — Two ways this could play out:

1. OpenAI feels threatened by the many free + open-source competitors (e.g., Eleuther, Dolly-2), and wonders whether the time and expense of building a foundation model is worth it when they're competing with "free."

2. OpenAI — with Microsoft money — takes a run at improving the existing GPT-4 model incrementally, as they did with the davinci releases of GPT-3, GPT-3.5, etc.

Seems like they're choosing Option 2. Long Microsoft runway.

OpenAI’s CEO Says the Age of Giant AI Models Is Already Over

Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.

WIRED

@ltmccarty @lexpedite Yes, that's really helpful. Thanks, Thorne.

I wonder if this comes from the lack of high-quality data sources. There are only so many human-created words. Reddit will only get you so far.

Last bastion of high-quality data: law? Judicial, statutory, and regulatory text seems like an evergreen source.

@damienriehl @lexpedite

My guess is: Both algorithms and data. They have probably been running tests beyond the GPT-4 horizon and seeing a sigmoid.

But it is infuriating that we have to guess like this, since they haven't disclosed any technical information about GPT-4.
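To illustrate what "seeing a sigmoid" would mean here: if benchmark performance followed a logistic curve in log-compute rather than a power law, each additional order of magnitude of training compute would buy smaller and smaller gains past the inflection point. A minimal sketch (every number below is hypothetical; nothing is known about GPT-4's actual curve):

```python
import math

def logistic(x, L=100.0, k=1.2, x0=24.0):
    """Hypothetical benchmark score as a function of log10(training FLOPs).
    L = ceiling score, k = steepness, x0 = inflection point (all made up)."""
    return L / (1.0 + math.exp(-k * (x - x0)))

# Gains per extra order of magnitude of compute shrink past the inflection:
for x in range(22, 28):
    gain = logistic(x + 1) - logistic(x)
    print(f"10^{x} -> 10^{x + 1} FLOPs: +{gain:.1f} points")
```

The tell-tale sign would be exactly this pattern in internal scaling tests: gains that grow up to some compute budget, then shrink, even though each run still costs ten times more than the last.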

@ltmccarty @lexpedite

On guessing and nondisclosure: Hard to be transparent when (1) you're for-profit and (2) open-source competitors are nipping at your heels.

Google can afford to be open. OpenAI (an oxymoron) apparently thinks that it can't.

@ltmccarty @lexpedite

Another consideration: Malfeasance and misuse of the model. Regulatory concerns abound.

@damienriehl @lexpedite

You are correct, of course. I am just unhappy about the current state of scientific research in this field.

https://arxiv.org/abs/2304.06035
Choose Your Weapon: Survival Strategies for Depressed AI Academics

Are you an AI researcher at an academic institution? Are you anxious you are not coping with the current pace of AI advancements? Do you feel you have no (or very limited) access to the computational and human resources required for an AI research breakthrough? You are not alone; we feel the same way. A growing number of AI academics can no longer find the means and resources to compete at a global scale. This is a somewhat recent phenomenon, but an accelerating one, with private actors investing enormous compute resources into cutting-edge AI research. Here, we discuss what you can do to stay competitive while remaining an academic. We also briefly discuss what universities and the private sector could do to improve the situation, if they are so inclined. This is not an exhaustive list of strategies, and you may not agree with all of them, but it serves to start a discussion.

arXiv.org