Mark Crowley


It is tempting to view the capability of current AI technology as a singular quantity: either a given task X is within the ability of current tools, or it is not. However, there is in fact a very wide spread in capability (several orders of magnitude) depending on what resources and assistance one gives the tool, and how one reports the results.

One can illustrate this with a human metaphor. I will use the recently concluded International Mathematical Olympiad (IMO) as an example. Here, the format is that each country fields a team of six human contestants (high school students), led by a team leader (often a professional mathematician). Over the course of two days, each contestant is given four and a half hours on each day to solve three difficult mathematical problems, given only pen and paper. No communication between contestants (or with the team leader) during this period is permitted, although the contestants can ask the invigilators for clarification on the wording of the problems. The team leader advocates for the students in front of the IMO jury during the grading process, but is not involved in the IMO examination directly.

The IMO is widely regarded as a highly selective measure of mathematical achievement; for a high school student, scoring well enough to receive a medal, particularly a gold medal or a perfect score, is a major distinction. This year the threshold for gold was 35/42, which corresponds to answering five of the six questions perfectly. Even answering one question perfectly merits an "honorable mention". (1/3)

@tao I think there is a broader problem wherein competitions (math, programming, games, whatever) are meant to measure something difficult for humans, but tools work so fundamentally differently from us that success for a tool isn't even necessarily meaningful. AI companies have long viewed the IMO Grand Challenge as a sign of achieving "AGI," but no matter what set of rules a machine follows, there's no reason to believe success for a machine will correlate with broader mathematical or "reasoning" abilities in the way it does for human participants.
Testing the new Mastodon to BlueSky bridge. This should let my posts here on Sigmoid.social show up on BlueSky. I'm not planning to do the reverse yet; we'll see how it goes.
@[email protected] [spider mans pointing at each other meme]

We have two open post-doc positions. You don't have to be a Bayesian, but we're looking for somebody interested in working at the intersection of DL, Bayes, and optimization.

https://www.riken.jp/en/careers/researchers/20240917_2/index.html

Interest in understanding deep learning and continual lifelong learning is a plus!

Seeking a Research Scientist or a Postdoctoral Researcher at Approximate Bayesian Inference Team (W24162) | RIKEN

This is how AI is going to ruin the world. It's not the Terminator-style action.

“Tokyo gov't launches AI dating app to match couples, boost births”

https://mainichi.jp/english/articles/20240929/p2g/00m/0li/020000c

Tokyo gov't launches AI dating app to match couples, boost births - The Mainichi

TOKYO (Kyodo) -- The Tokyo government has launched a new dating app for smartphones that uses artificial intelligence to match people who are serious

The Mainichi

Effects of AI are being felt in anticipated (bad) ways.

https://www.bbc.com/news/articles/ckg9k5dv1zdo

The AI clip that convinced - and divided - a Baltimore suburb

Almost a year after a fake clip of a high school principal went viral, the impact on an American town lingers.

Testing posting from Flipboard to my Mastodon account directly, hmmm.

https://www.bbc.com/news/articles/cg4yerrg451o

Deepfakes are a problem that will keep causing harm for many, many years to come.

South Korea faces deepfake porn 'emergency'

The president has addressed the growing epidemic after Telegram users were found exchanging doctored photos of underage girls.

@Flipboard how about @FlipboardCS for some help on this?