@Jacob @pluralistic @HalvarFlake

HT for sharing, enjoyed listening:

• Compares current #ML algos & tools with #gofAI

• Cool #RichSutton #TheBitterLessonInML remark that generic algos, trained w/ larger real datasets, will beat expert algos trained on smaller datasets.

• Remark on synthetic data for training: employ with caution, and only helpful in some cases.

• Cool long Q&A answer: it is now too late to keep free internet datasets uncontaminated by synthetic BS.

@emilygorcenski @kordinglab

» Not at all. The point of the bitter lesson is that the right learning algorithms (those that scale efficiently with massive computation) are exactly what we need. Massive computation does not alleviate the need for data efficiency «

#RichardSutton 24/11/2023

https://nitter.cz/RichardSSutton/status/1728129341287198885#m

#TheBitterLessonInML

http://www.incompleteideas.net/IncIdeas/BitterLesson.html