I've been to a lot of Google offices, and a couple of Google Berlin parties, but I've never seen security like today for their AI launch. There's a hashtag, #techGoogle, which seems odd. I wasn't invited to the press event earlier (the YouTuber next to me was), but to the panel "AI Forum: The Future of Science in the Age of AI" (and a party later). (edit: actually #tachGoogle )
Unexpectedly it's in English. TIL #berlin's Googlers, who were especially being welcomed (!!), are called "Booglers." The point of the kick-off is that this is "the" Google AI centre (?), so discussing the future of AI "with many different stakeholders." Again they repeatedly thank the Googlers who made this all possible. Panelists are the greatest minds in AI, seriously (they say): Alena Buyx, Fabian Theis, Yossi Matias, Jakob Uszkoreit, and two more whose names I can't see. Lara St…? Bothur…? Klaus-Robert…?
So science used to be about understanding things and then proving them; now we just get answers. Has science changed? JU: the 20th century was a weird exception with rich theories; alchemy is normal science (sounds like Zuckerberg on privacy, which I've debunked elsewhere). Getting back to a place where concrete engineering leads before full understanding. YM: this is a golden age of research. Acceleration of scientific discovery from AI agents, which help create, validate, and rank hypotheses.
Every teenager can have their own virtual lab (still YM); we need to think through how to cope with this, need to train the next generation to operate in a different domain than we used to. Today is already over. Lara: synthetic data is so cool, but you need multiple sets of it. Quantum! YM, I think still: we already see this working in maths, health AI, products in our daily lives. It's astonishing that we are still nascent in this. Lara to FT: are scientists no longer discoverers but only interpreters?
FT answers: hey, when were we ever discovering? For the last 50 years we were not as disruptive and such as before; we were already in trouble. He hated biology because it was too much random stuff to learn. But now kids are interested because they can use AI tools. Modern science happens in teams anyway. Quotes Hinton; misattributes it (it was radiology, 9 years ago, not pathology 15 years ago) but gets the bottom line: that Hinton was so wrong. Klaus-Robert?: AI is just a tool.
KR?: In fields without data, e.g. quantum, you need to make the AI use all the knowledge of physics to deal with less data, so a new universal language, but still very different in different domains. (Dying to hear what my friend AB says... ah, she's on.) AB: Not just an ethicist but a doctor. No one knew what aspirin was, but we kept using it; it's so common in medical science it has a Latin name: try random stuff. Also German, which I don't get either. She comes at it from the exact other side.
AB: what's changed is speed, scale, transformative potential. The challenges are NOT transparency. The difference is that it's ubiquitous, so how do we keep science in control? It's about the infrastructure. How do I use the big-ass infrastructure while keeping science as independent as possible? Every grad student having their own lab. Transparency we can do, but people are the big question. If every grad student is a PI, we have to change our model (AI did that decades ago, TBH).
AB: It's terrific that the grad students jump over the boring stuff, but she still has different wisdom from her students, so how do we deal with the deskilling? We're all in a grand experiment, all of society. AB & FT at TUM do an embedded ethics approach. Helmholtz uses AI; ethics consultants are present at all times. So we're all learning from each other; we cannot do without multidisciplinarity. But experimentation, training, and the challenge of how scientific people are changing. Full-on enthusiasm.
YM jumps in: humans are trained in only one thing, but our AI collaborators we train in everything. AI is an amplifier (he is mixing metaphors; they have not thought about the scaling & noise issues as far as I can tell). Everyone can have an atelier or invention factory, but we're not set up for that. Design principles needed... AB loves that. They constantly reflect on their AI use in her group; she's optimistic, but the design principle is the seeking of continuous knowledge, productive doubt.
AB: that is NOT the design principle for wider society. If someone just believes AI legal advice, they are in trouble. Science is low-hanging fruit because science never just believes anything, always questions all outputs; we can deal with uncertainty. Our design principles could help wider society too. JU: thinks people can figure out things themselves, but present scientific incentives are not useful; the collaborator stuff all just breaks down. Focus on the human slows us down. #AIEthics
JU: "scientific papers are the synthetic data of humans" 🙄 YM talks about generate & test as if it is a new idea rather than something Patrick Winston wrote about in 1970. Now using the jazz quote "if it sounds good, it is good." (That is NOT science!) Ah, OK, yes, he says the design goals are the main job of the human. KR: some salt: models are not perfect. How do we train students to judge truth from nonsense? We cannot shortcut these skills.
AB: a few decades ago came the biggest change in surgery since anaesthesia, the advent of noninvasive surgery, getting rid of 50% of post-operative damage / failures. An ENORMOUS change, but the old workforce needed training in new stuff, and they did it! They learnt the new technique; this happens all the time in medicine. Which old skills do I still need? You do sometimes still have to open them up; you need both skills. She thinks this is going on now in all fields: trying to figure out which skills are essential.
(I've been super impressed with the German statisticians / accountants at this; their professional society is really rocking on this. They asked me to talk once recently.) Lara: maybe we no longer seek understanding, only the capacity to intervene. JU: largely agrees, but makes the HUGE error of saying what works in surgery works for the climate. There's only one planet. Lots of surgical experimental subjects die. YM disagrees: we need to enforce LLM factuality (good), need self-critique.
YM: 1) how do we improve the models, 2) how do we improve our ability to use them, 3) what will we need in the future; what do we train now? He thinks we aren't even training students for today, let alone the future. FT also disagrees: classification gets called understanding. KR disagrees too: Clever Hans; is there real insight? AB: yeah, you sometimes need to be able to cut! She also disagrees with the premise. Maybe both, not either/or. Human-centred.
Lara: favourite use case? JU: designing interventions in complex parts of life. YM: generalised acceleration; climate resilience, healthcare, education. FT: therapies / neurodegeneration. KR: quantum properties of matter. AB: embedded ethics, swarm cards, very federated, decentralised; the card gives you all the elements, but a super smooth process. Makes her think about stuff.
Weirdly, the audience is overwhelmingly more excited than afraid about #AIScience, but thinks by a slight majority that it will make science LESS trustworthy. JU points out that trust in science is sometimes independent of science.
#tachGoogle I don't think there's going to be a Q&A, but there are two major problems: a) accelerating 8B people only creates noise, not signal; and b) surgery was sadly innovated on soldiers expected to die, not on planets that support our entire biosphere. I would sincerely like to hear what these guys think of those two cases; I don't have insight myself beyond the question. #AIEthics #AIScience

Boogler: I work on [product you’ve heard of]
me: so it’s not like Microsoft Berlin, just a place for salesmen to sleep, right? I’ve heard there are three serious research teams here?
B: honestly it’s mostly sales and advertising. There’s about 100 engineers, and I am the lowliest.
Me: that’s why you weren’t briefed against talking to me :)

Mostly the lunch seems full of Technical University Berlin students.