Marcel Fröhlich

476 Followers
862 Following
586 Posts

Data Strategy, Building organizations building systems, Mathematics

Director Services @ eccenca
Chair OMG Enterprise Knowledge Graph Platform Task Force

Main account: https://mathstodon.xyz/@FrohlichMarcel
Bluesky: https://bsky.app/profile/froehlichmarcel.bsky.social
Besides US big tech, Huawei is hiring countless top #mathematicians in Europe. Why are the global European players too blind to see that true innovation will be driven from the foundational level, not just from applied engineering? So much talent here, but so little ambition in corporate strategy.
An ancient Nokia device was found in an archaeological dig with 17% power.

It's the weekend. I was triggered.

#sovereign #cloud

OMG. #Microsoft #Copilot bypasses #Sharepoint #security so you don’t have to!

“CoPilot gets privileged access to SharePoint so it can index documents, but unlike the regular search feature, it doesn’t know about or respect any of the access controls you might have set up. You can get CoPilot to just dump out the contents of sensitive documents that it can see, with the bonus feature* that your access won’t show up in audit logs.”

The S in CoPilot stands for Security!

https://pivotnine.com/the-crux/archive/remembering-f00fs-of-old/

Remembering F00Fs of Old

TeleMessage has suspended services after it was hacked last week.

In https://arxiv.org/pdf/2501.09274, a standard LLM (Llama-3.1-8B-Instruct) without fine-tuning is used to guide mutations in an evolutionary optimisation framework, successfully producing protein sequences that are fitter for certain (not further specified) purposes.

The telling title of the paper is "LARGE LANGUAGE MODEL IS SECRETLY A PROTEIN SEQUENCE OPTIMIZER".

Are there universal patterns in the statistical distributions of sequences that have a generating mechanism, and therefore grammar-like structure?
That is, has something been learnt about the grammars of languages that can be transferred?

And if so, are biological mutations maybe not random either, because cells may beat random guesses in a similar fashion?

Reminds me of methods for causal deconvolution based on algorithmic probability.
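The loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `propose_mutation` is a stand-in for the LLM proposal step (in the paper, the model is prompted with high-scoring parent sequences and asked to emit mutated children), and `fitness` is a toy hydrophobicity score invented here for the sake of a runnable example.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq):
    # Toy stand-in fitness: fraction of hydrophobic residues (illustrative only;
    # the paper uses task-specific protein fitness oracles).
    hydrophobic = set("AILMFVW")
    return sum(r in hydrophobic for r in seq) / len(seq)

def propose_mutation(seq, rng):
    # Stand-in for the LLM call: here we just flip one random position to a
    # different residue, to keep the sketch self-contained and runnable.
    pos = rng.randrange(len(seq))
    new = rng.choice(AMINO_ACIDS.replace(seq[pos], ""))
    return seq[:pos] + new + seq[pos + 1:]

def evolve(seed_seq, generations=200, pop_size=16, rng=None):
    rng = rng or random.Random(0)
    population = [seed_seq] * pop_size
    for _ in range(generations):
        children = [propose_mutation(p, rng) for p in population]
        # Truncation selection: keep the fittest pop_size sequences.
        population = sorted(population + children, key=fitness, reverse=True)[:pop_size]
    return population[0]

seed = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
best = evolve(seed)
print(fitness(seed), fitness(best))
```

The interesting claim is precisely that replacing the random `propose_mutation` with an off-the-shelf LLM, with no fine-tuning, yields better-than-random proposals.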

Cory Doctorow on why he's on Mastodon and only Mastodon.

"Enshittification isn’t merely the result of greed or foolishness — it is the inevitable consequence of a captive userbase."

--thx @JoshuaACNewman for putting this in my timeline!

https://pluralistic.net/2023/08/06/fool-me-twice-we-dont-get-fooled-again/

Fool Me Twice We Don’t Get Fooled Again – Pluralistic: Daily links from Cory Doctorow

Participating in annual reviews this year, and it occurs to me that the process now mostly consists of groups of employees using various vendors’ LLMs to populate another vendor’s database.
"Need is All You Need: Homeostatic Neural Networks Adapt to Concept Shift" by Kingson Man (@kingson.bsky.social), Antonio Damasio, and Hartmut Neven, algorithmically operationalising 'skin in the game'. arxiv.org/abs/2205.08645 #AI
we must know. we will know. https://scholar.google.com/citations?user=hG2vvYgAAAAJ&hl=en

Why You May Never See the Documentary on Prince by Ezra Edelman

A revealing new documentary could redefine our understanding of the pop icon. But you will probably never get to see it.

The New York Times