Google details new 24-hour process to sideload unverified Android apps
https://android-developers.googleblog.com/2026/03/android-de...
At this point I'm convinced that there's something deeply wrong with how our society treats technology.
Ruining Android for everyone to try to maybe help some rather technologically-hopeless groups of people is the wrong solution. It's unsustainable in the long run. Also, the last thing this world needs right now is even more centralization of power. Especially around yet another US company.
People who are unwilling to figure out the risks simply should not use smartphones and the internet. They should not use internet banking. They should probably not have a bank account at all and just stick to cash. And society should be able to accommodate such people — which is not that hard, really. Just roll back some of the so-called innovations of the last 15 years. Whether someone uses technology, and how much, should be a choice, not a burden.
I worked at a bank on the backend, on architecture and security, and I've posted this attestation here before: the sheer volume of fraud and fraud attempts across the whole network is astonishing. Our device fingerprinting and no-jailbreak rules weren't even close to an attempt at control. They were defense, based on network volume and hard losses.
Any significant loss of customer identity data and/or funds was considered an existential threat to our customers and our institution.
I'm not coming to Google's defense, but fraud is a big, heavy, violent force in critical infrastructure.
And our phones are a compelling surface area for attacks and identity thefts.
How does preventing people from running software of their choice on their own device (what you call jailbreaking) prevent fraud in practice? It's a pretty strong claim you're making there. And it's being made frequently by institutions, yet I have never seen it actually explained and backed up with any real security model.
All the information and experience I ever got tells me this is security theater by institutions trying to distract from their atrocious security with some snake oil. But I'm willing to be convinced there is more to it if presented with contrary evidence. So I'm interested in your case.
How did demanding control over your customers' devices and taking away their ability to run software of their choice actually reduce fraud, in quantifiable and attributable terms?
The app does fingerprinting and requires certain secure device profile characteristics before the app lets a user initiate certain kinds of financial transactions.
Those are based on APIs available from the mobile devices. Google and Apple can offer other means to secure these things, and to validate that the device hasn't been cracked and isn't submitting false attestations. But even a significant financial institution has no relationship with Apple on the dev side of things: Apple does what it decides to do, and the financial institution builds to what is available.
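To make the shape of this concrete, here's a minimal sketch of the kind of server-side gate such an app's backend might apply. It assumes a decoded Google Play Integrity verdict payload; the field names (`deviceIntegrity`, `deviceRecognitionVerdict`, `appIntegrity`, `appRecognitionVerdict`) follow Google's documented verdict format, but the policy itself is illustrative, not any real institution's rules:

```python
# Sketch: gating a high-risk transaction on a decoded Play Integrity
# verdict. The payload structure follows Google's documented verdict
# format; the policy thresholds here are hypothetical.

def allow_high_risk_transaction(verdict: dict) -> bool:
    """Return True only if the device and app pass basic integrity checks."""
    device = verdict.get("deviceIntegrity", {})
    labels = set(device.get("deviceRecognitionVerdict", []))
    # A rooted or bootloader-unlocked device typically fails to earn
    # the MEETS_DEVICE_INTEGRITY label.
    if "MEETS_DEVICE_INTEGRITY" not in labels:
        return False
    app = verdict.get("appIntegrity", {})
    # PLAY_RECOGNIZED means the binary matches what Play distributed.
    return app.get("appRecognitionVerdict") == "PLAY_RECOGNIZED"

# A stock device running the Play-distributed app passes the gate:
ok = allow_high_risk_transaction({
    "deviceIntegrity": {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]},
    "appIntegrity": {"appRecognitionVerdict": "PLAY_RECOGNIZED"},
})
# An empty (or failed) verdict is rejected:
blocked = allow_high_risk_transaction({})
```

The point of the sketch is only that the institution consumes whatever labels the platform vendor chooses to expose; it cannot define them itself.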
These controls work -- over time fraud and risk go down.
"and taking away their ability to run software of their choice in practice"
Who did that?