Geoffrey Irving

@irving
482 Followers
113 Following
465 Posts
Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
web: https://naml.us
github: https://github.com/girving
location: London

Lovely news to get on the morning of my first day at the UK AI Safety Institute. :)

https://twitter.com/soundboy/status/1775071500325785730

Ian Hogarth (@soundboy) on X

Very proud of the landmark agreement the US and UK have signed today around joint testing of frontier AI systems. Testament to an incredible team of civil servants at the AI Safety Institute: https://t.co/ftUPennSB0


Yikes: A backdoor in xz, after 3 years of social engineering and a bunch of sock puppet accounts to gain maintainer access. 😱

This line is particularly interesting. I wonder if oss-fuzz could mitigate this by requiring reports to go to multiple places? Hard to verify independence, though.

…
2023-03-20: Jia Tan updates Google oss-fuzz configuration to send bugs to them.
…

Program to recognize its own checksum:

```
#!/usr/bin/env python3

import sys
import zlib

if len(sys.argv) == 2:
    print(('Not me', 'Me.')[zlib.crc32(open(sys.argv[1], 'rb').read()) == 1441602037])
```

Usage: ./self-crc self-crc

(Note: It's supposed to end with a newline.)

@gregeganSF It's even easier if one uses a checksum:

```
#!/usr/bin/env python3

import sys
import zlib

if len(sys.argv) == 2:
    print(('Not me', 'Me.')[zlib.crc32(open(sys.argv[1], 'rb').read()) == 1441602037])
```

Usage: ./self-crc self-crc
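(Aside, not from the original posts: the same `zlib.crc32` call can be pulled out into a standalone helper to check any file's checksum against the embedded constant — `crc32_of` is a name I've made up for illustration.)

```python
import zlib

def crc32_of(path):
    # CRC-32 of the file's raw bytes, matching what the self-crc script computes.
    with open(path, 'rb') as f:
        return zlib.crc32(f.read())
```

Running `crc32_of('self-crc')` on the script above should return 1441602037 — the constant is a fixed point, which is also why the trailing newline matters: any change to the file's bytes changes its CRC.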

@[email protected] Purely for curiosity's sake, do you have a guess as to the total number of chicken bits in modern chips, either M-series specifically or generally? Is it <100, 100s, 1000s, etc.?
I have to say I find this situation very strange at a high level. It doesn't feel like a pattern which will hold for computers and software 1000 years from now.
One of my favorite properties of optimization is that branch-free code is critical to high-performance software, but in hardware branches are extremely cheap. So a lot of whole-system optimization involves taking something with a horrendous number of cases to branch through (a floating-point add, say), pushing it into hardware where the branches are nearly free, then calling it from software in a regular pattern.
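(A toy illustration, not from the original post: the classic branch-free max, which selects between two values with a mask built from the comparison bit instead of an `if` — the kind of trick branch-free software leans on. Works for Python ints because `-(True)` is the all-ones value -1.)

```python
def branchless_max(a, b):
    # (a > b) is 0 or 1; negating gives a mask of all zeros or all ones.
    # Masking the XOR difference selects a when a > b, else b — no branch.
    return b ^ ((a ^ b) & -(a > b))
```

The `('Not me', 'Me.')[condition]` tuple-indexing in the self-crc script above is the same idea in miniature: selection by indexing rather than branching.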

I love imagining the mathematical needs of whoever decided on the unicode math characters.

"Gosh, I'm just always needing notation for when two lines are sorta-but-not-completely perpendicular. I know! â«¡"

"What if something's not just bigger than or much bigger than but WAY FUCKING BIGGER THAN? Aha: ⫸"

"I'm doing so many specialized contour integrals and hate writing with words so much that I'm going to invent a specific symbol for line integration with rectangular path around a pole: ⨒"

"I don't know whether A is a subset of B or B is a subset of A, but at least one of those statements is true, so... A â«“ B."

"Gosh, I have this element S of the Picard group of a symmetric monoidal category and I want to tensor M with the tensor-inverse of S but don't want to create notation for the tensor inverse. As such, consider M⨸S."

it's that time of the year again baby
It seems surprising that the best known algorithm for computing π(n) (the number of primes ≤ n) takes time Õ(n^(2/3)). I would have naively expected that something Õ(n^(1/2)) is possible, sieving only over numbers k < n^(1/2), but apparently the region [n^(1/2), n] is too messy.
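(For concreteness, not from the original post: a sketch of the simpler O(n^(3/4))-time method often attributed to Lucy_Hedgehog on the Project Euler forums, which does get away with sieving only over primes p ≤ √n by tracking counts at the O(√n) distinct values of ⌊n/i⌋ — still short of the conjectured n^(1/2).)

```python
def prime_count(n):
    # S[v] = count of integers in 2..v not struck out by the primes
    # processed so far (Legendre-style partial sieve counts).
    r = int(n ** 0.5)
    V = [n // i for i in range(1, r + 1)]
    V += list(range(V[-1] - 1, 0, -1))  # all distinct values of n // i
    S = {v: v - 1 for v in V}
    for p in range(2, r + 1):
        if S[p] > S[p - 1]:  # p survived every smaller prime, so p is prime
            sp = S[p - 1]    # number of primes below p
            p2 = p * p
            for v in V:
                if v < p2:
                    break
                # Remove composites whose smallest prime factor is p.
                S[v] -= S[v // p] - sp
    return S[n]
```

For example, `prime_count(100)` returns 25. The Õ(n^(2/3)) bound in the post is the Lagarias–Miller–Odlyzko family of algorithms, which add a combinatorial phase on top of this kind of partial-sieve counting.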