The expression of the day is an array literal whose elements are tuples of literals, for example:

func f() {
    let array = [
        (0, 0, 0, 0), /* ... imagine this element repeated 100 times ... */
    ]
}

- Swift 6.0: a leisurely 16 seconds to type check the body of f()
- Swift 6.2: 5 seconds
- Swift main: 2 seconds
- my latest tree: 15 milliseconds

It's still doing too much work, but a 1000x improvement over Swift 6 isn't too shabby.
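(Aside: a common user-side workaround on compilers that are slow here — this is general Swift advice, not the compiler fix described above — is to annotate the literal's type explicitly, so each element is checked against a known tuple type instead of being inferred from polymorphic literals. A minimal sketch, with a hypothetical `makeTable` helper:)

```swift
// Hypothetical example: an explicit type annotation on the array literal
// means every element is checked directly against (Int, Int, Int, Int),
// rather than the solver inferring a type for each integer literal.
func makeTable() -> [(Int, Int, Int, Int)] {
    let array: [(Int, Int, Int, Int)] = [
        (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0),
        // ... imagine many more rows ...
    ]
    return array
}
```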

@slava I'm curious, can you briefly say why it used to take so long? Something about numeric literals being polymorphic and solving a bunch of constraints, perhaps? (I don't actually know Swift at all.)
@slava Is there a "I once understood how HM type inference works, but forgot" level explanation as to why that array takes so long to typecheck?
@slava I'd say that a 1000x improvement would be awesome! 😃
@slava does this mean my "gigantic dictionary literal in the unit tests" issue is fixed??
@slava I just ran a smaller version of this in 6.2 with `-debug-constraints` and I suddenly have the urge to describe a type checker as a “schmuck”.
@slava This is a nice compiler example of a broader point: the underlying object here is still completely finite, but the cost of reasoning about it depends dramatically on how the system represents and traverses that finite structure. That’s very close to the distinction I’m drawing in my paper. Finiteness does not buy tractability for free; it buys a bounded ontology. The hard part is then whether your logic, type system, or algorithm respects that bounded structure directly or buries it under a more permissive formalism. A 1000x speedup here is basically a reminder that once the domain is finite, a lot of “difficulty” is representational and procedural rather than ontological.