Here is another slow expression to demonstrate the disjunction pruning optimization I talked about earlier.

Suppose you declare a variable of implicitly unwrapped optional type:

var x: Int! = ...

The value of x is stored like an ordinary optional "Int?", but a reference to x from inside an expression is presented as a disjunction---the type of the reference is either "Int", or "Int?". If both lead to a solution, we favor the "Int?" choice; otherwise, we get the "Int" choice, and the compiler then lowers the reference to a runtime check.
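To make the two choices concrete, here is a minimal sketch (the variable names are just for illustration):

```swift
var x: Int! = 42

// The "Int?" choice: the reference keeps its optional type, no unwrap needed.
let asOptional: Int? = x

// The "Int" choice: the compiler lowers the reference to a
// force-unwrap, which traps at runtime if x is nil.
let asInt: Int = x
```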

Now, consider this expression:

let result = x + x + x + x + x + x

It takes the Swift 6.3 compiler a third of a second to type check this expression, but it's instant with recent main snapshots. What changed?

Well, in 6.3, disjunction selection doesn't have much to go on, so we fall back to selecting the disjunction with the fewest active choices at each step.

The implicitly unwrapped optional has fewer choices (2) than + (around 30), so we bind the first IUO, and our picture looks sort of like this, where the _ represent type variables yet to be bound:

let result = Int? + _ + _ + _ + _ + _

We then select the next disjunction, which again will be one of the IUOs, and attempt a choice:

let result = Int? + Int? + _ + _ + _ + _

In fact, Swift 6.3 will bind all IUO disjunctions first, and there are 2^6 possible combinations:

let result = Int? + Int? + Int? + Int? + Int? + Int?
let result = Int? + Int? + Int? + Int? + Int? + Int
let result = Int? + Int? + Int? + Int? + Int + Int?
let result = Int? + Int? + Int? + Int? + Int + Int
...

Of these 2^N combinations, exactly one, where we select the "Int" choice from every IUO disjunction, can be extended to a solution, because in fact + has no overloads that take Optional<Int>:

let result = Int + Int + Int + Int + Int + Int

We must consider all combinations though, and so the rest simply fail.

With latest Swift from main, things go differently. We still bind the first IUO first:

let result = Int? + _ + _ + _ + _ + _

But before we select the next disjunction, we run the new disjunction pruning pass. Since + has no overloads where the first argument matches the type "Int?", we end up disabling all active choices in the first + disjunction.

Now, disjunction selection sees that one of the remaining disjunctions has zero active choices. This means the current partial solution cannot be extended to a full solution no matter what choices we make, so we backtrack.

We attempt the next choice in the first IUO:

let result = Int + _ + _ + _ + _ + _

This time, some choices from the first + disjunction are pruned, but nothing fails. We select the second IUO disjunction:

let result = Int + Int? + _ + _ + _ + _

With these choices so far, pruning is again able to detect that there are no suitable choices for the first + that take an "Int" and an "Int?", so we are left with zero active choices and we attempt the second choice in the second IUO:

let result = Int + Int + _ + _ + _ + _

This continues:

let result = Int + Int + Int? + _ + _ + _
let result = Int + Int + Int + _ + _ + _
let result = Int + Int + Int + Int? + _ + _
let result = Int + Int + Int + Int + _ + _
...
let result = Int + Int + Int + Int + Int + Int

At this stage, we've bound all the IUO disjunctions in linear time, and only the + disjunctions remain.
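To see why this is linear rather than exponential, here is a toy model of the search. This is not the actual constraint solver code; the choices are modeled as strings, and the two hypothetical functions just count how many solver states get visited under the assumptions above (each of n IUOs picks "Int?" or "Int", and only the all-"Int" assignment succeeds, since + has no overloads taking Optional<Int>):

```swift
// Without pruning: the full binary tree of choices is explored,
// because the winning all-"Int" leaf is visited last.
func visitsWithoutPruning(n: Int) -> Int {
    var visits = 0
    func go(_ picks: [String]) -> Bool {
        visits += 1
        if picks.count == n {
            return picks.allSatisfy { $0 == "Int" }
        }
        // Prefer "Int?" first, then backtrack to "Int".
        return go(picks + ["Int?"]) || go(picks + ["Int"])
    }
    _ = go([])
    return visits
}

// With pruning: an "Int?" pick is rejected immediately, because no
// + overload can accept it, so we never recurse into that subtree.
func visitsWithPruning(n: Int) -> Int {
    var visits = 0
    func go(_ picks: [String]) -> Bool {
        visits += 1
        if picks.count == n { return true }
        for choice in ["Int?", "Int"] {
            if choice == "Int?" { continue }  // pruned: zero compatible overloads
            if go(picks + [choice]) { return true }
        }
        return false
    }
    _ = go([])
    return visits
}

print(visitsWithoutPruning(n: 6))  // 127 states explored (2^7 - 1)
print(visitsWithPruning(n: 6))     // 7 states explored (n + 1)
```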

The optimizations that already existed in Swift 6.3 take over. Disjunction selection favors the (Int, Int) -> Int overload choice in each +, these favored choices succeed, and the rest of the problem is solved without further backtracking.

@slava I love these insights!

Now given the following function:

func +<T: AdditiveArithmetic>(lhs: T?, rhs: T?) -> T? {
    guard let lhs, let rhs else { return nil }
    return lhs + rhs
}

Would that have A) sped up the process in 6.3 and B) does it slow down with the changes now made?

My guess from what you've described would be "yes" for the former and "no" for the latter…

@ffried Very interesting question!

I tried it out, and it looks like if I add that overload of +, it becomes fast in 6.3 (and remains fast on main).

We now find the solution where each IUO remains "Int?" and your new AdditiveArithmetic.+ overload is selected for each +. (This means if you add such an overload, you'll possibly change the meaning of existing code that uses IUOs. Don't declare new overloads of stdlib operators like this, please ;-) )

This is because we prefer not to unwrap IUOs if possible; since the "Int?" choice succeeds this time, we don't have to try the "Int" choices at all. So 6.3 is able to find this solution in linear time.
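Here is a sketch of what that solution looks like in code, using the overload from the question above; the point is the result type:

```swift
// The overload from the question; note the caveat above about
// declaring new overloads of stdlib operators like this.
func + <T: AdditiveArithmetic>(lhs: T?, rhs: T?) -> T? {
    guard let lhs, let rhs else { return nil }
    return lhs + rhs
}

var x: Int! = 1

// With the overload in scope, the solver keeps each IUO at its
// preferred "Int?" choice and selects the new overload, so the
// result type is Int? rather than Int.
let r = x + x
```

That change of result type from Int to Int? is exactly how such an overload can silently change the meaning of existing code that uses IUOs.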

@slava Thanks for checking!

As long as that func has visibility internal (or lower), it won't affect other code though, right?
Not that I (usually) overload stdlib operators anyway…

This is just something I have a love-hate relationship with in C#, which behaves similarly to SQL here: using nulls with operators results in null.

Out of curiosity: Would such an operator in user code benefit from special optimizations? I reckon such operators in the stdlib are "mapped" to ASM operations?

@ffried If the overload isn’t public, it won’t affect type checking of other modules that import yours.

The type checker hardcodes some special behaviors for various stdlib operators like +; mostly the favoring heuristics are slightly different, and there are still some older optimizations there as well. The new optimizations I’m working on don’t do that, though, so they apply equally to user-defined overload sets and standard library operators like +.

The general rule for overloading stdlib operators is that at least one of the argument types should be a type defined in your own module. This way, your new overload won’t be chosen in existing code that does not make use of any types from your module but happens to import it, e.g. via a dependency.
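For example (Meters is a made-up type standing in for "a type defined in your own module"):

```swift
struct Meters {
    var value: Double
}

// At least one argument type (here, both) is defined in this module,
// so this overload can only be selected by expressions that already
// mention Meters; it won't perturb unrelated code in modules that
// merely import this one.
func + (lhs: Meters, rhs: Meters) -> Meters {
    Meters(value: lhs.value + rhs.value)
}

let total = Meters(value: 1.5) + Meters(value: 2.5)
print(total.value)  // 4.0
```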

@slava So the disjunction checks all this before the operator itself runs? That’s cool.
When developing something like this do you usually test the performance of cases that got worse? Like I believe an expression that resolves with Int?s might be slightly slower now, though not much because it would still be linear.

@vini We have a collection of fast and slow expressions in the test suite that I’m building up by screening bug reports. When an expression is sped up we move it from “slow” to “fast” and add an expectation that it type checks successfully, with a much lower operation limit than the default “reasonable time” limit. This helps catch small regressions and it’s satisfying gradually moving more expressions to the “fast” directory!