Compile-Time Constant Expressions for Swift

Chris Lattner and I have been working on a proposal to add compile-time constant expressions to Swift. Our goal is to create a very general feature that enables many other useful features that need compile-time execution, like: static assertions, value parameters in generics, some TensorFlow operations in Swift for TensorFlow, and more (see the motivation section of the draft proposal for a detailed discussion).
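
As a rough sketch of the shape of the feature (simplified; the draft has the precise rules and spellings), a @compilerEvaluable function folded by a #staticAssert might look like:

// Illustrative only: evaluated entirely at compile time; a failure is a compile error.
@compilerEvaluable
func gcd(_ a: Int, _ b: Int) -> Int {
  return b == 0 ? a : gcd(b, a % b)
}

#staticAssert(gcd(12, 18) == 6)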

We would like your feedback on our draft proposal!

The draft proposal is here, and I'll keep it up-to-date as we incorporate feedback: Compile Time Constant Expressions for Swift · GitHub

55 Likes

This is awesome! Thanks for pushing this forward :pray:t2:

Now, I have some feedback. When you say:

We will add a recursion depth limit (e.g. 16). This prevents infinite compile time for cases of infinite recursion.

Isn't that too coarse? I mean, it's better than not having recursion, of course! But I'd like to know why it can't be one of these:

  1. Checking for infinite recursion (is it undecidable for these semantics? If so, well, :man_shrugging:t3: gg).
  2. Accepting a hard-coded UInt as a recursion-depth parameter.
  3. Accepting a pure function that, based on some parameter n in the @compilerEvaluable function, gives you an upper bound (UInt) on the recursion depth. If possible, the value of this bound should be polynomial. Exponential bounds would raise an error. Since it would be a pure function, you can decide whether it is exponential or not, right?

(3) would be the best explicitly parametrized compromise, I think.


Edit: Apart from that, everything looks good up to my current knowledge of compilers and languages (which is basic, but I try). Again, thanks for pushing this forward. It looks like you've put a lot of thought into this :slight_smile:

1 Like

This looks good at a high level. I have some small observations:

  • You mention that evaluating asserts in constant expressions is necessary to supplant the use of @_transparent for literal overflow diagnostics, but I didn't see anything in the proposal that lays out how this would be made to work. My educated guess would be that the constant evaluator could recognize the assert functions specially, but that's not explicitly stated.
  • Supporting MemoryLayout in constant expressions is a layering violation if constant evaluation is expected to happen during mandatory passes, since SIL does not lay out types. Constant expressions involving memory layout could only be evaluated when we begin LLVM IR generation, which normally happens after all SIL generation and optimization has happened. Maybe that's OK, since optimization shouldn't introduce new compilation-stopping diagnostics, but it may mean you need to do evaluation in two phases. Alternatively, maybe supporting MemoryLayout should be left for a future extension.
  • If your constant evaluator can handle metatypes and generic calls, then it's a bit weird not to also support existentials containing constant-evaluable values and using constant-evaluable conformances, since an existential is in essence a tuple of (value, metatype, protocol conformance). (A sketch of what I mean follows this list.)
  • There are a few places where @compilerEvaluatable and @compilerEvaluable get mixed up in the proposal.
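
As a concrete sketch of the existential point (purely illustrative, reusing the draft's @compilerEvaluable and #staticAssert spellings):

protocol Flag {
  func rawValue() -> Int
}

struct Enabled: Flag {
  func rawValue() -> Int { return 1 }
}

// The existential parameter bundles a value, its metatype, and the Flag
// conformance; all three would have to be visible to the constant evaluator.
@compilerEvaluable
func read(_ flag: Flag) -> Int {
  return flag.rawValue()
}

#staticAssert(read(Enabled()) == 1)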

Looks good from a first reading otherwise.

7 Likes

Thanks for the feedback!

I didn't think about this tradeoff between simplicity and power for max recursion depth. So there's no reason to think that the proposal currently chooses the best compromise! And indeed, a completely hardcoded limit is probably bad because users who really need more depth will be stuck.

Not sure if you meant #2 to be a global limit, or per-function, but a configurable global limit sounds like a pretty good point in the tradeoff space to me. Looking to other languages for precedent, I find that a few C++ compilers (clang and gcc) have a default limit of 512 for constexpr max recursion depth, and a compiler flag that allows you to change the limit if you really need to.

Something dynamic like #3 sounds a bit too powerful to me. It would add a totally new concept to the language (polynomial bounds), and I expect that most practical compile-time evaluated functions will be small and simple enough to not need it. Users can easily deal with the few large and variable-depth functions by occasionally bumping up a global limit.

(And I strongly suspect that the proposed programming model is Turing complete, so #1 is indeed undecidable.)

1 Like

Really nice addition.

For recursion, why not make it an argument, e.g.:

@compilerEvaluable(recursionLimit = 512)

That way, when an evaluable function is exported, the compiler can deal with it. Otherwise the user of a library will have to set compiler flags, which would be a pain. Also, a flag is coarse-grained.

Yeah, I see what you mean. Not everyone is a complexity analysis fan (and student) like I am. I'd agree: it's better to keep it simple for now.

As for #2, I meant basically what Howard just wrote ^^

(As an optional argument, of course)

Also thanks for the feedback!

I don't think that we thought this through precisely, because it's not something that we're proposing to actually do in this proposal.

Your guess sounds like a reasonable approach to me. I can think it through a bit and update the proposal.

Interesting, I hadn't realized that. I'll think more about this.

If it ends up being complicated, I totally support leaving MemoryLayout for a future extension. Leaving it out makes the #staticAssert that we introduce not-very-useful (the only useful examples I can think of involve MemoryLayout). That's okay because our main goal is to lay groundwork for useful static features, rather than to actually introduce useful static features.

To make sure I understand you, do you mean we should support @compilerEvaluable functions like the following contentsEqual?

protocol Container {
  associatedtype A
  func get() -> A
}

@compilerEvaluable
func contentsEqual<C: Container>(_ x: C, _ y: C) -> Bool where C.A: Equatable {
  return x.get() == y.get()
}
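
For instance (just an illustration), folding a call would require the evaluator to see through a concrete conformance:

struct IntBox: Container {
  let value: Int
  func get() -> Int { return value }
}

#staticAssert(contentsEqual(IntBox(value: 1), IntBox(value: 1)))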

This sounds pretty sensible to me. I'll think through it and update the proposal.

Thanks for catching those! I'll fix them to be consistently @compilerEvaluable.

1 Like

I see a problem with an argument to the @compilerEvaluable annotation: for large, varying-depth functions, this argument does not let you set the depth at the callsite. For example:

// Some external module that I cannot modify.
@compilerEvaluable(recursionLimit = 5)
func factorial(_ n: Int) -> Int {
  return n == 0 ? 1 : n * factorial(n - 1)
}
// My code.
#staticAssert(factorial(10) == 3628800)

This will fail to evaluate, and there's nothing I can do in my code to fix it.

Point taken about it being painful for users of libraries to set compiler flags. I expect it'll be uncommon enough that very few users will feel this pain.

Alternatively, maybe allow users to set max recursion depth at the callsite? Not sure if there's a non-invasive way to do that though.
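
Purely as a hypothetical spelling (not something we're proposing), it might look like:

// Hypothetical: override the limit at the use site instead of the declaration.
#staticAssert(factorial(10) == 3628800, recursionLimit: 12)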

2 Likes

Duh, you're right! I wasn't thinking about that :laughing:

Yes, if you could, you'd have settable depth. Hmm :thinking:.

Let's leave aside the general case for this point:

What about tail recursion? Tail recursive functions can be turned into loops at compile time. Their depth always depends on the parameters. I think those are prime candidates for parameter-dependent recursion depth, since you'd have the guarantee that they'd terminate.
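
For example (just a sketch), a tail-recursive factorial whose recursion depth is exactly n + 1:

// The recursive call is the last thing evaluated, so it can be rewritten as a
// loop; the depth is determined entirely by `n`.
@compilerEvaluable
func factorial(_ n: Int, _ accumulator: Int = 1) -> Int {
  return n == 0 ? accumulator : factorial(n - 1, accumulator * n)
}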

Okay, so. Now coming back to the general case.
What if we made it possible to express depth in terms of the parameters to the function?

For example, in your snippet:

/* Original code */
// Some external module that I cannot modify. 
@compilerEvaluable(recursionLimit = 5) 
func factorial(_ n: Int) -> Int { 
  return n == 0 ? 1 : n * factorial(n - 1) 
}

/* Depth depends on function parameter */
// Some external module that I cannot modify. 
@compilerEvaluable(recursionLimit = n + 2) 
func factorial(_ n: Int) -> Int { 
  return n == 0 ? 1 : n * factorial(n - 1) 
}
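
With a bound like that (again, hypothetical), the earlier callsite would fit within the limit the library author declared:

// My code: factorial(10) recurses to depth 11, within the n + 2 = 12 bound.
#staticAssert(factorial(10) == 3628800)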

I think that's what I wanted when I thought of #3, but now I was able to formalize it in a (maybe) practical way.

The cool thing about this is that it doesn't change the syntax for normal functions. Only users of the @compilerEvaluable attribute would need to know about this feature.


Now. Can we guarantee the depth to be polynomial? In a non-obtrusive way? I think we can. Let's see:

We could limit the syntax of the depth formula to this:

  1. UInt parameters and results.
  2. Allow for constants (e.g. n + 2)
  3. Allow for the 4 basic arithmetic operators: +, -, *, /

(Semantics: anything less than or equal to 0 will be turned into 1.)

And that's it. No exponentiation, no ifs, no binary expressions. Although some of those could be added as well (like ternary operators, which are technically if-then-else).
But this syntax, as a proof of concept, prohibits exponential depth.

What problem is being solved here? We want something that is a) simple, b) not user-exposed, and c) allows the compiler to detect an approximation of "infinite compile time".

We cannot differentiate between something that takes a long time to execute and something that is infinite (that is the halting problem :-), and we don't want the compiler to depend on something non-deterministic or unstable like wall time.

What C++ compilers do in practice is pick a recursion depth. This depth is arbitrary, but we don't want people writing things like "fib" as a constant expression. Such a thing is possible to express, but would dramatically slow down compiles even when the expression is not infinite.
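
For example (illustrative only), a naive Fibonacci terminates but does an exponential amount of work:

// fib(40) makes over 300 million recursive calls before folding to 102334155,
// even though it always terminates.
@compilerEvaluable
func fib(_ n: Int) -> Int {
  return n < 2 ? n : fib(n - 1) + fib(n - 2)
}

#staticAssert(fib(40) == 102334155)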

I don't see any particular use in exposing a recursion depth to the programmer; we should do something like Clang does, where the limit is big enough that users don't have to manage it. If you hit the limit, then "you're doing it wrong"; you shouldn't change the limit.

-Chris

8 Likes

Correct, this is one of the motivations for the proposal that needs further design work. The proposal does not include the elimination of @_transparent + overflow checks; it is a building block that takes us a step towards that.

Sure, splitting MemoryLayout out to its own follow-on would be fine; it isn't critical at all, it is just a motivating example. I'm confused about one piece though: are you saying that this layering violation is an artifact of how the compiler currently works, or are you saying that it is a layering we should strongly defend, and you'd be opposed to violating it?

This could be added for sure, but it is of dubious value. It would allow constant folding in the case where an existential's initialization is visible. I'm not sure what the use of this would be in practice, but if it is important, it is definitely addable in further proposals. Supporting (e.g.) floating point seems like a higher priority to me, and we punted that off to a future proposal to allow this one to focus on the core model.

Thanks Joe!

-Chris

1 Like

The limit isn't really a self-recursion limit; it is a call tree limit. Self recursion is one thing that can hit the limit, but arbitrary other computations can too.
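
For example (my sketch, not from the proposal), mutually recursive functions never call themselves directly, but they grow the same call tree:

@compilerEvaluable
func isEven(_ n: Int) -> Bool {
  return n == 0 ? true : isOdd(n - 1)
}

@compilerEvaluable
func isOdd(_ n: Int) -> Bool {
  return n == 0 ? false : isEven(n - 1)
}

// Neither function calls itself, but evaluating this builds a call tree 11 deep.
#staticAssert(isEven(10))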

-Chris

1 Like

Tail recursion doesn't help the implementation here. The bound is not on stack depth; it is on the amount of computation being performed at compile time. The proposal doesn't include loops, but if it did, we would also include a constant limit on the trip count of loops.

-Chris

Okay, fair point. I think I got too deep (pun not intended) into thinking of these as if they were macros. I forgot for a moment that we're calculating constants here. There's no code generation per se (for which more detailed depth control would be useful). It's just calculation of a number, in most cases.

(Whenever we get macros, I'll advocate and brainstorm for a similar tool (bounding depth to a polynomial) ^^)

Thanks Chris :slight_smile:

It’s really exciting to see this! It looks like a really solid foundation for compile-time evaluation. I don’t have any specific questions or feedback about the design itself.

The section on generics did get me wondering, though, whether there will be frustrating limitations caused by the generics model, especially when we get to more sophisticated applications such as allowing generic parameters to be of compiler-evaluable types.

I’m looking forward to seeing where this goes!

3 Likes

I really appreciate how this proposal is written (starting from a simple base case and adding complexity in small pieces), which makes it easy to understand even for people who don't have much experience with or knowledge of this area, like me.

6 Likes

If Swift eventually gets a hygienic macro system, the macro system could use the ability to evaluate expressions at compile time.

Could you elaborate a little on how this would fit into a macro system? It seems rather underpowered for a Rust-proc-macro-style system, and I can't really imagine how it would help much in a simpler text-replacement-but-hygienic style system.

We are not proposing (or planning to propose!) a macro system, so this is a very speculative claim, which may become irrelevant either because a) Swift never gets a macro system, or b) this is a useless feature for whatever it does get.

It was informed by the fact that GCC/Clang have things similar to this. For example, despite having "?:", they have a similar but different __builtin_choose_expr that forces a constant condition and provides stronger guarantees about the LHS/RHS of the expression. If you're curious about the insanity of GCC's macro system, __builtin_choose_expr is not a bad place to start reading. :-)

That said, it is important to know that I'm not claiming that __builtin_choose_expr can be expressed by this (it can't, because choose_expr requires folding in the AST); it is just an example of the wacky sort of stuff in this space.

-Chris

3 Likes

It occurs to me that it is probably better to define the evaluation limit in terms of a cap on the number of instructions evaluated to fold the constant. When defined this way, a single limit covers looping, recursion, and arbitrary other execution models.

-Chris