dminik 12 hours ago

I feel that I have to point this out once again, because the article goes so far as to state that:

> With this last improvement Zig has completely defeated function coloring.

I disagree with this. Let's look at the 5 rules from the famous "What color is your function?" article referenced here.

> 1. Every function has a color

Well, you don't have async/sync/red/blue anymore, but you now have IO and non-IO functions.

> 2. The way you call a function depends on its color.

Now, technically this seems to be solved, but you still need to provide IO as a parameter. Non-IO functions don't need/take it.

It looks like a regular function call, but there's no real difference.

> 3. You can only call a red function from within another red function

This still applies. You can only call IO functions from within other IO functions.

Technically you could pass in a new executor, but is that really what you want? Not to mention that you can also do this in languages that don't claim to solve the coloring problem.

> 4. Red functions are more painful to call

I think the spirit still applies here.

> 5. Some core library functions are red

This one is really about some things being only possible to implement in the language and/or stdlib. I don't think this applies to Zig, but it doesn't apply to Rust either for instance.

Now, I think these rules need some tweaking, but the general problem behind function coloring is that of context. Your function needs some context (an async executor, auth information, an allocator, ...). In order to call such a function you also need to provide the context. Zig hasn't really solved this.

That being said, I don't think Zig's implementation here is bad. If anything, it does a great job at abstracting the usage from the implementation. This is something Rust fails at spectacularly.

However, the coloring problem hasn't really been defeated.

  • mlugg 12 hours ago

    The key difference to typical async function coloring is that `Io` isn't something you need specifically for asynchronicity; it's something which (unless you make a point to reach into very low-level primitives) you will need in order to perform any IO, including reading a file, sleeping, getting the time, etc. It's also just a value which you can keep wherever you want, rather than a special attribute/property of a function. In practice, these properties solve the coloring problem:

    * It's quite rare for a function to unexpectedly gain a dependency on "doing IO" in general. In practice, most of your codebase will have access to an `Io`, and only leaf functions doing pure computation will not need them.

    * If a function does start needing to do IO, it almost certainly doesn't need to actually take it as a parameter. As in many languages, it's typical in Zig code to have one type which manages a bunch of core state, and which the whole codebase has easy access to (e.g. in the Zig compiler itself, this is the `Compilation` type). Because of this, despite the perception, Zig code doesn't usually pass (for instance) allocators explicitly all the way down the function call graph! Instead, your "general purpose allocator" is available on that "application state" type, so you can fetch it from essentially wherever. IO will work just the same in practice. So, if you discover that a code path you previously thought was pure actually does need to perform IO, then you don't need to apply some nasty viral change; you just grab `my_thing.io`.

    I do agree that in principle, there's still a form of function coloring going on. Arguably, our solution to the problem is just to color every function async-colored (by giving most or all of them access to an `Io`). But it's much like the "coloring" of having to pass `Allocator`s around: it's not a problem in practice, because you'll basically always have easy access to one even if you didn't previously think you'd need it. I think seasoned Zig developers will pretty invariably agree with the statement that explicitly passing `Allocator`s around really does not introduce function coloring annoyances in practice, and I see no reason that `Io` would be particularly different.

    • SkiFire13 an hour ago

      > I do agree that in principle, there's still a form of function coloring going on. Arguably, our solution to the problem is just to color every function async-colored

      I feel like there are a few issues with this approach:

      - you basically rely on the compiler/stdlib to silently switch the async implementation, effectively implementing a sort of hidden control flow which IMO doesn't really fit Zig

      - this only solves the "visible" coloring issue of async vs non-async functions, but does not try to handle the issue of blocking vs non-blocking functions, rather it hides it by making all functions have the same color

      - you're limiting the set of async operations to the ones supported in the `Io`'s vtable. This forces it to e.g. include mutexes, even though they are not really I/O, because they might block and hence need async support. But if I wrote my own channel how would this design support it?

    • dminik 11 hours ago

      > It's quite rare for a function to unexpectedly gain a dependency on ...

      If this was true in general, the function coloring problem wouldn't be talked about.

      However, the second point is more interesting. I think there's a bit of a Stockholm syndrome thing here with Zig programmers and Allocator. It's likely that Zig programmers won't mind passing around an extra param.

      If anything, it would make sense to me to have IO contain an allocator too. Allocation is a kind of IO too. But I guess it's going to be 2 params from now on.

      • laserbeam 5 hours ago

        > If anything, it would make sense to me to have IO contain an allocator too. Allocation is a kind of IO too.

        Io in zig is for “things that can block execution”. Things that could semantically cause a yield of any kind. Allocation is not one of those things.

        Also, it’s perfectly reasonable and sometimes desirable to have 13 different allocators in your program at once. Short lived ones, long lived ones, temporary allocations, super specific allocators to optimize some areas of your game…

        There are fewer reasons to want 2 different strategies to handle concurrency at the same time in your program, as they could end up deadlocking on each other. Sure, you may want one in debug builds, another in release, another when running tests, but there are far fewer use cases for them running side by side.

        • chrisohara 2 hours ago

          > Io in zig is for “things that can block execution”. Things that could semantically cause a yield of any kind. Allocation is not one of those things.

          The allocator may yield to the OS when requesting or releasing memory (e.g. sbrk, mmap, munmap)?

          • laserbeam 36 minutes ago

            Yielding in this context means to a different “thread” in your context, not the OS. If you want to express “this is a point where the program can do something else” it is a yield. If you block and can’t switch to something else… it is not.

            So if you’re using an API like mmap that way, you should think of it as IO (I don’t think you can, but am not sure).

            • throwawaymaths 15 minutes ago

              the page allocator, which is the root of many allocators, calls mmap. of course the fixed buffer allocator does not.

          • messe 2 hours ago

            I don't find that a particularly compelling argument in this case, because so can accessing any memory address if it's not currently swapped in.

      • throwawaymaths 8 hours ago

        > But I guess it's going to be 2 params from now on.

        >> So, if you discover that a code path you previously thought was pure actually does need to perform IO, then you don't need to apply some nasty viral change; you just grab `my_thing.io`

        • camgunz 6 hours ago

          Python, for example, will let you call async functions inside non-async functions, you just have to set up the event loop yourself. This isn't conceptually different than the Io thing here.

          • throwawaymaths an hour ago

            except you can't "pass the same event loop in multiple locations". it's also not an easy lift. the zig std will provide a few standard implementations which would be trivial to drop in.

      • Gibbon1 6 hours ago

        I do something like that with event driven firmware. There is an allocator as part of the context. And the idea that the function is executing under some context seems fine to me.

    • FlyingSnake 5 hours ago

      > Arguably, our solution to the problem is just to color every function async-colored.

      This is essentially how Golang achieved color-blindness.

    • ginko 9 hours ago

      > It's quite rare for a function to unexpectedly gain a dependency on "doing IO" in general.

      From the code sample it looks like printing to stdio will now require an Io param. So won’t you now have to pass that down to wherever you want to do a quick debug printf?

      • flohofwoe 5 hours ago

        Zig has specifically a std.debug.print() function for debug printing and a std.log module for logging. Those don't necessarily need to be wired up with the whole stdio machinery.

      • dwattttt 9 hours ago

        I'm not familiar with Zig, but won't the classic "blocking" APIs still be around? I'd rather a synchronous debug print either way.

        • laserbeam 5 hours ago

          Yes. You can always use the blocking syscalls your OS provides and ignore the Io system for stuff like that. No idea how they’d do that by default in the stdlib, but it will definitely be possible.

      • throwawaymaths 9 hours ago

        std.debug.print(..) prints to stderr which does not need an io param.

  • ozgrakkurt 5 hours ago

    You are skipping the massive point here.

    If you are using a library in rust, it has to be async await, tokio, send+sync and all the other crap. Or if it is sync api then it is useless for async application.

    This approach of passing IO removes this problem and this is THE main problem.

    This way you don’t have to use procedural macros or other bs to implement multi versioning for the functions in your library, which doesn’t work well anyway in the end.

    https://nullderef.com/blog/rust-async-sync/

    You can find 50 other ones like this by searching.

    To be honest I don’t expect they will solve cooperative scheduling and high-performance, optionally thread-per-core async soon, and the API won’t be that good anyway. But I hope it solves all that in the future.

    • tcfhgj 4 hours ago

      > If you are using a library in rust, it has to be async await, tokio, send+sync and all the other crap

      Send and Sync are only required if you want to access something from multiple threads, which isn't required by async/await (parallelism vs concurrency).

      1) You can use async/await without parallelism, and 2) Send and Sync aren't a product of async/await in Rust, but of memory safety generally, i.e. you need Send when something can/is allowed to move between threads.

      • m11a 2 hours ago

        Yes, but async Rust is basically built on tokio's runtime, which is what most the big async libraries depend on, like hyper/axum/tokio etc. And tokio is a thread-per-core work-stealing architecture, which requires Send + Sync bounds everywhere. You can avoid them if you depend on tokio-proper, but it's more icky when building on something like axum, where your application handlers also require these bounds.

        A good article on this: https://emschwartz.me/async-rust-can-be-a-pleasure-to-work-w...

    • dwattttt 4 hours ago

      > Or if it is sync api then it is useless for async application.

      The rest is true, but this part isn't really an issue. If you're in an async function you can call sync functions still. And if you're worried it'll block and you can afford that, I know tokio offers spawn_blocking for this purpose.

    • dminik 3 hours ago

      I'm not skipping anything. And in fact acknowledge this exact point:

      > That being said, I don't think Zig's implementation here is bad. If anything, it does a great job at abstracting the usage from the implementation. This is something Rust fails at spectacularly.

  • kristoff_it 11 hours ago

    Here's a trick to make every function red (or blue? I'm colorblind, you decide):

        var io: std.Io = undefined;
    
        pub fn main() !void {
           var impl = ...;
           io = impl.io();
        }
    
    Just put io in a global variable and you won't have to worry about coloring in your application. Are your functions blue, red or green now?

    Jokes aside, I agree that there's obviously a non-zero amount of friction to using the `Io` interface, but it's something qualitatively very different from what causes actual real-world friction around the use of async await.

    > but the general problem behind function coloring is that of context

    I would disagree, to me the problem seems, from a practical perspective that:

    1. Code can't be reused because the async keyword statically colors a function as red (e.g. python's blocking redis client and asyncio-redis). In Zig any function that wants to do Io, be it blue (non-async) or red (async), still has to take in that parameter, so from that perspective the Io argument is irrelevant.

    2. Using async and await opts you automatically into stackless coroutines with no way of preventing that. With this new I/O system even if you decide to use a library that internally uses async, you can still do blocking I/O, if you want.

    To me these seem like the real problems of function coloring.

    • dminik 11 hours ago

      Well, it's not really a joke. That's a valid strategy that languages use. In Go, every function is "async". And it basically blocks you from doing FFI (or at least it used to?). I wonder if Zig will run into similar issues here.

      > 1. Code can't be reused because the async keyword statically colors a function

      This is fair. And it's also a real pain point with Rust. However, it's funny that the "What color is your function?" article doesn't even really mention this.

      > 2. Using async and await opts you automatically into stackless coroutines with no way of preventing that

      This however I don't think is true. Async/await is mostly syntax sugar.

      In Rust and C# it uses stackless coroutines.

      In JS it uses callbacks.

      There's nothing preventing you from making await suspend a green thread.

      • kristoff_it 11 hours ago

        I should have specified that better; of course async and await can be lowered to different things (that's what Zig does after all). What I wanted to say is that that's how it works in general. JS is a good counter example, but for all other mainstream languages, async means stackless coroutines (python, ruby, c#, rust, ...).

        Which means that if I want to use a dependency that uses async await, it's stackless coroutines for me too whether I like it or not.

    • ismailmaj 11 hours ago

      The global io trick would totally be valid if you’re writing an application (i.e. not a library) and don’t have a use for two different implementations of io.

      • laserbeam 4 hours ago

        There are plenty of libraries out there which require users to do an init() call of some sorts at startup. It is perfectly possible to design a library that only works with 1 io instance and gets it at init(). Whether people like or want that… I have no clue.

      • throwawaymaths 9 hours ago

        you could still have a library-global io, let the user set it as desired.

        > use of two different implementations of io

        functionally rare situation.

  • ayuhito 11 hours ago

    Go also suffers from this form of “subtle coloring”.

    If you’re working with goroutines, you would always pass in a context parameter to handle cancellation. Many library functions also require context, which poisons the rest of your functions.

    Technically, you don’t have to use context for a goroutine and could stub every dependency with context.Background, but that’s very discouraged.

    • arp242 5 hours ago

      Having all async happen completely transparently is not really logically possible. Asynchronous logic is frequently fundamentally different from synchronous logic, and you need to do something different one way or the other. I don't think that's really the same as "function colouring".

      And context is used for more than just goroutines. Even a completely synchronous function can (and often does) take a context, and the cancellation is often useful there too.

    • tidwall 11 hours ago

      Context is not required in Go and I personally encourage you to avoid it. There is no shame in blazing a different path.

      • schrodinger 10 hours ago

        Why do you encourage avoiding it? Afaik it's the only way to early-abort an operation since Goroutines operate in a cooperative, not preemptive, paradigm. To be very clear, I'm asking this completely in good faith looking to learn something new!

        • sapiogram an hour ago

          > Afaik it's the only way to early-abort an operation since Goroutines operate in a cooperative, not preemptive, paradigm.

          I'm not sure what you mean here. Preemptive/cooperative terminology refers to interrupting (not aborting) a CPU-bound task, in which case goroutines are fully preemptive on most platforms since Go 1.14; check the release notes for more info. However, this has nothing to do with context.

          If you're referring to early-aborting IO operations, then yes, that's what context is for. However, this doesn't really have anything to do with goroutines, you could do the same if the runtime was built on OS threads.

        • pjmlp 9 hours ago

          You are expecting them to actually check the value, there is nothing preemptive.

          Another approach is special messages over a side channel.

          • deepsun 7 hours ago

            So instead of a context you need to pass a channel. Same problem.

            • pjmlp 7 hours ago

              Not necessarily, that is one of the reasons OOP exists.

              Have a struct representing the set of associated activities, owning the channel.

              • ozgrakkurt 5 hours ago

                soo, a context?

                • pjmlp 3 hours ago

                  Nope, because I didn't mention it was to be passed around as a compulsory parameter, rather have your logic organised across structs with methods, hide most details behind interfaces.

                • ncruces 5 hours ago

                  You can take that view, yes.

                  But if you store your context in a struct (which is not the recommended “best practice” – but which you can do) it's no longer a function coloring issue.

                  I do that in one of my libraries and I feel that it's the right call (for that library).

                  • lenkite 2 hours ago

                    If the struct has a well-scoped and short-lived lifecycle, then it is actually better to put the context in the struct. Many Go libraries including the stdlib do this despite not being "best practice".

                    An exception to the short-lived rule is to put context in your service struct and pass it as the base context when constructing the HTTP server, so that when you get a service shutdown signal, one can cancel requests gracefully.

      • nu11ptr 10 hours ago

        What would you use in its place? I've never had an issue with it. I use it for 1) early termination 2) carrying custom request metadata.

        I don't really think it is fully the coloring problem, because you can easily call non-context functions from context functions (but not the other way around, so it's a one-way coloring issue); you just need to be aware that the cancellation chain of course stops there.

    • phplovesong 5 hours ago

      Like you said, you don't NEED context. It's just something that's available if you need it. I still think Go/Erlang has one of the best concurrency stories out there.

    • oefrha 5 hours ago

      The thing about context is it can be a lot more than a cancellation mechanism. You can attach anything to it—metadata, database client, logger, whatever. Even Io and Allocator if you want to. Signatures are future-proof as long as you take a context for everything.

      At the end of the day you have to pass something for cooperative multitasking.

      Of course it’s also trivial to work around if you don’t like the pattern, “very discouraged” or not.

    • zer00eyz 5 hours ago

      > If you’re working with goroutines, you would always pass in a context parameter to handle cancellation.

      The utility of context could be called a subtle coloring. But you do NOT need context at all. If you're dealing with data+state (around queue and bus processing) it's easy to throw things into a goroutine and let the chips fall where they will.

      > which poisons the rest of your functions

      You are free to use context dependent functions without a real context: https://pkg.go.dev/context#TODO

  • n42 11 hours ago

    Aside from the ridiculous argument that function parameters color them, the assertion that you can’t call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in.

    • throwawaymaths 6 minutes ago

      > you can’t call a function that takes IO from inside a function that does not is false, since you can initialize one to pass it in

      that's not true. suppose a function foo(anytype) takes a struct, and expects method bar() on the struct.

      you could send foo() the struct type Sync whose bar() does not use io. or you could send foo() the struct type Async whose bar uses an io stashed in the parameter, and there would be no code changes.

      if you don't prefer compile time multireification, you can also use type erasure and accomplish the same thing with a vtable.

    • dminik 11 hours ago

      To me, there's no difference between the IO param and async/await. Adding either one causes it to not be callable from certain places.

      As for the second thing:

      You can do that, but... You can also do this in Rust. Yet nobody would say Rust has solved function coloring.

      Also, check this part of the article:

      > In the less common case when a program instantiates more than one Io implementation, virtual calls done through the Io interface will not be de-virtualized, ...

      Doing that is an instant performance hit. Not to mention annoying to do.

      • bmurphy1976 10 hours ago

        >To me, there's no difference between the IO param and async/await.

        You can't pass around "async/await" as a value attached to another object. You can do that with the IO param. That is very different.

        • jolux 7 hours ago

          > You can't pass around "async/await" as a value attached to another object

          Sure you can? You can just pass e.g. a Task around in C# without awaiting it, it's when you need a result from a task that you must await it.

        • dminik 2 hours ago

          Conceptually, there's not much of a difference.

          If you have a sync/non-IO function that now needs to do IO, it becomes async/IO. And since IO and async are viral, its callers must also now be IO/async and call it with IO/await. All the way up the call stack.

        • MrJohz 5 hours ago

          Sure you can. An `async` function in Javascript is essentially a completely normal function that returns a promise. The `async`/`await` syntax is a convenient syntax sugar for working with promises, but the issue would still exist if it didn't exist.

          More to the point, the issue would still exist even if promises didn't exist — a lot of Node APIs originally used callbacks and a continuation-passing style approach to concurrency, and that had exactly the same issues.

      • delamon 4 hours ago

        > Doing that is an instant performance hit. Not to mention annoying to do.

        The cost of virtual dispatch on the IO path is almost always negligible. It is literally one indirect call next to a syscall. I doubt you can even measure the difference.

      • n42 11 hours ago

        You’re allowed to not like it, but that doesn’t change that your argument that this is a form of coloring is objectively false. I’m not sure what Rust has to do with it.

        • dminik 2 hours ago

          It's funny, but I do actually like it. It's just that it walks like a duck, swims like a duck and quacks like a duck.

          I don't have a problem with IO conceptually (but I do have a problem with Zig ergonomics, allocator included). I do have a problem with claiming you defeated function coloring.

          Like, look. You didn't even get rid of await ...

          > try a_future.await(io);

          • mlugg 2 hours ago

            I mean... you use `await` if you've used `async`. It's your choice whether or not you do; and if you don't want to, your callers and callees can still freely `async` and `await` if they want to. I don't understand the point you're trying to make here.

            To be clear, where many languages require you to write `const x = await foo()` every time you want to call an async function, in Zig that's just `const x = foo()`. This is a key part of the colorless design; you can't be required to acknowledge that a function is async in order to use it. You'll only use `await` if you first use `async` to explicitly say "I want to run this asynchronously with other code here if possible". If you need the result immediately, that's just a function call. Either way, your caller can make its own choice to call you or other functions as `async`, or not to; as can your callees.

            • dminik an hour ago

              > in Zig that's just ...

              Well, no. In zig that's `const x = foo(io)`.

              The moment you take or even know about an io, your function is automatically "generic" over the IO interface.

              Using stackless coroutines and green threads results in a completely different codegen.

              I just noticed this part of the article:

              > Stackless Coroutines
              >
              > This implementation won’t be available immediately like the previous ones because it depends on reintroducing a special function calling convention and rewriting function bodies into state machines that don’t require an explicit stack to run.
              >
              > This execution model is compatible with WASM and other platforms where stack swapping is not available or desireable.

              I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.

              • mlugg an hour ago

                > Well, no. In zig that's `const x = foo(io)`.

                If `foo` needs to do IO, sure. Or, more typically (as I mentioned in a different comment), it's something like `const x = something.foo()`, and `foo` can get its `Io` instance from `something` (in the Zig compiler this would be a `Compilation` or a `Zcu` or a `Sema` or something like that).

                > Using stackless coroutines and green threads results in a completely different codegen.

                Sure, but that's abstracted away from you. To be clear, stackless coroutines are the only case where the codegen of callers is affected, which is why they require a language feature. Even if your application uses two `Io` implementations for some reason, one of which is based on stackless coroutines, functions using the API are not duplicated.

                > I wonder what will happen if you try to await a future created with a green thread IO using a stackless coroutine IO.

                Mixing futures from any two different `Io` implementations will typically result in Illegal Behavior -- just like passing a pointer allocated with one `Allocator` into the `free` of a different `Allocator` does. This really isn't a problem. Even with allocators, it's pretty rare for people to mess this up, and with allocators you often do have multiple of them available in one place (e.g. a gpa and an arena). In contrast, it will be extraordinarily rare to have more than one `Io` lying around. Even if you do mess it up, the IB will probably just trip a safety check, so it shouldn't take you too long to realise what you've done.

        • rowanG077 8 hours ago

          Sure it is function coloring, just in a different form. `async` in other languages is something like an implicit parameter; in Zig they made this implicit parameter explicit. Is that better/more ergonomic? I don't know yet. The sugar is different, but the end result is the same. Unless you can show me a concrete example of things that the approach zig has taken can do that aren't possible in, say, rust, I don't buy that it's not just another form of function coloring.

          • throwawaymaths 8 hours ago

            > Unless you can show me concrete example

            add io to a struct and let the struct keep track of its own io.

            • gfaster 7 hours ago

              Unless I'm misunderstanding, that's effectively implementing Future for the struct

              • masklinn 6 hours ago

                It’s more like adding a runtime handle to the struct.

                Modulo that I’m not sure any language with a sync/async split has an “async” runtime built entirely out of sync operations. So a library can’t take a runtime from a caller and get whatever implementation the caller decided to use.

                • dwattttt 3 hours ago

                  > I’m not sure any langage with a sync/async split has an “async” runtime built entirely out of sync operations.

                  You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtimes end up like this.

                  I've implemented Future (Rust) on a struct for a Windows operation based only on C calls into the OS. The struct maintains everything needed to know the state of the IO, and while I coupled the impl to the runtime for efficiency (I've written it too), it's not strictly necessary from memory.

                  • masklinn an hour ago

                    > You get into hairy problems of definition, but you can definitely create an "async" runtime out of "sync" operations: implement an async runtime with calls to C. C doesn't have a concept of "async", and more or less all async runtime end up like this.

                    While C doesn't have async, OSes generally provide APIs which are non-blocking, and that is what async runtimes are implemented on top of.

                    By sync operations I mean implementing an "async" runtime entirely atop blocking operations, without bouncing them through any sort of worker threads or anything.

      • throwawaymaths 8 hours ago

        > Adding either one causes it to not be callable from certain places.

        you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?

        as a trivial example the fn main entrypoint in zig will never take an io parameter... how do you suppose you'd bootstrap the io parameter that you'd eventually need. this is unlike other languages where main might or might not be async.

        • masklinn 6 hours ago

          You can call an async function from a function that is not async by passing in a global runtime (/ event loop).

          As a trivial example the main entry point in rust is never async. How’d you suppose you’d bootstrap the runtime that you’d eventually need.

          This is pretty much like every other langage.

          • almostgotcaught 2 hours ago

            People in software really do have poor abilities to see the forest for the trees don't they lol

        • ginko 8 hours ago

          >you can call a function that requires an io parameter from a function that doesn't have one by passing in a global io instance?

          How will that work with code mixing different Io implementations? Say a library pulled in uses a global Io instance while the calling code is using another.

          I guess this can just be shot down with "don't do that", but it feels like a new kind of pitfall to get into.

          • throwawaymaths 8 hours ago

            > Say a library pulled in uses a global Io instance while the calling code is using another.

            it'll probably carry a stigma like using unsafe does.

          • kristoff_it 5 hours ago

            while not really idiomatic, as long as you let the user define the Io instance (e.g. with some kind of init function), it doesn't really matter how that value is accessed within the library itself.

            that's why this isn't really the same as async "coloring"

          • TUSF 8 hours ago

            Zig already has an Allocator interface that gets passed around, and the convention is that libraries don't select an Allocator themselves; they only provide APIs that accept allocators. If a certain process works best with an Arena, then the API may wrap a provided allocator in an Arena, but it doesn't decide on its own underlying allocator for the user.

            For Zig users, adopting this same mindset for Io is not really anything new. It's just another parameter that occasionally needs to be passed into an API.

  • flohofwoe 5 hours ago

    > In order to call such a function you also need to provide the context. Zig hasn't really solved this.

    It is much more flexible though since you don't need to pass the IO implementation into each function that needs to do IO. You could pass it once into an init function and then use that IO impl throughout the object or module. Whether that's good style is debatable - the Zig stdlib currently has containers that take an allocator in the init function, but those are on the way out in favour of explicitly taking the allocator in each function that needs to allocate - but the user is still free to write a minimal wrapper to restore the 'pass allocator into init' behaviour.

    Odin has an interesting solution in that it passes an implicit context pointer into each function, but I don't know if the compiler is clever enough to remove the overhead for called functions that don't access the context (since it also needs to peek into all called functions - theoretically Zig with it's single-compilation-unit approach could probably solve that problem better).

    • tcfhgj 4 hours ago

      You can write a wrapper in other langs, too, e.g. in Rust: block_on(async_fn)
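
      For illustration, here is roughly what such a wrapper does under the hood: a minimal `block_on` built only on the standard library (a sketch; real runtimes like pollster add refinements such as parking):

      ```rust
      use std::future::Future;
      use std::pin::pin;
      use std::sync::{Arc, Condvar, Mutex};
      use std::task::{Context, Poll, Wake, Waker};

      // A waker that flips a flag and signals a condvar when the future can progress.
      struct Signal {
          ready: Mutex<bool>,
          cond: Condvar,
      }

      impl Wake for Signal {
          fn wake(self: Arc<Self>) {
              *self.ready.lock().unwrap() = true;
              self.cond.notify_one();
          }
      }

      // Drive a future to completion on the current thread, blocking between polls.
      fn block_on<F: Future>(fut: F) -> F::Output {
          let mut fut = pin!(fut);
          let signal = Arc::new(Signal { ready: Mutex::new(false), cond: Condvar::new() });
          let waker = Waker::from(signal.clone());
          let mut cx = Context::from_waker(&waker);
          loop {
              match fut.as_mut().poll(&mut cx) {
                  Poll::Ready(out) => return out,
                  Poll::Pending => {
                      // Sleep until wake() is called.
                      let mut ready = signal.ready.lock().unwrap();
                      while !*ready {
                          ready = signal.cond.wait(ready).unwrap();
                      }
                      *ready = false;
                  }
              }
          }
      }

      fn main() {
          // An async fn is just a sync fn returning a Future; block_on bridges the colors.
          async fn add(a: i32, b: i32) -> i32 { a + b }
          println!("{}", block_on(add(2, 3))); // prints 5
      }
      ```

      This is the sync-calls-async direction; it works in any runtime-agnostic async code, which is exactly where the ecosystem caveats discussed below come in.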

  • tcfhgj 3 hours ago

    > If anything, it does a great job at abstracting the usage from the implementation. This is something Rust fails at spectacularly.

    Could you expand on this? I don't get what you mean

    • koito17 3 hours ago

      I am not very experienced in async Rust, but it seems there are some pieces of async Rust that rely too much on tokio internals, so using an alternative runtime (like pollster) results in broken code.

      Searching for comments mentioning "pollster" and "tokio" on HN brings a few results, but not one I recall seeing a while ago where someone demonstrated an example of a library (using async Rust) that crashes when not using tokio as the executor.

      Related documentation: https://rust-lang.github.io/async-book/08_ecosystem/00_chapt...

      • conradludgate 3 hours ago

        There are two details that are important to highlight. tokio is actually 2 components: the async scheduler, and the IO runtime. Pollster is only a scheduler, and does not offer any IO functionality. You can actually use tokio libraries with pollster, but you need to register the IO runtime (and spawn a thread to manage it) - this is done with Runtime::enter(), which configures the thread-local interface so any uses of tokio IO know what runtime to use.

        There are ideas to abstract the IO runtime interface into the async machinery (in Rust that's the Context object that schedulers pass into the Future) but so far that hasn't gotten anywhere.

      • pimeys 3 hours ago

        Yep. The old async wars in the Rust ecosystem. It's the AsyncRead and AsyncWrite traits. Tokio has its own, there was a standard brewing at the same time in the futures crate. Tokio did their own thing, people burnt out, these traits were never standardized to std.

        So you cannot use most of the async crates easily outside Tokio.

      • tcfhgj 3 hours ago

        thanks, got it!

    • dminik 2 hours ago

      Sure. Let's do an imaginary scenario. Let's say that you are the author of a http request library.

      Async hasn't been added yet, so you're using `std::net::TcpStream`.

      All is well until async comes along. Now, you have a problem. If you use async, your previous sync users won't be able to (easily) call your functions. You're looking at an API redesign.

      So, you swallow your pride and add an async variant of your functionality. Since Tokio is most popular, you use `tokio::net::TcpStream`.

      All is well, until a user comes in and says "Hey, I would like to use your library with smol (a different async runtime)". Now what do you do? Add a third variant of your code using `smol::net::TcpStream`? It's getting a bit ridiculous, and smol isn't the only alternative runtime.

      One solution is to do what Zig does, but in Rust there isn't really an agreed-upon way to do it. The stdlib does not even provide AsyncRead/AsyncWrite traits that would let you invert your code, work with streams provided from above, and keep your library executor-agnostic.
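
      For the sync side, that inversion is straightforward with std's `Read` trait (the function and types below are hypothetical, not any real crate's API); the async analog would need an `AsyncRead` trait that std doesn't provide:

      ```rust
      use std::io::{self, Read};

      // Hypothetical library code: instead of owning a TcpStream (sync or async),
      // accept any stream the caller provides. The caller picks the transport
      // and, in the async analog, the runtime.
      fn read_status_line<S: Read>(stream: &mut S) -> io::Result<String> {
          let mut line = String::new();
          let mut byte = [0u8; 1];
          loop {
              if stream.read(&mut byte)? == 0 || byte[0] == b'\n' {
                  break;
              }
              if byte[0] != b'\r' {
                  line.push(byte[0] as char);
              }
          }
          Ok(line)
      }

      fn main() {
          // In tests or offline use, any in-memory reader works just as well.
          let mut fake = io::Cursor::new(b"HTTP/1.1 200 OK\r\nHost: x\r\n".to_vec());
          println!("{}", read_status_line(&mut fake).unwrap());
      }
      ```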

  • cryptonector 10 hours ago

    > Well, you don't have async/sync/red/blue anymore, but you now have IO and non-IO functions.

    > However, the coloring problem hasn't really been defeated.

    Well, yes, but if the only way to do I/O were to have an Io instance to do it with, then Io would infect all but pure(ish, non-Io) functions, and calling Io functions would be possible everywhere except in those contexts where you explicitly don't want it to be possible.

    So in a way the color problem is lessened.

    And on top of that you get something like Haskell's IO monad (ok, no monad, but an IO interface). Not too shabby, though you're right of course.

    Next Zig will want monadic interfaces so that functions only have to have one special argument that can then be hidden.

    • throwawaymaths 9 hours ago

      Zig's not really about hiding things but you could put it in an options struct that has defaults unless overridden at compile time.

  • andyferris 12 hours ago

    I think of it this way.

    Given an `io` you can, technically, build another one from it with the same interface.

    For example given an async IO runtime, you could create an `io` object that is blocking (awaits every command eagerly). That's not too special - you can call sync functions from async functions. (But in JavaScript you'd have trouble calling a sync function that relies on `await`s inside, so that's still something).

    Another thing that is interesting is given a blocking posix I/O that also allows for creating processes or threads, you could build in userspace a truly asynchronous `io` object from that blocking one. It wouldn't be as efficient as one based directly on iouring, and it would be old school, but it would basically work.
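
    That userspace construction can be sketched with plain threads and channels (a toy illustration of the idea, not Zig's actual design):

    ```rust
    use std::sync::mpsc;
    use std::thread;

    // A toy "future": run a blocking operation on a helper thread and hand back
    // a handle that can be awaited (here: waited on) later. This is the
    // old-school thread-per-op approach, not io_uring-class efficiency.
    struct Task<T> {
        rx: mpsc::Receiver<T>,
    }

    impl<T: Send + 'static> Task<T> {
        fn spawn<F: FnOnce() -> T + Send + 'static>(f: F) -> Task<T> {
            let (tx, rx) = mpsc::channel();
            thread::spawn(move || {
                let _ = tx.send(f()); // blocking work happens off the caller's thread
            });
            Task { rx }
        }

        fn wait(self) -> T {
            self.rx.recv().unwrap()
        }
    }

    fn main() {
        // Two "async" blocking operations in flight at once.
        let a = Task::spawn(|| {
            thread::sleep(std::time::Duration::from_millis(50));
            1
        });
        let b = Task::spawn(|| 2);
        println!("{}", a.wait() + b.wait()); // prints 3
    }
    ```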

    Going either way (changing `io` to sync or async) the caller doesn't actually care. Yes the caller needs a context, but most modern apps rely on some form of dependency injection. Most well-factored apps would probably benefit from a more refined and domain-specific "environment" (or set of platform effects, perhaps to use the Roc terminology), not Zig's posix-flavoured standard library `io` thing.

    Yes rust achieves this to some extent; you can swap an async runtime for another and your app might still compile and run fine.

    Overall I like this a lot - I am wondering if Richard Feldman managed to convince Andrew Kelley that "platforms" are cool and some ideas were borrowed from Roc?

    • dminik 11 hours ago

      > but most modern apps rely on some form of dependency injection

      Does Zig actually do anything here? If anything, this seems to be anti-Zig, where everything must be explicit.

      • philwelch 9 hours ago

        Passing in your dependencies as function arguments is a form of dependency injection. It is the simplest and thus arguably best form of dependency injection.
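
        A minimal sketch of that idea, using a hypothetical `Clock` dependency in place of Zig's `Io`:

        ```rust
        // Dependency injection by parameter: the function declares what it needs
        // (a clock, here) instead of reaching for a global. Swapping the
        // dependency needs no framework; just pass a different implementation.
        trait Clock {
            fn now_millis(&self) -> u64;
        }

        // A deterministic implementation, handy for tests.
        struct FixedClock(u64);
        impl Clock for FixedClock {
            fn now_millis(&self) -> u64 { self.0 }
        }

        // Analogous to a Zig function taking `io`: the capability is an argument.
        fn is_expired(clock: &dyn Clock, deadline_millis: u64) -> bool {
            clock.now_millis() > deadline_millis
        }

        fn main() {
            let clock = FixedClock(1_000);
            println!("{}", is_expired(&clock, 500));   // true
            println!("{}", is_expired(&clock, 2_000)); // false
        }
        ```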

        • almostgotcaught 2 hours ago

          This is like saying arithmetic is a form of calculus, the simplest form. Ie it reduces the concept (DI) to a meaningless tautology.

  • throwawaymaths 9 hours ago

    > Technically you could pass in a new executor, but is that really what you want?

    why does it have to be new? just use one executor, set it as const in some file, and use that one at every entrypoint that needs io! now your io doesn't propagate downwards.

  • jaredklewis 8 hours ago

    So this is a tangent from the main article, but this comment made me curious and I read the original "What color is Your Function" post.

    It was an interesting read, but I guess I came away confused about why "coloring" functions is a problem. Isn't "coloring" just another form of static typing? By giving the compiler (or interpreter) more meta data about your code, it can help you avoid mistakes. But instead of the usual "first argument is an integer" type meta data, "coloring" provides useful information like: "this function behaves in this special way" or "this function can be called in these kinds of contexts." Seems reasonable?

    Like the author seems very perturbed that there can be different "colors" of functions, but a function that merely calculates (without any IO or side-effects) is different than one that does perform IO. A function with only synchronous code behaves very differently than one that runs code inside another thread or in a different tick of the event loop. Why is it bad to have functions annotated with this meta data? The functions behave in a fundamentally different way whether you give them special annotations/syntax or not. Shouldn't different things look different?

    He mentions 2015 era Java as being ok, but as someone that’s written a lot of multithreaded Java code, it’s easy to mess up and people spam the “synchronized” keyword/“color” everywhere as a result. I don’t feel the lack of colors in Java makes it particularly intuitive or conceptually simpler.

    • dminik 2 hours ago

      Yes, the main character of that article really is mostly JavaScript. The main issue there is that some things must be async, and that doesn't mesh well with things that can't be.

      If you're writing a game and you need to render a new enemy, you might rather take the performance hit of blocking than get shot by an invisible enemy because the model can only be loaded async.

      But even the article acknowledges that various languages tackle this problem better. Zig does a good job, but claiming it's been defeated completely doesn't really fly for me.

    • ezst 2 hours ago

      I believe the point is less about "coloring" not having value as a type-system feature, and more about its bad ergonomics, and its viral nature in particular.

    • dwattttt 3 hours ago

      > Isn't "coloring" just another form of static typing?

      In a very direct way. Another example: in languages that don't let you ignore errors, changing a function from infallible to fallible is a breaking change, a la "it's another colour".

      I'm glad it is: if a function I call can suddenly fail, at the very least I want to know that it can, even if the only thing I do is ignore it (visibly).

    • com2kid 6 hours ago

      > Isn't "coloring" just another form of static typing?

      Yes, and so is declaring what exceptions a function can throw (checked exceptions in Java).

      > Why is it bad to have functions annotated with this meta data? The functions behave in a fundamentally different way whether you give them special annotations/syntax or not. Shouldn't different things look different?

      It really isn't a problem. The article makes people think they've discovered some clever gotcha when they first read it, but IMHO people who sit down for a bit and think through the issue come to the same conclusion you have - Function coloring isn't a problem in practice.

      • kristoff_it 4 hours ago

        > but IMHO people who sit down for a bit and think through the issue come to the same conclusion you have - Function coloring isn't a problem in practice.

        I dunno man, have you seen people complain about async virality in Rust being annoying? Have you ever tried to read a backtrace from a program that does stackless coroutines (it's not fun)? Have you seen people do basically duplicate work to maintain a blocking and an async version of the same networking library?

        • pimeys an hour ago

          People do complain, like they do from things like systemd. Then there is us, the silent majority, who just get shit done with these tools.

          I respect what zig has done here, and I will want to try it out when it stabilizes. But Rust async is just fine.

    • raincole 6 hours ago

      > It was an interesting read, but I guess I came away confused about why "coloring" functions is a problem. Isn't "coloring" just another form of static typing?

      It is. Function coloring is static typing.

      But people never ever agree on what to put in typing system. For example, Java's checked exceptions are a form of typing... and everyone hates them.

      Anyway it's always like that. Some people find async painful and say fuck it I'm going to manage threads manually. In the meanwhile another bunch of people work hard to introduce async to their language. Grass is always greener on the other side.

  • nurettin 4 hours ago

    I think the point is that rule 3 doesn't fully apply anymore, and that was the main pain point. You couldn't call a red function from a blue one, even if it didn't actually do IO, without some kind of execution wrapper or waiter. Now you clearly can.

  • nmilo 10 hours ago

    The original “function colouring” blogpost has done irreparable damage to PL discussions because it’s such a stupid concept to begin with. Of course I want async functions to be “coloured” differently; they do different things! How else is a “normal” function supposed to call a function that gives you a result later? Obviously you want to be forced to say what to do with the result: await it, ignore it, .then() it in JS terms, etc. These are important decisions that you can’t just ignore because they’re “painful”.

    • yxhuvud 3 hours ago

      There is nothing obvious around that - it is driven by what abstractions the language provides related to concurrency, and with different choices you will end needing different ways to interact with it.

      So yes, given how the language designers of C# and JavaScript chose to implement concurrency and the APIs around it, coloring is necessary. But it is very much implementation-driven, and with other concurrency models, other approaches that don't involve keywords can make sense. So when people complain about function coloring, they are really complaining about the concurrency model a language chose.

Cloudef 19 minutes ago

I like the IO interface simply for the fact that it would allow me to create language level vfs

do_not_redeem 12 hours ago

I'm generally a fan of Zig, but it's a little sad seeing them go all in on green threads (aka fibers, aka stackful coroutines). Rust got rid of their Runtime trait (the rough equivalent of Zig's Io) before 1.0 because it performed badly. Languages and OS's have had to learn this lesson the hard way over and over again:

https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p13...

> While fibers may have looked like an attractive approach to write scalable concurrent code in the 90s, the experience of using fibers, the advances in operating systems, hardware and compiler technology (stackless coroutines), made them no longer a recommended facility.

If they go through with this, Zig will probably top out at "only as fast as Go", instead of being a true performance competitor. I at least hope the old std.fs sticks around for cases where performance matters.

  • mlugg 12 hours ago

    I'm not sure how you got the perception that we're going "all in" on green threads, given that the article in OP explicitly mentions that we're hoping to have an implementation based on stackless coroutines, based on this Zig language proposal: https://github.com/ziglang/zig/issues/23446

    Performance matters; we're not planning to forget that. If fibers turn out to have unacceptable performance characteristics, then they won't become a widely used implementation. Nothing discussed in this article precludes stackless coroutines from backing the "general purpose" `Io` implementation if that turns out to be the best approach.

    • ksec 7 hours ago

      That is lovely to hear. I think the general consensus is that not a single programming language has done async right, so people are a little sceptical. But Andrew and the team so far seem to have the do-it-right mentality, so I guess people should be a little more optimistic.

      Can't wait for 0.15, coming out soon.

      • krior an hour ago

        I would argue that Java's async is pretty nifty, especially now that some of the rough edges have been sanded off.

    • do_not_redeem 11 hours ago

      Does the BDFL want this though, or is it just one person's opinion that it might be nice? Given how he has been aggressively pruning proposals, I don't put any hope in them anymore unless I see some kind of positive signal from him directly.

      e.g. I'd feel a lot more confident if he had made the coroutine language feature a hard dependency of the writergate refactor.

      • kristoff_it 11 hours ago

        mlugg is in the Zig core team

        • do_not_redeem 11 hours ago

          I'm aware, but Zig isn't a democracy where the core team votes, right? Has Andrew actually expressed that he wants the proposal? Without that we're left with scraps like this commit message where he seems ambivalent. https://github.com/ziglang/zig/commit/d6c90ceb04f8eda7c6b711...

          Andrew, I know you read these threads sometimes, give us a sign so I can go down the mountain with my stone tablets and tell the people whether we'll have coroutines

          • mlugg 11 hours ago

            We don't know whether or not we'll have stackless coroutines; it's possible that we hit design problems we didn't foresee. However, at this moment, the general consensus is that we are interested in pursuing stackless coroutines.

            While Andrew has the final say, as Loris points out, we always work to reach a consensus internally. The article lists this as an implementation that will probably exist, because we agree that it probably will; nobody is promising it, because we also agree that it isn't guaranteed.

            Also, bear in mind that even if stackless coroutines don't make it into Zig, you can always use a single-threaded blocking implementation of `Io`, so you need not be negatively affected by any potential downsides to fibers either way.

            This new `Io` approach has made it strictly more likely than it previously was that stackless coroutines become a part of Zig's final design.

            • comex 7 hours ago

              But how will that actually work? Your stackless coroutines proposal talks about explicit primitives for defining a coroutine. But what about a function that's not designed for any particular implementation strategy - it just takes an Io and passes it on to some other functions? Will the compiler have a way to compile it as either sync or async, like apparently it did before? It would have to, if you want to avoid function colors. But your proposal doesn't explain anything about that.

              Disclaimer: I'm not actually a Zig user, but I am very interested in the design space.

              • mlugg 2 hours ago

                Right, the proposal doesn't discuss the implementation details -- I do apologise if that made it seem a little hand-wavey. I opted not to discuss them there, because they're similar-ish to the way we lowered stackless async in its stage1 implementation, and hence not massively interesting to discuss.

                The idea is that, yes, the compiler will infer whether or not a function is async (in the stackless async sense) based on whether it has any "suspension point", where a suspension point is either:

                * Usage of `@asyncSuspend`

                * A call to another async function

                Calls through function pointers (where we typically wouldn't know what we're calling, and hence don't know whether or not it's async!) are handled by a new language feature which has already been accepted; see a comment I left a moment ago [1] for details on that.

                If the compiler infers a function to be async, it will lower it differently; with each suspension point becoming a boundary where any stack-local state is saved to the async frame, as well as an integer indicating where we are in the function, and we jump to different code to be resumed once it finishes. The details of this depend on specifics of the proposal (which I'm planning to change soon) and sometimes melt my brain a little, so I'll leave them unexplained for now, but can probably elaborate on them in the issue thread at some point.
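
                The lowering described here (locals that live across a suspension point move into a frame, and a discriminant records where to resume) can be hand-simulated to make it concrete; this is an illustration of the general technique only, not Zig's actual codegen:

                ```rust
                // State that survives the suspension point lives in the frame;
                // the enum discriminant is the "integer indicating where we are".
                enum Frame {
                    Start { a: u32, b: u32 },
                    AfterFirstSuspend { partial: u32, b: u32 },
                    Done,
                }

                enum Step {
                    Suspended,
                    Finished(u32),
                }

                fn resume(frame: &mut Frame) -> Step {
                    match std::mem::replace(frame, Frame::Done) {
                        Frame::Start { a, b } => {
                            let partial = a * 10; // work before the suspension point
                            *frame = Frame::AfterFirstSuspend { partial, b };
                            Step::Suspended
                        }
                        Frame::AfterFirstSuspend { partial, b } => {
                            Step::Finished(partial + b) // work after resuming
                        }
                        Frame::Done => panic!("resumed a completed frame"),
                    }
                }

                fn main() {
                    let mut frame = Frame::Start { a: 4, b: 2 };
                    assert!(matches!(resume(&mut frame), Step::Suspended));
                    if let Step::Finished(v) = resume(&mut frame) {
                        println!("{v}"); // prints 42
                    }
                }
                ```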

                Of course, this analysis of whether a function is async is a little bit awkward, because it is a whole-program analysis; a change in a leaf function in a little file in a random helper module could introduce asynchronicity which propagates all the way up to your `pub fn main`. As such, we'll probably have different strategies for this inference in the compiler depending on the release mode:

                * In Debug mode, it may be a reasonable strategy to just assume that (almost) all functions are asynchronous (it's safe to lower a synchronous function as asynchronous, just not vice versa). The overhead introduced by the async lowering will probably be fairly minimal in the context of a Debug build, and this will speed up build times by allowing functions to be sent straight to the code generator (like they are today) without having to wait for other functions to be analyzed (and without potentially having to codegen again later if we "guessed wrong").

                * In Release[Fast,Small,Safe] mode, we might hold back code generation until we know for sure, based on the parts of the call graph we have analyzed, whether or not a function is async. Vtables might be a bit of a problem here, since we don't know for sure that a vtable call is not async until we've finished literally all semantic analysis. Perhaps we'll make a guess about whether such functions are async and re-do codegen later if that guess was wrong. Or, in the worst case... perhaps we'll literally just defer all codegen until semantic analysis completes! After all, it's a release build, so you're going to be waiting a while for optimizations anyway; you won't mind an extra couple of seconds on delayed codegen.

                [1]: https://news.ycombinator.com/item?id=44549131

          • kristoff_it 11 hours ago

            we do build internal consensus before publishing articles like this one, or doing other public communication.

  • nsm 7 hours ago

    I’m confused about the assertion that green threads perform badly. 3 of the top platforms for high concurrency servers use or plan to use green threads (Go, Erlang, Java). My understanding was that green threads have limitations with C FFI which is why lower level languages don’t use them (Rust). Rust may also have performance concerns since it has other constraints to deal with.

    • yxhuvud 3 hours ago

      Green threads have issues with C FFI mostly due to not being able to preempt execution, when the C thing is doing something that blocks. This is a problem when you have one global pool of threads that execute everything. To get around it you essentially need to set up a dedicated thread pool to handle those c calls.

      Which may be fine - Go doesn't let the user create thread pools directly, but it does create one under the hood for FFI interaction.

  • dundarious 12 hours ago

    It's hardly "all-in" if it is merely one choice of many, and the choice is made in the executable not in the library code.

    • do_not_redeem 12 hours ago

      I have definitely gotten the impression that green threads will be the favored implementation, from listening to core team members and hanging around the discord. Stackless coroutines don't even exist in the language currently.

      • andyferris 11 hours ago

        In the 2026 roadmap talk Andrew Kelley spoke of the fact that stackless coroutines with iouring is the end goal here (but the requires an orthogonal improvement in the compiler for inlining that data to the stack where possible).

        • do_not_redeem 11 hours ago

          Do you have the timestamp? I watched that video when it came out and don't remember hearing it.

      • dundarious 11 hours ago

        What does "favored" mean if event loop and direct blocking are relatively trivial and provided also/ If I can trivially use them, what do I care what Andrew or someone in core thinks? The control is all mine, and near zero cost (potential vtable indirection).

        And would Rust be "all-in" if tokio was in std, so you could use its tasks everywhere? That would be a very similar level of "all-in" to Zig's current plan, but with a seemingly better API.

        I understand the benefit of not being in std, but really not a fundamental issue, IMO.

      • geodel 8 hours ago

        > Stackless coroutines don't even exist in the language currently.

        And green threads exist in the language?

  • andyferris 12 hours ago

    It actually has much the same benefits of Rust removing green threads and replacing them with a generic async runtime.

    The point here is that "async stuff is IO stuff is async stuff". So rather than thinking of having pluggable async runtimes (tokio, etc) Zig is going with pluggable IO runtimes (which is kinda the equivalent of "which subset of libc do you want to use?").

    But in both moves the idea is to remove the runtime out of the language and into userspace, while still providing a common pluggable interface so everyone shares some common ground.

  • flohofwoe 5 hours ago

    > I'm generally a fan of Zig, but it's a little sad seeing them go all in on green threads

    Read the article, you can use whatever approach you want by writing an implementation for the IO interface, green threading is just one of them.

eestrada 7 hours ago

Although I'm not wild about the new `io` parameter popping up everywhere, I love the fact that it allows multiple implementations (thread based, fiber based, etc.) and avoids forcing the user to know and/or care about the implementation, much like the Allocator interface.

Overall, I think it's a win. Especially if there is a stdlib implementation that is a no overhead, bogstock, synchronous, blocking io implementation. It follows the "don't pay for things you don't use" attitude of the rest of zig.

  • ozgrakkurt 5 hours ago

    Isn’t “don’t pay for what you don’t use” a myth? Somebody else will use it unless you are a very small team with discipline, and then you will pay for it.

    Even just passing around an “io” is more work than simply calling io functions where you want them.

henrikl 11 hours ago

Seeing a systems language like Zig require runtime polymorphism for something as common as standard IO operations seems off to me -- why force that runtime overhead on everyone when the concrete IO implementation could be known statically in almost all practical cases?

  • nu11ptr 10 hours ago

    I/O strikes me as one place where dynamic dispatch overhead would likely be negligible in practice. Obviously it depends on the I/O target and would need to be measured, but they don't call them "I/O bound" (as opposed to "CPU bound") programs for no reason.

  • throwawaymaths 8 hours ago

    > why force that runtime overhead on everyone

    pretty sure the intent is for systems that only use one io to have a compiler optimization that elides the cost of double indirection... but also, you're doing IO! so usually something else is the bottleneck, one extra indirection is likely to be peanuts.

  • do_not_redeem 11 hours ago

    I think it's just the Zig philosophy to care more about binary size than speed. Allocators have the same tradeoff, ArrayListUnmanaged is not generic over the allocator, so every allocation uses dynamic dispatch. In practice the overhead of allocating or writing a file will dwarf the overhead of an indirect call. Can't argue with those binary sizes.

    (And before anyone mentions it, devirtualization is a myth, sorry)

    • kristoff_it 11 hours ago

      > (And before anyone mentions it, devirtualization is a myth, sorry)

      In Zig it's going to be a language feature, thanks to its single unit compilation model.

      https://github.com/ziglang/zig/issues/23367

      • do_not_redeem 11 hours ago

        Wouldn't this only work if there's only one implementation throughout the entire compilation unit? If you use 2 allocators in your app, your restricted function type has 2 possible callees for each entry, and you're back to the same problem.

        • Zambyte 10 hours ago

          > A side effect of proposal #23367, which is needed for determining upper bound stack size, is guaranteed de-virtualization when there is only one Io implementation being used (also in debug builds!).

          > In the less common case when a program instantiates more than one Io implementation, virtual calls done through the Io interface will not be de-virtualized, as that would imply doubling the amount of machine code generated, creating massive code bloat.

          From the article

          • yxhuvud 3 hours ago

            I wonder how massive it actually would be. I'm guessing it really wouldn't be all that massive in practice even if it of course is easy to create massive examples using ways people typically don't write code.

        • thrwyexecbrain 3 hours ago

          Having a limited number of known callees is already better than a virtual function (unrestricted function pointer). A compiler in theory could devirtualize every two-possible-callee callsite into `if (function_pointer == callee1) callee1() else callee2()` which then can be inlined at compile time or branch-predicted at runtime.

          In any case, if you have two different implementations of something then you have to switch between them somewhere -- either at compile-time or link-time or load-time or run-time (or jit-time). The trick is to find an acceptable compromise of performance, (machine)code-bloat and API-simplicity.
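
          That guarded-devirtualization idea can be shown concretely (a hand-written sketch of what a compiler might emit, not output from any actual compiler):

          ```rust
          // When a call site has a proven callee set of {double, square},
          // compare-and-branch replaces the opaque indirect call, so each
          // arm becomes a direct call that can be inlined or predicted.
          fn double(x: u32) -> u32 { x * 2 }
          fn square(x: u32) -> u32 { x * x }

          fn call_devirtualized(f: fn(u32) -> u32, x: u32) -> u32 {
              if f == double as fn(u32) -> u32 {
                  double(x) // direct call: inlinable
              } else {
                  square(x) // only other possible target
              }
          }

          fn main() {
              println!("{}", call_devirtualized(double, 7)); // 14
              println!("{}", call_devirtualized(square, 7)); // 49
          }
          ```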

        • throwawaymaths 8 hours ago

            > Wouldn't this only work if there's only one implementation throughout the entire compilation unit

          in practice how often are people using more than one io in a program?

          • latch 7 hours ago

            I think having a thread pool on top of some evented IO isn't _that_ uncommon.

            You might have a thread pool doing some very specific thing. You can do your own thread pool which won't use the Io interface. But if one of the tasks in the thread pool wanted to read a file, I guess you'd have to pass in the blocking Io implementation.

  • ozgrakkurt 5 hours ago

    Runtime polymorphism isn’t something inherently bad.

    It's bad if you're introducing branching in a tight loop, or preventing the compiler from inlining things it would otherwise inline, and other similar cases, maybe?

wucke13 4 hours ago

Is this, in effect, introducing algebraic effects? E.g. the io passed in is an effect handler, and it's the effect handler's choice whether to perform stack switching (or some other means of non-blocking waiting) to enable asynchronicity?

  • runeks 3 hours ago

    In my view, algebraic effects enable specifying different kinds of effects (with different interpretations) — e.g. read a file, run DB query, network access — as opposed to just a single 'Io' effect that allows everything.

sevensor 10 hours ago

> io.async expresses asynchronicity (the possibility for operations to happen out of order and still be correct) and it does not request concurrency, which in this case is necessary for the code to work correctly.

This is the key point for me. Regardless of whether you’re under an async event loop, you can specify that the order of your io calls does not imply sequencing. Brilliant. Separate what async means from what the io calls do.

anonymoushn 2 hours ago

I think this design is a regression from the previous design, in which you could use compile time introspection to check whether things are actually async (calling convention) or not.

Additionally, I don't necessarily want to delegate the management of the memory backing the futures to an Io, or pass around a blob of syscalls and an associated runtime, which accesses everything via a vtable. I would prefer to have these things be compile time generic only.

lenkite 2 hours ago

Love the no function coloring solution! I am so looking forward to Zig 1.0. Finally, a system programming language that I can actually read and understand without putting in heavy labor. Hell, I could fully follow this blog post, without actually knowing anything much about Zig. Broke my head on async Rust several times before throwing in the towel.

n42 13 hours ago

This is very well written, and very exciting! I especially love the implications for WebAssembly -- WASI in userspace? Bring your own IO? Why not both!

noelwelsh 5 hours ago

Ok, they are implementing an effect system. Is there any acknowledgement that they are going down an established path?

phplovesong 5 hours ago

I wish Zig had not done async/await. CPS (like you have in Go) is way, way better, and is lower level, making it possible to build your own "async/await" if you really want to.

  • flohofwoe 5 hours ago

    Read the article, the new Zig async/await interface doesn't imply the typical async/await state-machine code transformation. You can write a simple blocking runtime, or a green-thread implementation, or a thread-pool, or the state-machine approach via stackless coroutines (but AFAIK this needs a couple of language builtins which then must be implemented in an IO implementation).

  • osa1 5 hours ago

    By CPS do you mean lightweight threads + meeting point channels? (i.e. both the reader and writer get blocked until they meet at the read/write call) Or something else?

    Why is CPS better and lower level than async/await?

    • burnt-resistor 4 hours ago

      Because it allows multiple topologies of producers and consumers.

      • osa1 4 hours ago

        No idea what that means.. Do you have a concrete example of what CPS allows and async/await doesn't?

Yoric 3 hours ago

Interesting. This is a bit reminiscent of how OCaml handles async these days.

aatd86 5 hours ago

So is Zig becoming a type AND effect system?

gavinhoward 7 hours ago

As the author of a semi-famous post about how Zig has function colors [1], I decided to read up on this.

I see that blocking I/O is an option:

> The most basic implementation of `Io` is one that maps to blocking I/O operations.

So far, so good, but blocking I/O is not async.

There is a thread pool that uses blocking I/O. Still good so far, but blocking I/O is still not async.

Then there's green threads:

> This implementation uses `io_uring` on Linux and similar APIs on other OSs for performing I/O combined with a thread pool. The key difference is that in this implementation OS threads will juggle multiple async tasks in the form of green threads.

Okay, they went the Go route on this one. Still (sort of) not async, but there is an important limitation:

> This implementation requires having the ability to perform stack swapping on the target platform, meaning that it will not support WASM, for example.

But still no function colors, right?

Unfortunately not:

> This implementation [stackless coroutines] won’t be available immediately like the previous ones because it depends on reintroducing a special function calling convention and rewriting function bodies into state machines that don’t require an explicit stack to run.

(Emphasis added.)

And the function colors appear again.

Now, to be fair, since there are multiple implementation options, you can avoid function colors, especially since `Io` is a value. But those options are:

* Use blocking I/O.

* Use threads with blocking I/O.

* Use green threads, which Rust removed [2] for good reasons [3]. It only works in Go because of the garbage collector.

In short, the real options are:

* Block (not async).

* Use green threads (with their problems).

* Function colors.

It doesn't appear that the function colors problem has been defeated. Also, it appears to me that the Zig team decided to include every concurrency technique in the hope that it would appear innovative.

[1]: https://gavinhoward.com/2022/04/i-believe-zig-has-function-c...

[2]: https://github.com/aturon/rfcs/blob/remove-runtime/active/00...

[3]: https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p13...

  • ozgrakkurt 5 hours ago

    Their bet seems to be that they can transparently implement real async inside an IO implementation using compiler magic. But then it means if you use that IO instance with the magic then your function gets transformed into a state machine?

    Then this whole thing is useless for implementing cooperative scheduling async like in rust?

    • flohofwoe 4 hours ago

      > But then it means if you use that IO instance with the magic then your function gets transformed into a state machine?

      This is essentially how the old async/await implementation in Zig already worked. The same function got the state-machine treatment if it was called in an async context; otherwise it was compiled as a 'regular' sequential function.

      I.e. at runtime there may be two versions of a function, but not in the code base. I'm not sure how that same idea would be implemented with the new IO interface, but since Zig strictly uses a single-compilation-unit model, the compiler might be able to trace the usage of a specific IO implementation through the control flow?

    • yxhuvud 3 hours ago

      No, it just means the cooperative scheduler needs to provide an io implementation that works with the rest of the scheduler.

  • mlugg 2 hours ago

    > it depends on reintroducing a special function calling convention

    This is an internal implementation detail rather than a fact which is usually exposed to the user. This is essentially just explaining that the Zig compiler needs to figure out which functions are async and lower them differently.

    We do have an explicit calling convention, `CallingConvention.async`. This was necessary in the old implementation of async functions in order to make runtime function pointer calls work; the idea was that you would cast your `fn () void` to a `fn () callconv(.async) void`, and then you could call the resulting `*const fn () callconv(.async) void` at runtime with the `@asyncCall` builtin function. This was one of the biggest flaws in the design; you could argue that it introduced a form of coloring, but in practice it just made vtables incredibly undesirable to use, because (since nobody was actually doing the `@asyncCall` machinery in their vtable implementations) they effectively just didn't support async.

    We're solving this with a new language feature [0]. The idea here is that when you have a virtual function -- for a simple example, let's say `alloc: *const fn (usize) ?[*]u8` -- you instead give it a "restricted function pointer type", e.g. `const AllocFn = @Restricted(*const fn (usize) ?[*]u8);` with `alloc: AllocFn`. The magic bit is that the compiler will track the full set of comptime-known function pointers which are coerced to `AllocFn`, so that it can know the full set of possible `alloc` functions; so, when a call to one is encountered, it knows whether or not the callee is an async function (in the "stackless async" sense). Even if some `alloc` implementations are async and some are not, the compiler can literally lower `vtable.alloc(123)` to `switch (vtable.alloc) { impl1 => impl1(123), impl2 => impl2(123), ... }`; that is, it can look at the pointer, and determine from that whether it needs to dispatch a synchronous or async call.
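
    For intuition, the lowering I'm describing looks roughly like this C sketch (all names hypothetical; the actual `@Restricted` semantics are still a proposal): because the full set of pointers that ever coerce to the restricted type is known at compile time, an indirect call becomes a comparison chain of direct calls, and each arm can use whatever internal convention its callee needs.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for two alloc implementations in the
       restricted set; in Zig, one might be sync and one stackless-async. */
    static char pool_a[64], pool_b[64];
    static char *alloc_a(size_t n) { return n <= sizeof pool_a ? pool_a : NULL; }
    static char *alloc_b(size_t n) { return n <= sizeof pool_b ? pool_b : NULL; }

    typedef char *(*alloc_fn)(size_t);

    /* Sketched lowering of `vtable.alloc(n)`: dispatch over the known
       pointer set, so each arm is a direct call with a known convention. */
    static char *call_alloc(alloc_fn f, size_t n) {
        if (f == alloc_a) return alloc_a(n); /* plain synchronous call */
        if (f == alloc_b) return alloc_b(n); /* could be an async dispatch */
        return NULL; /* unreachable: the restricted type admits no other pointers */
    }

    int main(void) {
        assert(call_alloc(alloc_a, 16) == pool_a);
        assert(call_alloc(alloc_b, 16) == pool_b);
        return 0;
    }
    ```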

    The end goal is that most function pointers in Zig should be used as restricted function pointers. We'll probably keep normal function pointers around, but they ideally won't be used at all often. If normal function pointers are kept, we might keep `CallingConvention.async` around, giving a way to call them as async functions if you really want to; but to be honest, my personal opinion is that we probably shouldn't do that. We end up with the constraint that unrestricted pointers to functions where the compiler has inferred the function as async (in a stackless sense) cannot become runtime-known, as that would lead to the compiler losing track of the calling convention it is using internally. This would be a very rare case provided we adequately encourage restricted function pointers. Hell, perhaps we'd just ban all unrestricted default-callconv function pointers from becoming runtime-known.

    Note also that stackless coroutines do come with a couple of inherent limitations: in particular, they don't play nicely with FFI (you can't suspend across an FFI boundary; in other words, a function with a well-defined calling convention like the C calling convention is not allowed to be inferred as async). This limitation seems perfectly acceptable, and yet I'm very confident that it will impact significantly more code than the calling convention thing might.

    TL;DR: depending on where the design ends up, the "calling convention" mentioned is either entirely, or almost entirely, just an implementation detail. Even in the "almost entirely" case, it will be exceptionally rare for anyone to write code which could be affected by it, to the point that I don't think it's a case worth seriously worrying about unless it proves itself to actually be an issue in practice.

    [0]: https://github.com/ziglang/zig/issues/23367

logicchains 2 hours ago

Does this mean that as a side effect, it'll now be possible to enforce functions are pure/deterministic in Zig by not passing in an Io?

  • mlugg an hour ago

    Not quite:

    * Global variables still exist and can be stored to / loaded from by any code

    * Only convention stops a function from constructing its own `Io`

    * Only convention stops a function from reaching directly into low-level primitives (e.g. syscalls or libc FFI)

    However, in practice, we've found that such conventions tend to be fairly well-respected in most Zig code. I anticipate `Io` being no different. So, if you see a function which doesn't take `Io`, you can be pretty confident (particularly if it's in a somewhat reputable codebase!) that it's not interacting with the system (e.g. doing filesystem accesses, opening sockets, sleeping the thread).
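
    The convention reads like capability passing. A minimal C sketch of the idea (the `Io` struct and both functions are hypothetical illustrations, not Zig's actual API): a function that takes the handle may touch the system; one that doesn't is, by convention, pure with respect to it.

    ```c
    #include <assert.h>
    #include <string.h>

    /* Hypothetical handle standing in for Zig's `Io` value. */
    typedef struct Io { int dummy; } Io;

    /* Takes Io: allowed (by convention) to touch the outside world. */
    static int read_config(Io *io, char *out, size_t cap) {
        (void)io;
        /* real file I/O would go through io here; stubbed for the sketch */
        strncpy(out, "debug=1", cap);
        return 0;
    }

    /* No Io parameter: by convention, no system interaction. */
    static int parse_level(const char *cfg) {
        return strstr(cfg, "debug=1") != NULL;
    }

    int main(void) {
        Io io = {0};
        char buf[32];
        read_config(&io, buf, sizeof buf);
        assert(parse_level(buf) == 1);
        return 0;
    }
    ```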

didibus 11 hours ago

I don't know Zig, but wouldn't such a change be a major breaking change where all prior Zig code doing Io wouldn't work anymore if upgraded?

  • flohofwoe 5 hours ago

    Yeah, but why is that a problem? Zig doesn't promise any stability before 1.0, and it's not like we don't have to change code in other language ecosystems frequently for all sorts of reasons (e.g. bumping a dependency version, or a new minor C/C++ compiler version introducing new warnings).

  • xxpor 11 hours ago

    Zig's not at 1.0 yet, so there's no stability guarantee at this point.

  • TUSF 9 hours ago

    Breaking changes are just another Tuesday for Zig.

wordofx 3 hours ago

Damn that’s some ugly async syntax.

  • sgt 3 hours ago

    Not Zig philosophy to hide things away to make it prettier.

    • wordofx 2 hours ago

      You don’t need to hide it.