[ $davids.sh ] — david shekunts blog

# [ $davids.sh ] · message #181

🥧 Bun 1.0 is out 🥧

Well, it's out now

#js #nodejs #typescript #bun

  • @ [ $davids.sh ] · # 562

    And seriously, it looks nice, but it doesn't solve important problems:

    Node.js's problem isn't its built-in API (it's already clear which libraries are worth using and which aren't), nor its compilation speed (that's being optimized), nor package download speed (that happens once a month), nor even execution speed (any CPU-intensive task needs to be offloaded to a separate process anyway).

    Node.js's problem is that to write a console utility in it, you have to package Node.js itself with it, and the whole thing will be 200MB.

    Node.js's problem is that we've had various worker options for a long time, but there are no convenient libraries and no proper mechanism for sharing memory with them without copying and serialization (meaning, we're standing at the door of parallelization and can't get in).

    Node.js's problem is that we have TypeScript, but no one wants to add a strict mode that forbids importing plain JS, add numeric type variants (int64, float32, etc.), and start compiling TS to native binaries like Go.

    The problem is the lack of a unified linter, like in Go.

    The problem is the ability to throw anything.

    Bun wasn't intended to solve these problems; rather, it wanted to provide full backward compatibility plus various features and optimizations.

    If this catches on, I'll use it; if not, that's a shame. I hope existing solutions will try to compete with it and improve too (remember how npm started moving after Yarn appeared).

    What I'm really waiting for is when the next engine offers TS developers only TS backward compatibility, compilation, parallelism, and finally allows us to sail away from this "just a browser on the server" island to a "slower but more flexible alternative to Go."

  • @ Arsen IT-K Arakelyan · # 563

    Bro, your article body is in the comment, is that how it's supposed to be?

  • @ [ $davids.sh ] · # 564

    Yes, yes, this is a reference to the meme "well, he died and he died")

  • @ [ $davids.sh ] · # 565

    Although, I was thinking... Actually, I'd try this as a new format, the main feature of which would be to immediately open comments and bring people closer to leaving a comment + you can write much more + the feed will be cleaner, with only headlines or a brief caption

    Essentially, like Twitter (or whatever it's called these days)

  • @ Arsen IT-K Arakelyan · # 566

    By the way, regarding a convenient way to share data between threads: Node.js has a built-in API called SharedArrayBuffer. This is precisely the mechanism that lets you put data in an array so that it's accessible across all threads.

    But I think if each thread modifies this buffer, we'll get another memory leak, similar to middlewares that constantly change user-provided data at each step of the chain. We might end up with nonsense because the chain of responsibility is violated.

  • @ [ $davids.sh ] · # 567

    Yes, SharedArrayBuffer exists, but it's inconvenient – I'd like to have automatic (de)serialization into this SharedArrayBuffer, and specifically via C bindings, not JS code.

    You're right about threads, but I'm concerned less about memory leaks than about concurrent access, which means we'd have to add mutexes like in C or read/write locks like in Go.

    Again, there's nothing terrible about read/write locks (except for deadlocks, but that can be dealt with), the problem is more that Node.js gives us threads (workers), gives us shared memory (SharedArrayBuffer), but doesn't give us the ability to use standard types with this memory, nor tools for managing concurrent access to this memory.
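    To be fair, the raw primitives do exist (Atomics.compareExchange, Atomics.wait, Atomics.notify); what's missing is the convenient layer on top. A hand-rolled spinlock sketch, with illustrative names, of the kind you'd have to write yourself today:

```javascript
const UNLOCKED = 0;
const LOCKED = 1;

function lock(arr, i) {
  // Atomically swap UNLOCKED -> LOCKED; if another thread holds the lock,
  // sleep until it calls Atomics.notify, then retry.
  while (Atomics.compareExchange(arr, i, UNLOCKED, LOCKED) !== UNLOCKED) {
    Atomics.wait(arr, i, LOCKED);
  }
}

function unlock(arr, i) {
  Atomics.store(arr, i, UNLOCKED);
  Atomics.notify(arr, i, 1); // wake one waiting thread
}

// Single-threaded demonstration; a worker would run the same code
// over the same SharedArrayBuffer.
const arr = new Int32Array(new SharedArrayBuffer(4));
lock(arr, 0);
console.log(Atomics.load(arr, 0)); // 1: lock held
// ...critical section: read/modify shared state safely...
unlock(arr, 0);
console.log(Atomics.load(arr, 0)); // 0: lock released
```

    It works, but compare this with Go's sync.RWMutex, which ships in the standard library; that gap is the point.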

  • @ Arsen IT-K Arakelyan · # 568

    That's true, but honestly it's not used that often; as far as I can tell, it's not a very frequent use case in Node.js development.

  • @ Arsen IT-K Arakelyan · # 569

    Although you know best

  • @ Arsen IT-K Arakelyan · # 570

    I still don't really understand the concepts of mutexes and semaphores.

  • @ Arsen IT-K Arakelyan · # 571

    If I find anything out, it'll be in the chat with Shemsedtinov, he's better at this stuff and probably mentioned it somewhere.

  • @ Arsen IT-K Arakelyan · # 572

    Is controlling concurrent access needed to avoid race conditions?

  • @ [ $davids.sh ] · # 573

    Not used, because there's no convenient way to do it)

    A simple example: if Node.js could run threads based on the number of cores, with utilization of these cores and I/O, and in these threads, run workers with shared memory, then Node.js would be priceless.

    This would mean a multifold speedup in every aspect of the language, while staying backward compatible with the entire existing codebase.

  • @ [ $davids.sh ] · # 574

    Yes, "mutex" is roughly the same as a "read/write lock".

    This means you explicitly say at some point in time, "Okay, give me write/read access, I'll do what I need to do, and then I'll give it back." If write access isn't available right now, you'll wait.

    It could look something like this:


    const shared = new SharedMemory("ping")

    // ... later, somewhere inside a worker
    await shared.writeLock()                // 1. Lock the resource
    const val = shared.value()              // 2. Get the data
    if (val === "ping") shared.set("pong")  // 3. Modify it
    await shared.releaseLock()              // 4. Unlock the resource


    Without the lock, shared could have been modified by another thread between steps 2 and 3. But since I hold a write lock on it, I can be absolutely sure that didn't happen.

  • @ Ivan ITK 🚫 · # 592

    https://snyk.io/blog/node-js-multithreading-worker-threads-pros-cons/ This covers everything about threads in quite some detail)