“With the increased limit of the acceptance queue, and a patched version of wrk, we can now conclude that Swift is a good competitor speed-wise as a web application server.

Memory-wise it blows all the other technologies away, using only 2.5% of the amount of memory that the Java implementation needs, and 10% of Node.js's.”

https://tech.phlux.us/Juice-Sucking-Servers-Part-Trois/

#swift #swiftlang

Juice Sucking Servers, part trois


Phluxus Tech Blog
@finestructure I’m not sure it’s actually truly fair to compare to Java memory usage. But still, this is good stuff!
@mattiem I think it is in this case, because the question is: what size VM do I need to run this workload? That has a massive cost impact.
@finestructure Don’t get me wrong: Java VM tuning is an enormous pain point. You basically cannot run JVM stuff without becoming an expert. But there *is* some optimized setting that could conceivably compete quite well with Swift’s memory usage when handling a similar load. So I think this could be hard to compare. But it’s still great and I vastly prefer it!
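For context, the kind of tuning meant here is heap sizing and collector selection via JVM flags. A minimal sketch (the values and `server.jar` name are placeholders, not recommendations from the benchmark):

```shell
# Cap the heap so the JVM doesn't reserve far more memory than the
# workload needs. -Xms/-Xmx set initial/maximum heap size; the
# collector choice trades memory footprint against pause times.
java -Xms64m -Xmx256m \
     -XX:MaxMetaspaceSize=64m \
     -XX:+UseSerialGC \
     -jar server.jar
```

Without such caps the JVM sizes its heap from the machine's total RAM, which is part of why untuned Java numbers look so large in memory comparisons.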
@finestructure You know, thinking about this more, I could be wrong. I guess the fixed VM cost is already represented in this workload. So I take it back; this rocks.
@mattiem @finestructure RC is conceptually better here *if* the RC is implemented properly, i.e. the code produces no retain cycles. Memory is freed when it isn't needed which is particularly important if you have high scale (which 98% of devs don't have, fwiw).
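A minimal Swift sketch of the retain-cycle point: two objects referencing each other are only freed under ARC if one of the references is `weak` (the `Node` class and its `liveCount` counter are made-up names for illustration):

```swift
// A doubly linked pair where the back-reference is `weak`.
// With two strong references the objects would form a retain
// cycle and never be deallocated; `weak` breaks the cycle.
final class Node {
    static var liveCount = 0      // tracks instances still alive
    var next: Node?               // strong forward reference
    weak var previous: Node?      // weak back-reference: no cycle
    init() { Node.liveCount += 1 }
    deinit { Node.liveCount -= 1 }
}

do {
    let a = Node()
    let b = Node()
    a.next = b
    b.previous = a
}                                 // scope ends: ARC frees both

print(Node.liveCount)             // 0; with a strong `previous` it would be 2
```

This is the "implemented properly" condition: memory is reclaimed the instant the last strong reference goes away, with no collector pause, but one forgotten `weak` and both objects leak for the life of the process.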
I once heard that there have been high-speed trading systems that could trade profitably by predicting the GC cycles of the Java systems at large banks.
@helge @finestructure I saw a few cool presentations from the JVM team at Twitter about GC performance, and let me tell you, they did not share this opinion. But intuitively, RC always made sense to me.
@mattiem @finestructure Render me surprised that the JVM team didn't share that opinion! 🙂 FWIW I'm a big GC fan.
The bigger problem w/ RC is that the application has to be memory-correct, i.e. not leak, and that is quite hard. It's a big issue w/ NIO-based things: isolated failure points (whether OOMs or fatalErrors) bring down the whole stack at once (conceptually tens of thousands of connections!).
So you either go ownership (hard+fast) or GC (easy+mem).
@mattiem @finestructure (or you use Apache, which is a protocol aware host and can recycle subprocesses cleanly if anomalies are detected, or even just after every 1k requests, because it's cheap to fork 🙃).
@helge @mattiem @finestructure I implemented throw-everything GC in a search engine in the ‘90s. Not as brutal as killing the whole process, but the idea was that no memory allocated during a request was needed after the request returned. I didn’t know then that NeXT had implemented that and called it autorelease.
@ahltorp @mattiem @finestructure Autorelease has a different purpose; it isn't for grouping frees. It exists mainly for API reasons, so that a method can return an "unowned" reference (one with a logical RC of 0).
NeXT actually had bulk free, but that was never really used AFAIK (NXZones/NSZones, hence +allocWithZone:).
An example for alloc, then throw-away everything are Apache pools: https://apr.apache.org/docs/apr/1.5/group__apr__pools.html
Apache Portable Runtime: Memory Pool Functions
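The pool idea above can be sketched in a few lines of Swift: allocate freely while handling a request, then release everything in one step when the request completes. This is a toy illustration of the concept, not the APR API; `RequestArena` and `liveBlockCount` are made-up names.

```swift
// A toy request-scoped arena, mimicking the "alloc during the
// request, bulk-free at the end" pattern of Apache's APR pools.
final class RequestArena {
    private var blocks: [UnsafeMutableRawPointer] = []

    var liveBlockCount: Int { blocks.count }

    // Hand out raw memory whose lifetime is tied to the arena.
    func allocate(byteCount: Int) -> UnsafeMutableRawPointer {
        let p = UnsafeMutableRawPointer.allocate(byteCount: byteCount,
                                                 alignment: 16)
        blocks.append(p)
        return p
    }

    // Analogous to apr_pool_destroy: one bulk free for everything.
    func destroy() {
        for p in blocks { p.deallocate() }
        blocks.removeAll()
    }

    deinit { destroy() }   // safety net if destroy() wasn't called
}

// Per-request usage: nothing allocated here survives the request.
let arena = RequestArena()
let scratch = arena.allocate(byteCount: 1024)
scratch.storeBytes(of: 42, as: UInt8.self)
arena.destroy()
```

The appeal is the same as the throw-everything GC described above: per-allocation bookkeeping disappears, and a request can never leak past its own lifetime.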