HPN-SSH use cases and results

I've known for a while that the latency and throughput of SSH over an SSH tunnel aren't great (for example, the ProxyJump scenario), and also that HPN-SSH proposes improvements for this, but I had never looked into it much. Recently I happened to need to connect from Taiwan to an internal network through a jump host in AWS us-east-1...

Gea-Suan Lin's BLOG
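
For context, the ProxyJump setup the post describes looks roughly like the sketch below; the host names and users are placeholders, not the author's actual configuration:

```
# ~/.ssh/config (sketch; host names and users are hypothetical)
Host bastion-use1
    HostName bastion.example.com   # jump host in AWS us-east-1
    User ec2-user

Host internal-*
    # Traffic to internal hosts is tunneled through the bastion; this
    # SSH-over-SSH path is where the latency/throughput penalty shows up
    # and where HPN-SSH's patches aim to help.
    ProxyJump bastion-use1
    User ubuntu
```
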
Microsoft unveils Azure HorizonDB: the new PostgreSQL database “beast” in the cloud

During the Ignite conference, the tech giant unveiled Microsoft Azure HorizonDB, a new cloud PostgreSQL database that promises to redefine the…

TugaTech
brahma-firelight

A blazing-fast, fire-and-forget orchestrator built with Rust and JavaScript, designed for ultra-low-latency task routing, message triggering, and heavyweight logic execution — all without blocking. A native Rust AddOn for NodeJS, BunJS and DenoJS. Latest version: 1.5.16, last published: a month ago. Start using brahma-firelight in your project by running `npm i brahma-firelight`. There are no other projects in the npm registry using brahma-firelight.

npm

An overengineered solution to `sort | uniq -c` with 25x throughput (hist)

https://github.com/noamteyssier/hist-rs

#HackerNews #overengineered #solution #sort #uniq #throughput #hist #GitHub #Rust

GitHub - noamteyssier/hist-rs: An efficient unique-line counter (25x over `sort | uniq -c`)

An efficient unique-line counter (25x over `sort | uniq -c`) - noamteyssier/hist-rs

GitHub
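
The speedup over `sort | uniq -c` comes mostly from not sorting at all: counting unique lines only needs a single hash-map pass, O(n) instead of O(n log n), and never materializes a sorted copy of the input. A minimal sketch of that idea (not hist-rs's actual implementation; output order is unspecified, unlike the sorted pipeline):

```rust
// Hash-map line counting: one pass over stdin, no sort.
use std::collections::HashMap;
use std::io::{self, BufRead, Write};

fn main() -> io::Result<()> {
    let mut counts: HashMap<String, u64> = HashMap::new();

    // Single O(n) pass; `sort | uniq -c` pays O(n log n) plus a full sorted copy.
    for line in io::stdin().lock().lines() {
        *counts.entry(line?).or_insert(0) += 1;
    }

    let mut out = io::stdout().lock();
    for (line, n) in &counts {
        writeln!(out, "{n:>7} {line}")?;
    }
    Ok(())
}
```
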
Preparing for the .NET 10 GC (DATAS) - .NET Blog

Learn how DATAS in .NET 10 adapts heap size, what changes to expect versus previous Server Garbage Collection (GC) behavior, and how to decide whether to tune or disable it.

.NET Blog
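
If the answer after reading turns out to be "disable it", the relevant knob (based on the .NET 8/9 GC configuration settings, so worth double-checking against the .NET 10 docs) is the dynamic adaptation mode; a minimal runtimeconfig.json sketch:

```
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.DynamicAdaptationMode": 0
    }
  }
}
```

The same switch is exposed as the `DOTNET_GCDynamicAdaptationMode` environment variable (set it to 0) and as the `GarbageCollectionAdaptationMode` project property.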

Storage Tail Latency Matters: The Silent Killer

When we talk about storage performance, we typically think of IOPS and throughput, and at best of average access latency. However, tail latency, the high-percentile end of the latency distribution, is crucial for providing predictable and consistent performance. Tail latency refers to the 95th, 99th, and 99.9th percentiles of the latency distribution. And the 99th percentile marks the latency experienced by the worst 1% of all […]

https://www.simplyblock.io/blog/tail-latency-storage/
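
To make the percentile language concrete: p99 is the value below which 99% of sampled latencies fall, so the slowest 1% of requests sit at or above it. A small sketch of reading p95/p99/p99.9 out of a latency sample using the nearest-rank method (the sample values are made up):

```rust
// Nearest-rank percentiles over a latency sample (microseconds, made-up values).
fn percentile(sorted: &[u64], p: f64) -> u64 {
    // Smallest value such that at least p% of the samples are <= it.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn main() {
    let mut lat_us: Vec<u64> = vec![120, 135, 110, 4_800, 140, 125, 9_000, 130, 150, 115];
    lat_us.sort_unstable();

    for p in [50.0, 95.0, 99.0, 99.9] {
        println!("p{p}: {} us", percentile(&lat_us, p));
    }
    // With ten samples, p99 and p99.9 both land on the single slowest request;
    // real measurements need orders of magnitude more samples per extra "9".
}
```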

🚀 Behold, the magical #LMCache that promises to triple your LLM's #throughput, as if by waving a wand made of #Redis and marketing buzzwords. 🤖✨ But wait, there's more! Experience the thrill of saving milliseconds while drowning in GitHub's relentless onslaught of #features you never asked for. 🤯🙄
https://github.com/LMCache/LMCache #LLM #GitHub #Innovation #HackerNews #ngated
GitHub - LMCache/LMCache: Supercharge Your LLM with the Fastest KV Cache Layer

Supercharge Your LLM with the Fastest KV Cache Layer - LMCache/LMCache

GitHub
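
Snark aside, the idea behind a KV cache layer is straightforward: the attention keys/values computed for a token prefix depend only on that prefix, so when a new request shares a prefix with an earlier one (a common system prompt, say), the engine can reuse the stored KV tensors and only prefill the remaining tokens. A toy sketch of prefix-keyed reuse, purely illustrative and not LMCache's actual API or design:

```rust
use std::collections::HashMap;

/// Stand-in for real per-layer key/value tensors.
type KvBlock = Vec<f32>;

/// Toy prefix-keyed KV store. (LMCache's real design chunks, compresses and
/// can spill to external backends such as Redis; none of that is modeled here.)
struct PrefixKvCache {
    blocks: HashMap<Vec<u32>, KvBlock>,
}

impl PrefixKvCache {
    fn new() -> Self {
        Self { blocks: HashMap::new() }
    }

    /// Longest cached prefix of `tokens`, with its KV data, if any.
    fn longest_prefix(&self, tokens: &[u32]) -> Option<(usize, &KvBlock)> {
        (1..=tokens.len())
            .rev()
            .find_map(|n| self.blocks.get(&tokens[..n].to_vec()).map(|kv| (n, kv)))
    }

    fn insert(&mut self, prefix: &[u32], kv: KvBlock) {
        self.blocks.insert(prefix.to_vec(), kv);
    }
}

fn main() {
    let mut cache = PrefixKvCache::new();
    // Pretend an earlier request already computed KV for this shared prefix.
    cache.insert(&[1, 2, 3, 4], vec![0.0; 16]);

    let request = [1, 2, 3, 4, 9, 9];
    match cache.longest_prefix(&request) {
        // Only the tokens after the cached prefix still need prefill.
        Some((n, _kv)) => println!("reuse KV for {n} tokens, prefill {} more", request.len() - n),
        None => println!("cold request: prefill all {} tokens", request.len()),
    }
}
```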

Feels like every time I try to reduce #memory usage, I accidentally improve #throughput instead. At least THIS time, I also see reduced memory usage, nice!

#swad #coding #c #unusual #issue