Beams.cc introduces a new platform that uses AI to compress and share files. The AI analyzes each file, picks the most effective compression library, and splits large files into small chunks. Give it a try and share your feedback. #AI #Compression #Beams #CongNgheAI #ChiaFile

https://www.reddit.com/r/SideProject/comments/1pj50h6/built_a_new_site_that_uses_ai_to_compress_and/

Beams is a new platform that uses AI to compress and send files. The AI analyzes the data, picks the optimal compression algorithm, and splits large files into small parts. Try it out and give feedback! #AI #Compression #FileSharing #CôngNghệ #MạngLưới #SángTạoViệt

https://www.reddit.com/r/SideProject/comments/1pj50h6/built_a_new_site_that_uses_ai_to_compress_and/

Post-transformer inference: 224× compression of Llama-70B with improved accuracy

https://zenodo.org/records/17873275

#HackerNews #PostTransformer #Inference #Llama70B #Compression #ImprovedAccuracy

Post-Transformer Inference: 224× Compression of Llama-70B with Improved Accuracy

This paper introduces the first verified method to eliminate transformers from inference while preserving, and in many cases improving, downstream accuracy. We show that a frozen 70-billion-parameter Llama-3.3-70B model can be replaced by a 256-dimensional meaning field extracted from seven internal activation layers. A lightweight compressor (AN1) reduces these fields by 224× with an average +1.81 percentage point gain across classification tasks, including +3.25 pp on low-resource RTE (R² = 0.98 inverse-scaling fit, p < 0.01). A 30M-parameter student then learns to regenerate these fields directly from raw text, enabling full transformer-free inference at 60× higher throughput with only 0.35 pp average accuracy loss.

The core insight is that task-aligned semantics in modern transformers occupy a remarkably low-rank manifold. Across layers we observe 72–99 percent of variance in the top one to three dimensions. Once this structure is extracted and learned, the transformer becomes unnecessary. It serves as a one-time sculptor of meaning rather than the permanent home of inference.

This work establishes Field Processing Units (FPUs) as a post-transformer compute primitive that replaces deep matrix multiplication with shallow field operations. All results are averaged over five seeds with statistical significance reported. Ablations isolate the causal contributions of field supervision, geometric regularization, and anchor-layer selection. This Zenodo release provides the complete scientific manuscript and the baseline reference implementation for the AN1 Core system. Proprietary optimizations (AN1-Turbo) have been removed to support independent verification and further research into post-transformer inference.
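The abstract's central claim, that task-aligned activations occupy a low-rank manifold, can be illustrated with a toy SVD experiment on synthetic data. This is an illustration of the low-rank idea only, not the paper's AN1 method; the matrix sizes, rank, and noise level below are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer's activations: 1000 tokens x 512 hidden dims,
# constructed as a rank-3 signal plus small isotropic noise.
tokens, dims, rank = 1000, 512, 3
signal = rng.normal(size=(tokens, rank)) @ rng.normal(size=(rank, dims))
acts = signal + 0.05 * rng.normal(size=(tokens, dims))

# Center, take the SVD; squared singular values give variance per direction.
acts -= acts.mean(axis=0)
s = np.linalg.svd(acts, compute_uv=False)
var = s**2 / np.sum(s**2)

top3 = var[:3].sum()
print(f"variance captured by top 3 of {dims} directions: {top3:.1%}")

# Keeping only those top directions is a 512 -> 3 per-token compression,
# the same flavor of reduction the paper reports (72-99% in 1-3 dims).
```

On genuinely low-rank data the top handful of directions soak up nearly all the variance, which is why projecting to a tiny field can be nearly lossless.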

Zenodo

FerricTDS mkIII by Variety Of Life 🎛️
Tape dynamics sim w/ warm comp, modern tech + vintage vibe. Bass, mids, highs + harmonic exciter.

💻 Win (VST/VST3)
🎁 FREE https://varietyofsound.wordpress.com/2025/12/02/ferrictds-mkiii-released/

#freeplugin #tapeplugin #compression #vst3 #varietyofsound #musicproduction #audioplugin

Abdominal-only compression garments reduce orthostatic tachycardia and improve symptoms in patients with postural orthostatic tachycardia syndrome. - Abstract - Europe PMC

https://europepmc.org/article/MED/41338488

#POTS #compression

enz / unz

UPDATE: added support for zip-compatible symlinks. enz works exactly like zip: by default it stores the linked files, and with the -y switch it stores only the links.

https://github.com/ha1tch/unz

enz and unz are a zip-compatible compressor and decompressor pair that beats zip -9 on source code by 5-10%. Uses smarter pre-processing before DEFLATE.

It's often better than the alternatives for text, source code, structured text files, and markup.

Pure Go. No dependencies beyond stdlib. Output works with standard ZIP tools where possible.

Benchmarks available.
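The repo describes "adaptive BPE methods" as pre-processing before DEFLATE. enz's actual format is its own, but the general idea, greedily substituting frequent byte pairs with unused byte values before handing the stream to DEFLATE, can be sketched like this (illustrative only, not enz's algorithm; the rule count and cutoff are arbitrary):

```python
import zlib
from collections import Counter

def bpe_prepass(data: bytes, max_rules: int = 16):
    """Greedy byte-pair substitution: repeatedly replace the most frequent
    adjacent byte pair with a byte value unused in the data. Returns the
    rewritten stream and the rule table needed to undo it."""
    used = set(data)
    free = [b for b in range(256) if b not in used]
    rules = []  # (substitute_byte, (a, b)) in application order
    for _ in range(min(max_rules, len(free))):
        pairs = Counter(zip(data, data[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 4:  # pair too rare to be worth a rule
            break
        sub = free.pop()
        data = data.replace(bytes([a, b]), bytes([sub]))
        rules.append((sub, (a, b)))
    return data, rules

def bpe_undo(data: bytes, rules):
    # Expand rules in reverse: later rules may reference earlier substitutes.
    for sub, (a, b) in reversed(rules):
        data = data.replace(bytes([sub]), bytes([a, b]))
    return data

text = b"def compress(data):\n    return deflate(data)\n" * 50
pre, rules = bpe_prepass(text)
plain = len(zlib.compress(text, 9))
smart = len(zlib.compress(pre, 9))
print(f"deflate only: {plain} bytes, bpe pre-pass + deflate: {smart} bytes")
```

Whether the pre-pass wins depends heavily on the input; enz's "adaptive" choice of method per file is presumably how it gets consistent gains on source code.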

#golang #foss #compression #enz #unz #zip #pkzip #7zip

GitHub - ha1tch/unz: ZIP-compatible compression with adaptive BPE methods.

ZIP-compatible compression with adaptive BPE methods. - ha1tch/unz

GitHub

So I was working on this:

enz / unz

A ZIP-compatible compressor and decompressor pair that beats zip -9 on source code by 5-10%. Uses smarter pre-processing before DEFLATE.

It's often better than the alternatives for text, source code, structured text files, and markup.

Pure Go. No dependencies beyond stdlib. Output works with standard ZIP tools where possible.

https://github.com/ha1tch/unz

Benchmarks available.

#golang #foss #compression #unz #enz #zip #pkzip #winzip


SSD and especially RAM prices are skyrocketing: https://arstechnica.com/gadgets/2025/11/spiking-memory-prices-mean-that-it-is-once-again-a-horrible-time-to-build-a-pc

These are excellent times for using compression to reduce your storage pressure.

https://www.blosc.org/posts/roofline-analysis-blosc2/

#Blosc2 #Compression #HPC
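Blosc2 is purpose-built for this, but the basic trade, keeping cold buffers compressed in memory and inflating them on access, can be shown with the stdlib alone. A toy sketch, not Blosc2's API; Blosc2 adds chunking, SIMD codecs, and filters on top:

```python
import zlib

class CompressedBlob:
    """Holds a bytes payload zlib-compressed in RAM, inflating on demand.
    A toy stand-in for what chunked compressors like Blosc2 do far faster."""

    def __init__(self, data: bytes, level: int = 6):
        self.raw_size = len(data)
        self._z = zlib.compress(data, level)

    @property
    def stored_size(self) -> int:
        return len(self._z)

    def get(self) -> bytes:
        return zlib.decompress(self._z)

# A repetitive buffer compresses well; incompressible data would not.
buf = b"sensor=23.5,status=OK;" * 100_000
blob = CompressedBlob(buf)
print(f"{blob.raw_size} -> {blob.stored_size} bytes "
      f"({blob.raw_size / blob.stored_size:.0f}x smaller in RAM)")
```

The roofline analysis in the linked post makes the stronger point: with a fast enough codec, decompress-on-access can even beat reading uncompressed data, because less memory traffic crosses the bus.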

For text with a lot of repetition, #bzip3 still blows my mind. 😆

rld@Intrepid:Documents$ for x in cat "gzip -9" "zstd --ultra -22" "xz -9e" "bzip2 -9" bzip3; do $x < weatherlog-2024.txt |wc -c |tr "\n" "\t"; echo "$x"; done
1735300	cat
80423	gzip -9
63275	zstd --ultra -22
53516	xz -9e
52374	bzip2 -9
40645	bzip3
rld@Intrepid:Documents$ echo 1735300/40645 |bc -l
42.69405830975519744125

#Lossless #Compression #LosslessCompression

P.S. times:

real 1.49  zstd --ultra -22
real 0.94  xz -9e
real 0.23  bzip2 -9
real 0.07  gzip -9
real 0.06  bzip3
real 0.00  cat

DANG. 😂
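bzip3 isn't in the Python stdlib, but the shape of the experiment above, one input pushed through several codecs while comparing sizes and times, is easy to reproduce with gzip/bz2/lzma. The input here is a synthetic stand-in for a weather log, so the numbers won't match the session above:

```python
import bz2
import gzip
import lzma
import time

# Synthetic stand-in for a repetitive log like weatherlog-2024.txt.
data = b"2024-01-01 12:00 temp=3.2C wind=NW 12km/h rain=0.0mm\n" * 30_000

codecs = {
    "gzip -9": lambda d: gzip.compress(d, compresslevel=9),
    "bzip2 -9": lambda d: bz2.compress(d, compresslevel=9),
    "xz -9": lambda d: lzma.compress(d, preset=9),
}

sizes = {}
print(f"{'codec':10} {'bytes':>9} {'secs':>6}")
for name, fn in codecs.items():
    t0 = time.perf_counter()
    out = fn(data)
    sizes[name] = len(out)
    print(f"{name:10} {sizes[name]:>9} {time.perf_counter() - t0:>6.2f}")
```

The bzip family's edge on this kind of input comes from its Burrows-Wheeler transform, which groups repeated contexts together before entropy coding, exactly the structure a daily weather log is full of.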