ironArray SLU

@ironArray
4 Followers
3 Following
32 Posts
Compress Better, Compute Bigger, Share Faster
Home Page: https://ironarray.io

RE: https://fosstodon.org/@Blosc2/116481891984444131

The beast is out of its cage! 🧸

Look at the bump in performance that we will see with the next Python-Blosc2 release.

Matrix multiplication has been sped up using blocking, Blosc2 prefilters, and Blosc2's own efficient, multithreaded engine.

Expect between 5x and 6x better speed for matrices with no padding.
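To see what the blocking idea means in practice, here is a hedged, plain-NumPy sketch of a blocked matrix multiplication. This is only an illustration of the technique; the real Python-Blosc2 engine works per compressed chunk, with prefilters and multiple threads:

```python
import numpy as np

def blocked_matmul(a, b, block=64):
    """Multiply a @ b one block at a time (illustration only).

    Operating on cache-sized blocks keeps the working set small,
    which is the same idea the new Blosc2 matmul exploits on
    compressed chunks.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=np.result_type(a, b))
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                out[i:i + block, j:j + block] += (
                    a[i:i + block, p:p + block] @ b[p:p + block, j:j + block]
                )
    return out

rng = np.random.default_rng(0)
a = rng.random((200, 300))
b = rng.random((300, 150))
assert np.allclose(blocked_matmul(a, b), a @ b)
```

The "no padding" caveat in the post matters because blocks that divide the matrix shape evenly avoid ragged edge blocks, which is where the biggest speedups come from.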

Huge shoutout to @luke_shaw_ironarray for making this happen!

We designed the DSL kernels in Blosc2 so that they could be used on all the main platforms, and nowadays this necessarily includes WebAssembly (WASM) in the browser.

We also implemented a new JIT compiler for WASM, specifically designed for Blosc2 DSL kernels.

Look at this online notebook: https://cat2.cloud/demo/roots/@public/examples/mandel-jit-vs-nojit.ipynb
where we run a Mandelbrot DSL kernel with and without JIT. You can even run it yourself by clicking the "Run" button.

Enjoy!

#WASM #Pyodide #HPC

Blosc2 4.1 Release!

We've packed a lot into this minor release: optimised compression and funcs for unicode arrays; cumulative reductions; memory-map support for store containers like `DictStore`; and DSL kernel functionality for faster, compiled, user-defined funcs! 👇

Notebook here - https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/mandelbrot-dsl.ipynb

Super-efficient DSL kernels are coming with the forthcoming Python-Blosc2 4.1.0.

A DSL kernel is a function that takes ndarray objects as input and returns an ndarray object as output.

These allow for full vectorized/tensorized operations on Blosc2 NDArray objects (or NumPy/PyTorch arrays). They can be JIT compiled too!

Their performance on the classic Mandelbrot example is kind of mind-blowing.

See https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/mandelbrot-dsl.ipynb for more details.
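For a feel of what such an array-in/array-out kernel computes, here is a hedged, plain-NumPy sketch of a vectorized Mandelbrot escape-time function. The function name and structure are illustrative only, not the Blosc2 DSL API; a real DSL kernel would express the same computation but be compiled and chunk-aware:

```python
import numpy as np

def mandelbrot_escape(c, maxiter=50):
    """Vectorized Mandelbrot escape-time kernel (illustrative sketch).

    Takes an array of complex points `c` and returns the iteration at
    which each point escapes |z| > 2, or `maxiter` if it never does.
    Only still-active points are iterated, so escaped values stay put.
    """
    z = np.zeros_like(c)
    escape = np.full(c.shape, maxiter, dtype=np.int32)
    for it in range(maxiter):
        active = escape == maxiter
        z[active] = z[active] * z[active] + c[active]
        newly = active & (np.abs(z) > 2)
        escape[newly] = it
    return escape

# Sample grid over the classic view of the set
y, x = np.mgrid[-1.2:1.2:200j, -2.0:0.6:260j]
counts = mandelbrot_escape(x + 1j * y)
assert counts.min() >= 0 and counts.max() <= 50
```

Because every step is a whole-array operation, this shape of code is exactly what vectorized/tensorized engines (and JIT compilers) can accelerate.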

Just took the new engine in Python-Blosc2 4.0 for a spin, and the performance results are quite awesome. Processing a 400 MB array:

🔹 NumPy baseline: 146 ms
🔹 Blosc2 (on NumPy arrays): 73.1 ms (2x faster)
🔹 Blosc2 (on native Blosc2 arrays): 15.1 ms (**9.6x faster!**) 🤯

The best part? It fully supports NumPy's array and ufunc interfaces. High performance with zero friction! 🏎️💨
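Exact numbers depend on the machine and expression, but the measurement pattern is easy to reproduce. Below is a minimal sketch using only NumPy and `timeit`; the array is a small stand-in for the 400 MB one from the benchmark, and the Blosc2 side is shown as hedged comments since it needs python-blosc2 installed:

```python
import timeit
import numpy as np

# Small stand-in for the 400 MB array; scale `n` up to reproduce
# the benchmark size (7000 x 7000 float64 is roughly 400 MB).
n = 1_000
a = np.linspace(0.0, 1.0, n * n).reshape(n, n)

# NumPy baseline: evaluate a typical element-wise expression.
def numpy_expr(x):
    return ((x ** 2 + 2 * x + 1) / (x + 1)) ** 0.5

t = timeit.timeit(lambda: numpy_expr(a), number=5) / 5
print(f"NumPy: {t * 1e3:.1f} ms per evaluation")

# With python-blosc2 installed, the same expression can go through
# the compressed, chunked engine (hedged sketch, not benchmarked here):
#   import blosc2
#   ba = blosc2.asarray(a)                 # native Blosc2 NDArray
#   res = ((ba ** 2 + 2 * ba + 1) / (ba + 1)) ** 0.5
#   out = res[:]                           # lazy expression evaluated here
```

The same-expression, swap-the-container structure is what makes the "zero friction" claim concrete: the NumPy code path and the Blosc2 code path read identically.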

More info: https://ironarray.io/blog/miniexpr-powered-blosc2

#Python #DataScience #NumPy #HighPerformanceComputing #Blosc2 #OpenSource

RE: https://fosstodon.org/@Blosc2/116000946640721384

Happy to sponsor this work, and very pleased with the results 👍

This will be immensely helpful for making our Cat2.Cloud offering more efficient and user-friendly. ❤️

📒 OSS Synergy: Blosc2 🤝 OpenZL 🔍

Exciting plugin 🔌 announcement - you can now use the new OpenZL compression 🗜️ library from Blosc2: https://github.com/Blosc/blosc2-openzl!

It's as simple as

πš™Μ²πš’Μ²πš™Μ²β€‚Μ²πš’Μ²πš—Μ²πšœΜ²πšΜ²πšŠΜ²πš•Μ²πš•Μ²β€‚Μ²πš‹Μ²πš•Μ²πš˜Μ²πšœΜ²πšŒΜ²πŸΈΜ²β€“Μ²πš˜Μ²πš™Μ²πšŽΜ²πš—Μ²πš£Μ²πš•Μ²

and just like that, OpenZL compression + Blosc2 compute engine!

Thanks to Yann Collet, his team and all contributors to the OpenZL project - check it out here https://openzl.org/.

Find out more about how the Blosc team implemented this plugin here ⏩ https://blosc.org/posts/openzl-plugin/

Visualize data on your phone with Cat2Cloud! 📱✨ Create interactive plots directly in your mobile browser. No laptop needed!

Get started in seconds:

1. Launch our live demo 🚀
2. Run the notebook & use Matplotlib/Plotly to create graphs 🎨
3. Customize your plots & render them instantly ✨
4. Register to save & share your work 💾

Experience data visualization on the go! 📈🎉 Try it: https://cat2.cloud/demo/static/jupyterlite/notebooks/index.html?path=@public/examples/ironpill_nb.ipynb

RE: https://mastodon.social/@luke_shaw_ironarray/115661753498449987

Numba can make UDFs really shine ✨✨✨