RE: https://fosstodon.org/@Blosc2/116481891984444131
The beast is out of his cage! 🧸
Look at the performance bump coming in the next Python-Blosc2 release.
Matrix multiplication has been sped up by using blocking, Blosc2 prefilters, and its own efficient, multithreaded engine.
Expect between 5x and 6x better speed for matrices with no padding.
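The blocking idea can be sketched in plain NumPy (illustrative only; the actual speedup in Python-Blosc2 comes from its prefilters and multithreaded engine, which are not shown here):

```python
import numpy as np

def blocked_matmul(a, b, bs=128):
    # Multiply a @ b one (bs x bs) tile at a time, so each tile of the
    # operands stays hot in cache while it is reused.
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m), dtype=np.result_type(a, b))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                out[i:i+bs, j:j+bs] += a[i:i+bs, p:p+bs] @ b[p:p+bs, j:j+bs]
    return out

a = np.random.rand(500, 300)
b = np.random.rand(300, 400)
c = blocked_matmul(a, b)
```

In Blosc2 the tiles are the compressed blocks themselves, so the decompress-compute-recompress cycle happens per block rather than on the whole matrix.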
Huge shoutout to Luke Shaw (@luke_shaw_ironarray) for making this happen!
We designed the DSL kernels in Blosc2 so that they could be used on all the main platforms, and nowadays this necessarily includes WebAssembly (WASM) in the browser.
We also implemented a new JIT compiler for WASM, specifically aimed at Blosc2 DSL kernels.
Look at this online notebook: https://cat2.cloud/demo/roots/@public/examples/mandel-jit-vs-nojit.ipynb
where we run a Mandelbrot DSL kernel, with and without JIT. You can even run it yourself by just clicking the "Run" button.
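For reference, the kernel computes the classic escape-time iteration. A plain-NumPy version (not the Blosc2 DSL itself, and without any JIT) looks roughly like this:

```python
import numpy as np

def mandelbrot(width=400, height=300, maxiter=50):
    # Count, per pixel, how many iterations z -> z**2 + c stays bounded.
    x = np.linspace(-2.0, 0.5, width)
    y = np.linspace(-1.25, 1.25, height)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for _ in range(maxiter):
        mask = np.abs(z) <= 2.0     # pixels that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts += mask              # bump the count where still bounded
    return counts

img = mandelbrot()
```

The DSL/JIT version in the notebook expresses the same iteration but compiles it down so the loop runs over compressed blocks instead of full temporaries.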
Enjoy!
Blosc2 4.1 Release!
We've packed a lot into this minor release: optimised compression and funcs for unicode arrays; cumulative reductions; memory-map support for store containers like `DictStore`; and DSL kernel functionality for faster, compiled, user-defined funcs! 🎉
Notebook here - https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/mandelbrot-dsl.ipynb
Super-efficient DSL kernels are coming with the forthcoming Python-Blosc2 4.1.0.
A DSL kernel is a function that takes ndarray objects as input and returns an ndarray object as output.
These allow for full vectorized/tensorized operations on Blosc2 NDArray objects (or NumPy/PyTorch arrays). They can be JIT compiled too!
Their performance on the classic Mandelbrot example is kind of mind-blowing.
See https://github.com/Blosc/python-blosc2/blob/main/examples/ndarray/mandelbrot-dsl.ipynb for more details.
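As a rough plain-Python illustration of the shape such a kernel takes (the expression here is made up; the real DSL compiles the whole expression into a single pass over the blocks):

```python
import numpy as np

def kernel(a, b):
    # A DSL kernel is just a function over whole arrays: it takes
    # ndarray-like inputs and returns an ndarray-like output, with every
    # operation fully vectorized (no Python-level element loop).
    return np.sin(a) ** 2 + np.cos(b) ** 2

x = np.linspace(0.0, 1.0, 1000)
res = kernel(x, x)   # sin^2(x) + cos^2(x) == 1 everywhere
```

With NumPy each operator materializes a temporary; the point of the Blosc2 DSL is to fuse the expression and evaluate it block by block instead.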
Just took the new engine in Python-Blosc2 4.0 for a spin, and the performance results are quite awesome. Processing a 400 MB array:
🔹 NumPy baseline: 146 ms
🔹 Blosc2 (on NumPy arrays): 73.1 ms (2x faster)
🔹 Blosc2 (on native Blosc2 arrays): 15.1 ms (**9.6x faster!**) 🤯
The best part? It fully supports NumPy's array and ufunc interfaces. High performance with zero friction! 🏎️💨
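A minimal sketch of what supporting those interfaces means in practice: a toy container (not Blosc2's implementation) that NumPy ufuncs can consume directly via the `__array__` and `__array_ufunc__` protocols:

```python
import numpy as np

class Wrapped:
    """Toy array container that opts in to NumPy's protocols."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array__(self, dtype=None, copy=None):
        # Conversion hook used by np.asarray()/np.array().
        return self.data if dtype is None else self.data.astype(dtype)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Route any ufunc (np.sqrt, np.add, ...) through the raw buffers.
        arrays = [np.asarray(x) for x in inputs]
        return Wrapped(getattr(ufunc, method)(*arrays, **kwargs))

w = Wrapped([1.0, 4.0, 9.0])
out = np.sqrt(w)          # dispatched through __array_ufunc__
total = np.add(out, 1.0)  # mixed operands work too
```

Because Blosc2 NDArray objects implement these same protocols, existing NumPy code can call ufuncs on them without modification, which is where the "zero friction" comes from.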
More info: https://ironarray.io/blog/miniexpr-powered-blosc2
#Python #DataScience #NumPy #HighPerformanceComputing #Blosc2 #OpenSource
RE: https://fosstodon.org/@Blosc2/116000946640721384
Happy to sponsor this work, and very pleased with the results 🙂
This will be immensely helpful for making our Cat2.Cloud offer more efficient and user-friendly. ❤️
📢 OSS Synergy: Blosc2 🤝 OpenZL 🚀
Exciting plugin 🔌 announcement: you can now use the new OpenZL compression 🗜️ library from Blosc2: https://github.com/Blosc/blosc2-openzl!
It's as simple as
`pip install blosc2-openzl`
and just like that, OpenZL compression + Blosc2 compute engine!
Thanks to Yann Collet, his team, and all contributors to the OpenZL project. Check it out here: https://openzl.org/.
Find out more about how the Blosc team implemented this plugin here β© https://blosc.org/posts/openzl-plugin/
Visualize data on your phone with Cat2Cloud! 📱✨ Create interactive plots directly in your mobile browser. No laptop needed!
Get started in seconds:
1. Launch our live demo 🚀
2. Run the notebook & use Matplotlib/Plotly to create graphs 🎨
3. Customize your plots & render them instantly ✨
4. Register to save & share your work 💾
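Step 2 in miniature: a tiny Matplotlib script of the kind you would run in that notebook (the plot contents here are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this also runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Build a simple figure; in the notebook it renders inline in the browser.
x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("demo.png")  # or just display the figure inline in the notebook
```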
Experience data visualization on the go! 📊📈 Try it: https://cat2.cloud/demo/static/jupyterlite/notebooks/index.html?path=@public/examples/ironpill_nb.ipynb
RE: https://mastodon.social/@luke_shaw_ironarray/115661753498449987
Numba can make UDFs really shine ✨✨✨