You can now register Pixi workspaces globally and access them by name from anywhere in your shell!
This was a big request from our conda CLI lovers!
Thank you, Sophia Castellarin, for the implementation!
Totally normal workflow:
I work on documenting #Jinja syntax used in #CondaForge recipes.
https://github.com/conda-forge/conda-forge.github.io/pull/2782
I decide that the snippets should use Jinja syntax highlighting. However, #Prism doesn't have one. But the Internets suggest Twig would work instead.
https://github.com/PrismJS/prism/issues/759
So I try Twig. Except that the Twig highlighter crashes in #Docusaurus. But there's a workaround.
https://github.com/facebook/docusaurus/issues/8065
So I copy the code over into the project, fix it, and while at it rename it to "jinja" and adjust it a bit.
But then, highlighting Jinja expressions alone looks pretty bleak, so let's combine it with YAML… Hmm, that actually doesn't work that well and needs some more adjustments. And before you know it, I have a pretty new Jinja highlighter, and a recipe highlighter that combines Jinja expressions, YAML, v0 recipe selectors, v1 if:/skip: conditions, and shell/cmd variable highlighting for good measure.
https://github.com/conda-forge/conda-forge.github.io/pull/2790
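For context, these are the kinds of constructs such a combined highlighter has to cope with. A made-up, minimal illustration (not taken from the actual PR):

```yaml
# v0 (meta.yaml): Jinja templating plus trailing [selector] comments
{% set version = "1.2.3" %}
package:
  name: example
  version: {{ version }}
build:
  skip: true  # [win]

# v1 (recipe.yaml): ${{ }} expressions and if:/then: conditions instead
requirements:
  build:
    - if: unix
      then: ${{ compiler('c') }}
```

A plain YAML highlighter chokes on the Jinja braces, and a plain Jinja one leaves the YAML structure unstyled, which is why the two grammars had to be merged.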
"Building From Source Shouldn't Be This Hard"
That's our motto. Read our latest blog post to understand what we're building and how Pixi already helps you today!
🗞️ Read it here: https://prefix.dev/blog/building-from-source-not-hard
I'm looking at Repology, and I think most of the distributions and other downstreams have rightfully boycotted #Python #chardet #copywashing. Of course, there's also the possibility that some of them are simply out of date.
So far chardet-7 is distributed by #Chromebrew, #CondaForge (not on Repology), #Homebrew, #KaOS, #OpenIndiana, #openmamba, #Ravenports, #Spack and #T2 SDE. Shame on you!
https://repology.org/project/chardet/versions
https://repology.org/project/python%3Achardet/versions
Another post on #Quansight PBC blog: "BLAS/LAPACK #packaging"
https://labs.quansight.org/blog/blas-lapack-packaging
"""
#BLAS and #LAPACK are the standard libraries for linear algebra. The original implementation, often called Netlib LAPACK, developed since the 1980s, nowadays serves primarily as the origin of the standard interface, the reference implementation and a conformance test suite. The end users usually use optimized implementations of the same interfaces. The choice ranges from generically tuned libraries such as OpenBLAS and BLIS, through libraries focused on specific hardware such as Intel® oneMKL, Arm Performance Libraries or the Accelerate framework on macOS, to ATLAS that aims to automatically optimize for a specific system.
The diversity of available libraries, developed in parallel with the standard interfaces, along with vendor-specific extensions and further downstream changes, adds quite a bit of complexity around using these libraries in software, and distributing such software afterwards. This problem entangles implementation authors, consumer software authors, build system maintainers and distribution maintainers. Software authors generally wish to distribute their packages built against a generically optimized BLAS/LAPACK implementation. Advanced users often wish to be able to use a different implementation, more suited to their particular needs. Distributions wish to be able to consistently build software against their system libraries, and ideally provide users the ability to switch between different implementations. Then, build systems need to provide the scaffolding for all of that.
I have recently taken up the work to provide such a scaffolding for the Meson build system; to add support for BLAS and LAPACK dependencies to Meson. While working on it, I had to learn a lot about BLAS/LAPACK packaging: not only how the different implementations differ from one another, but also what is changed by their respective downstream packaging. In this blog post, I would like to organize and share what I have learned.
"""
Well dammit. The X.h header is missing from the #conda #condaforge packages.
If you include #xorg Xlib it’ll bork when it can’t find X.h
Naturally #wayland isn’t an option because many required libraries don’t have packages to provide it.
Developing graphical apps on #linux is extremely annoying.
Did a lot of work on conda-forge this week handling various packages for exotic architectures, including some Rust CLI tools I got to build on ppc64le (for fun), and CUDA-dependent libs I wanted to enable on Grace Hopper systems using aarch64.
Previously I've relied mostly on the migrator bot, which apparently tends to default to QEMU emulation, but found out at https://github.com/conda-forge/cargo-auditable-feedstock/pull/4#issuecomment-3291674939 that cross-compilation 1) works and 2) is faster. It's documented at https://conda-forge.org/docs/maintainer/knowledge_base/#how-to-enable-cross-compilation, but it only 'clicked' in my brain recently.
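The knob documented in that knowledge-base page is `build_platform` in the feedstock's `conda-forge.yml`. A sketch for the aarch64 case (mapping values are from the docs; adjust per feedstock):

```yaml
# conda-forge.yml -- ask CI to cross-compile for linux-aarch64
# on a native linux-64 builder instead of emulating with QEMU
build_platform:
  linux_aarch64: linux_64
```

After rerendering, the feedstock builds the aarch64 artifacts on x86_64 runners, which is what makes it so much faster than emulation.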
Level of difficulty/pain: 🐍 Python+CUDA/C++ > Rust 🦀