Fabric – a distributed computing network that lets laptops rent out idle resources to developers & researchers. After ~43k views, the author has redesigned the UI/UX, added email verification, and improved safety & transparency. There are currently ~200 device providers and 15 projects using it. Feedback wanted: Is it clear what Fabric does? Any barriers during installation? Does the safety page make sense? How does it compare to Google Colab? Are more docs or benchmarks needed? #Fabric #DistributedComputing #Tech #CôngNghệ #Feedback #MạngTínhToán #OpenSource

https://www.redd

SETI@Home Has Finally Been Completed After 27 Years, Here's What Was Found

YouTube

A new project lets you run large AI models across multiple computers over plain WiFi. It can combine different hardware such as Apple Silicon, NVIDIA, and CPUs. An interesting direction for AI at home.

#AI #MôHìnhNgônNgữLớn #LLM #MachineLearning #MáyHọc #Tech #CôngNghệ #DistributedComputing #VietNam

https://www.reddit.com/r/LocalLLaMA/comments/1qha0kd/run_large_models_across_multiple_machines_over/

Enthusiasts used their home computers to search for ET—scientists are homing in on 100 signals they found

For 21 years, between 1999 and 2020, millions of people worldwide loaned UC Berkeley scientists their computers to search for signs of advanced civilizations in our galaxy.

Phys.org

Hello, fediverse!

As 2026 gets off to a rough start, I invite you to join the Science United project, which uses your computer's processing power (CPU, and GPU if you have one) for university research projects — medicine, astronomy, mathematics... — via the BOINC software created by UC Berkeley.

https://scienceunited.org

#BOINC #fediBOINC #ScienceUnited #CPU #GPU #science #DistributedComputing #médecine #astronomie #mathématiques

Science United

Science United lets you supply computing power to science research projects in a wide range of areas

Discord's ML Scaling Breakthrough

Discord's machine learning systems have evolved significantly, overcoming scaling challenges with distributed computing.

TechLife

Users discuss "daisy-chaining" several cheap Mac Minis for AI/LLM workloads instead of buying one expensive, maxed-out machine. Despite the upfront cost savings, the big challenge is support for distributed inference software such as Ollama and vLLM on the Metal backend.

#MacMini #AI #LLM #DistributedComputing #Ollama #vLLM #ĐiệnToánPhânTán

https://www.reddit.com/r/LocalLLaMA/comments/1p90pkl/daisy_chaining_macminis/

A new article in Cloud Native Now highlights how pgEdge is enabling distributed #PostgreSQL across multiple #Kubernetes clusters — bringing global scale, high availability, and true cloud-native resilience to #Postgres.

It’s another step forward in simplifying how organizations run Postgres at scale — fully open source, multi-cloud, and Kubernetes-native. 🌍

📰 Read the full feature on Cloud Native Now: https://cloudnativenow.com/features/pgedge-adds-ability-to-distribute-postgres-across-multiple-kubernetes-clusters/

#programming #cloudcomputing #k8s #devops #distributedcomputing

pgEdge Adds Ability to Distribute Postgres Across Multiple Kubernetes Clusters

pgEdge has released a new Kubernetes-ready distribution of its open-source Postgres database, enabling deployments across multiple clusters.

Cloud Native Now

Today I introduced a much-needed feature to #GPUSPH.

Our code supports multi-GPU and even multi-node, so in general if you have a large simulation you'll want to distribute it over all your GPUs using our internal support for it.

However, in some cases, you need to run a battery of simulations and your problem size isn't large enough to justify the use of more than a couple of GPUs for each simulation.

In this case, rather than running the simulations in your set serially (one after the other) using all GPUs for each, you'll want to run them in parallel, potentially even each on a single GPU.

The idea is to find the next available (set of) GPU(s) and launch a simulation on it while sets remain, then wait until a “slot” frees up and start the next one(s) as slots are freed.

Until now, we've been doing this manually, by partitioning the set of simulations to run and starting them in different shells.

There is actually a very powerful command-line tool to achieve this: GNU Parallel. As with all powerful tools, however, it is somewhat cumbersome to configure to get the intended result. And after Doing It Right™ one must remember the invocation magic …

So today I found some time to write a wrapper around GNU Parallel that basically (1) enumerates the available GPUs and (2) appends the appropriate --device command-line option to the invocation of GPUSPH, based on the slot number.
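The slot-based scheduling the wrapper automates can be sketched roughly like this — a hypothetical Python stand-in, not the actual GNU Parallel wrapper. The `--device` flag comes from the post; the GPU list and job commands are illustrative:

```python
import queue
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_batch(commands, gpu_ids):
    """Run each command pinned to a free GPU, at most one job per GPU."""
    free = queue.Queue()
    for g in gpu_ids:
        free.put(g)                      # every GPU starts as a free slot

    def run(cmd):
        gpu = free.get()                 # block until a slot frees up
        try:
            # e.g. a GPUSPH-style invocation gets "--device <n>" appended
            subprocess.run(cmd + ["--device", str(gpu)], check=True)
        finally:
            free.put(gpu)                # hand the slot back
        return gpu

    with ThreadPoolExecutor(max_workers=len(gpu_ids)) as pool:
        return list(pool.map(run, commands))
```

GNU Parallel does the same bookkeeping declaratively via its job-slot replacement string `{%}` (1-based), with something like `parallel -j4 'GPUSPH --device $(( {%} - 1 )) {}' ::: *.ini`.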

#GPGPU #ParallelComputing #DistributedComputing #GNUParallel

#OpenAFS

#AFS is a distributed filesystem product, pioneered at Carnegie Mellon University and supported and developed as a product by Transarc Corporation (now IBM Pittsburgh Labs). It offers a client-server architecture for federated file sharing and replicated read-only content distribution, providing location independence, scalability, security, and transparent migration capabilities. AFS is available for a broad range of heterogeneous systems including UNIX, Linux, Mac OS X, and Microsoft Windows.

IBM branched the source of the AFS product, and made a copy of the source available for community development and maintenance. They called the release OpenAFS.

OpenAFS Foundation
The OpenAFS Foundation is dedicated to fostering the stability and growth of OpenAFS by providing strategic direction and raising money to support its development and maintenance.
#distributedcomputing #foss
https://www.openafs.org/

OpenAFS