
Kdenlive 25.08.0 has been released. This release improves the SAM2-based background removal tool: it now consumes less memory, no longer picks up hidden system packages for SAM2, and lets you install a specific CUDA version for Whisper and SAM2.

#Kdenlive #SAM2 #Whisper #Cuda #memory

If I'm not mistaken, one of the libraries/models used for the object selection in #Krita is the same as the one used for #Kdenlive's new Object Mask effect (#SAM2).

https://docs.kdenlive.org/en/effects_and_filters/video_effects/alpha_mask_keying/object_mask.html

#videoediting #FOSS #opensource #libregraphics

kioworker appears to be a file transfer tool that is part of KDE Frameworks, so IDK if it would be possible to get this to work on a Mac...

My next idea is to try "Use system package only" and see if I can install #SAM2 locally and get the model installed outside of Kdenlive...
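One thing worth checking first is whether a system Python can actually see a sam2 package at all. A minimal diagnostic sketch (the top-level module name `sam2` matches the upstream repo, but whether Kdenlive's "Use system package only" option consults this same interpreter is an assumption to verify):

```python
# Quick check: is a "sam2" package visible to this Python interpreter?
# (Assumption: Kdenlive's "Use system package only" option resolves
# packages via the same interpreter -- check which Python Kdenlive
# is configured to use in its settings.)
import importlib.util

def package_status(name):
    """Return whether a top-level package can be found on this interpreter."""
    return "installed" if importlib.util.find_spec(name) else "not found"

print("sam2:", package_status("sam2"))
```

If this prints "not found", installing sam2 into that interpreter's environment (or pointing Kdenlive at a venv that has it) would be the next step.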

@kdenlive @kde Can anyone help me use this #SAM2 #ObjectDetection in #kdenlive? I got stuck at the first hurdle!

The plugin setting says it's installed, but the Effect Stack still just shows a "Configure" button? This is on macOS... what am I missing?

Kdenlive 25.04.0 has been released. This version uses the Segment Anything Model 2 (SAM2) from Meta to make Object Segmentation possible.

Segment Anything Model 2 (SAM2) is a foundation model for visual segmentation in images and videos, developed by Meta AI. It builds on the original Segment Anything Model (SAM) and is designed for real-time, interactive segmentation.

https://github.com/facebookresearch/sam2

#Linux #KDE #Kdenlive #video #editor #META #SAM2
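The "promptable" workflow SAM2 popularized (click a point, get back a mask for the object under it) can be illustrated with a dependency-free toy: treat a frame as a grid of color labels and grow a region outward from the clicked pixel. This is a conceptual sketch only, not the sam2 API; the real model predicts masks with a neural network rather than flood fill.

```python
# Toy illustration of point-prompted segmentation:
# click a pixel, flood-fill the connected region sharing its "color".
# Conceptual sketch only -- real SAM2 predicts masks with a neural network.
from collections import deque

def segment_from_point(frame, seed):
    """Return a boolean mask of pixels 4-connected to `seed` with its value."""
    h, w = len(frame), len(frame[0])
    sy, sx = seed
    target = frame[sy][sx]
    mask = [[False] * w for _ in range(h)]
    queue = deque([(sy, sx)])
    mask[sy][sx] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and frame[ny][nx] == target:
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask

frame = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 2, 1],
]
mask = segment_from_point(frame, (0, 2))  # "click" on the object of 1s
print(sum(v for row in mask for v in row))  # prints 5: pixels selected
```

The point prompt plus mask output is the same interaction shape Kdenlive's Object Mask effect exposes; SAM2 just does the hard part of deciding where the object boundary is.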


Meta launches SAM 2: The first Unified Model, Check Details Here -
https://techchilli.com/news/meta-launches-sam-2-the-first-unified-model/

#meta #ai #sam2 #aimodel


Meta's Segment Anything Model 2 (SAM 2) extends segmentation to video, following objects in real time across frames and enabling easier video generation and editing.

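The "follow objects across frames" idea can be sketched without any model at all: given last frame's mask, pick the candidate region in the new frame that overlaps it best (intersection over union). SAM2 itself does this with a learned streaming memory module, not IoU matching; this toy only illustrates the tracking concept.

```python
# Toy sketch of carrying an object mask from one frame to the next:
# choose the candidate region that best overlaps (IoU) the previous mask.
# SAM2 uses a learned memory module instead -- this is just the concept.

def iou(a, b):
    """Intersection-over-union of two pixel sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def propagate(prev_mask, candidates):
    """prev_mask: set of (y, x); candidates: list of sets. Return best match."""
    return max(candidates, key=lambda c: iou(prev_mask, c))

prev_mask = {(0, 0), (0, 1), (1, 0), (1, 1)}
candidates = [
    {(0, 1), (0, 2), (1, 1), (1, 2)},  # object shifted right by one pixel
    {(5, 5), (5, 6)},                  # unrelated region elsewhere
]
tracked = propagate(prev_mask, candidates)
print(tracked == candidates[0])  # prints True: the shifted object wins
```

Repeating this frame after frame is what turns single-image segmentation into video object tracking.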
Segment Anything Model 2 (SAM 2) is a model for visual segmentation in images and videos. With a streaming transformer architecture, it enables real-time interactive video segmentation. The SA-V dataset, collected through model-in-the-loop user interaction, improves both the model and the data.
#AI #SAM2 #MachineLearning #ComputerVision
https://github.com/facebookresearch/segment-anything-2
Meta introduces new AI Segment Anything Model 2 (SAM 2)

Meta Blog: Today, we’re announcing the Meta Segment Anything Model 2 (SAM 2), the next generation of the Meta Segment Anything Model, now supporting object segmentation in videos and images. We’re releasing SAM 2 under an Apache 2.0 license, so anyone can use it to build their own experiences...
