Prithiv Sakthi (@prithivMLmods)

The Map-Anything v1 demo is now live on Hugging Face Spaces. It is a universal 3D reconstruction model that performs 3D reconstruction, depth estimation, normal map generation, and interactive measurement from multiple images and video, with Gradio and Rerun integration.

https://x.com/prithivMLmods/status/2035055111358357957

#huggingface #3dreconstruction #gradio #computervision #opensource

Prithiv Sakthi (@prithivMLmods) on X

Map-Anything v1 (Universal Feed-Forward Metric 3D Reconstruction) demo is now available on Hugging Face Spaces. Built with @Gradio and integrated with @rerundotio , it performs multi-image and video-based 3D reconstruction, depth, normal map, and interactive measurements.

X (formerly Twitter)

852話(hakoniwa) (@8co28)

The post says AI-driven 3D conversion from a single image has reached the point where it can automatically produce a high-quality result that can be sent straight to a 3D printer in full color. Rigging is also supported (A-pose), and since the output is an ordinary 3D mesh rather than generated video, it can be used directly for printing (e.g., prototyping).

https://x.com/8co28/status/2032638757816643636

#3d #3dreconstruction #3dprinting #ai

852話(hakoniwa) (@8co28) on X

AI-based 3D conversion from a single image has reached an impressive level; we are now in an era where it can output a format you can send straight to a color 3D printer. With an A-pose, rigging is possible too. Maybe I'll try printing something. (This is not video-generation AI; it's an ordinary 3D mesh.)

X (formerly Twitter)

DeepMind and #UC #Berkeley have teamed up to give us #LoGeR, a project that claims to tackle long video 3D reconstruction. 🚀 They've thrown in buzzwords like "Hybrid Memory" and "Sliding Window Attention" to distract you from the fact that it still drifts after 19,000 frames. 😜 Basically, it's a convoluted way to say: "Look, Ma! No hands!" while tripping over the finish line. 🙃

https://loger-project.github.io

#DeepMind #3DReconstruction #AIResearch #VideoTech #HackerNews #ngated

LoGeR: Long-Context Geometric Reconstruction with Hybrid Memory

fly51fly (@fly51fly)

A tweet announcing that the paper 'VGG-T³: Offline Feed-Forward 3D Reconstruction at Scale' is now available on arXiv. The listed authors include S. Elflein, R. Li, S. Agostinho, and Z. Gojcic, with NVIDIA given as the affiliation. The paper concerns large-scale offline feed-forward 3D reconstruction.

https://x.com/fly51fly/status/2027867633438335281

#vggt3 #3dreconstruction #computervision #nvidia

fly51fly (@fly51fly) on X

[CV] VGG-T³: Offline Feed-Forward 3D Reconstruction at Scale S Elflein, R Li, S Agostinho, Z Gojcic… [NVIDIA] (2026) https://t.co/9LAqiiFr7a

X (formerly Twitter)

Tencent HY (@TencentHunyuan)

Tencent's HY 3D 3.1 has been rolled out on its global platform. The update substantially improves texture fidelity and geometry precision, and supports up to 8 input views for higher reconstruction accuracy and sculpt-level detail. New creators receive 20 free uses.

https://x.com/TencentHunyuan/status/2016449283428659599

#tencent #hy3d #3dreconstruction #texture

Tencent HY (@TencentHunyuan) on X

Tencent HY 3D 3.1 is now available on our global platform! This update delivers a massive leap in texture fidelity and geometry precision. It also supports up to 8 input views for ultimate reconstruction accuracy and sculpt-level detail. 💎 New creators can now access 20 free

X (formerly Twitter)


The new D4RT paper introduces a single transformer model that infers depth, spatio-temporal correspondence, and camera parameters from a single video. A novel query mechanism reduces the computational load while setting a new state of the art in 4D reconstruction, outperforming prior methods. #AI #ComputerVision #DeepLearning #3DReconstruction #TríTuệNhânTạo #ThịGiácMáy #TáiTạo3D

https://deepmind.google/blog/d4rt-teaching-ai-to-see-the-world-in-four-dimensions/

D4RT: Unified, Fast 4D Scene Reconstruction & Tracking

Meet D4RT, a unified AI model for 4D scene reconstruction and tracking.

Google DeepMind

Europeana’s EUreka3D is reconstructing Europe’s cultural legacy in 3D and making it interoperable across institutions. By merging archives, models, and metadata, it’s turning scattered memory into a shared spatial archive, usable for research, education, and public storytelling.

https://pro.europeana.eu/project/eureka3d-european-union-s-rekonstructed-content-in-3d

#DigitalHeritage #3DReconstruction #CulturalMemory

EUreka3D - European Union's REKonstructed content in 3D | Europeana PRO

EUreka3D built the capacity of small cultural heritage institutions in digital transformation, particularly on issues related to 3D digitisation.

Europeana PRO

New AI model turns photos into explorable 3D worlds, with caveats

Openly available AI tool creates steerable 3D-like video, but requires serious GPU muscle.

Ars Technica

The Dispersed Chinese Art Digitization Project bridges centuries and continents through digital reconstructions of major historical sites and artifacts – from the Zhihua Temple to the Six Horses of Zhaoling and Longmen Binyang Central Cave.

Led by the Center for the Art of East Asia at UChicago and Xi’an Jiaotong University, with major museum partners in China, Japan, and the US.

https://caea.lib.uchicago.edu/dcadp/en/

#DigitalHeritage #EastAsianArt #3DReconstruction #MuseumTech #DigitalHumanities