AshutoshShrivastava (@ai_for_success)

Hitem3D V2.0 has launched. It converts photos and images into full 3D models, supports part-level refinement via model segmentation, and can turn an image into an STL file for 3D printing. The post also suggests combining it with Google Nano Banana to build game assets.

https://x.com/ai_for_success/status/2032488717458547103

#hitem3d #3dgeneration #imageto3d #3dprinting #gamedev

AshutoshShrivastava (@ai_for_success) on X

Hitem3D V2.0 just launched. It's an AI 3D model generation tool that does three things:
- Photo or image to full 3D model
- Model segmentation so you can isolate and refine parts
- Image to STL for 3D printing
Combine this with Google Nano Banana and you can build game assets,

X (formerly Twitter)
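Since the post highlights image-to-STL export for 3D printing, it helps to see how simple the STL format itself is: just a list of triangles. Below is a minimal ASCII STL writer, a generic sketch of the file format and not anything from Hitem3D's internals:

```python
def write_ascii_stl(path, triangles, name="mesh"):
    """Write a list of triangles, each ((x,y,z), (x,y,z), (x,y,z)),
    to an ASCII STL file. Normals are written as 0 0 0; most slicers
    recompute them from the counter-clockwise vertex winding."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in (v0, v1, v2):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")
```

Even a single triangle written this way opens in any slicer; the binary STL variant is just a more compact encoding of the same triangle list.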

Jerome | InsaneUnreal (@insaneUEFN)

The author shared a 3D generation workflow that blends several platforms and tools: Tripo for the initial high-poly model, Hunyuan 3D Studio for the low-poly conversion and automatic UV unwrap, and Trellis2 in ComfyUI for texturing and upscaling. He found that Tripo worked best for generating the initial 3D model from his two reference images (front and side views).

https://x.com/insaneUEFN/status/2013623255895081074

#3dgeneration #comfyui #tripo #hunyuan3d

Jerome | InsaneUnreal (@insaneUEFN) on X

Doing some more platform blending 😄
- Tripo initial highpoly generation
- Hunyuan 3D Studio for lowpoly + auto UV unwrap
- Trellis2 in ComfyUI for texturing + upscaling.
Tripo worked the best with my 2 reference images (front side + side view) to create the initial 3D model.


Tencent HY (@TencentHunyuan)

Tencent announced that it gathered creators at its Shenzhen headquarters to demo the latest capabilities of its open-source 3D generation model, Tencent HY 3D. Since its 2024 launch the model has surpassed 3 million downloads worldwide, and building on that momentum the company took its 3D Engine global in November 2025.

https://x.com/TencentHunyuan/status/2011385381560926700

#3d #opensource #tencent #3dgeneration

Tencent HY (@TencentHunyuan) on X

We recently gathered creators at our Shenzhen HQ to test the latest capabilities of Tencent HY 3D. Since its 2024 launch, our open-source 3D generation model has surpassed 3 million downloads globally. Building on this momentum, we took our 3D Engine global in November 2025,


New research on controllable 3D scene generation for robot training! It combines diffusion models, reinforcement learning, and MCTS search to produce diverse, complex, and realistic virtual environments for robots. Notably, scenes can be tailored to specific tasks while guaranteeing physical feasibility. A dataset of more than 44 million scenes has been released.

#AI #Robotics #3DGeneration #MachineLearning #SceneSynthesis #Robot

https://www.reddit.com/r/singularity/comments/1o
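The MCTS component mentioned above can be illustrated by its core selection rule, UCB1, which balances exploiting high-value scene candidates against exploring rarely tried ones. This is a generic textbook sketch, not code from the paper:

```python
import math

def ucb1(value_sum, visits, parent_visits, c=1.4):
    """UCB1 score for one child node: average value (exploitation)
    plus a bonus that grows for rarely visited children (exploration)."""
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    exploit = value_sum / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Pick the index of the child maximizing UCB1.
    children is a list of (value_sum, visits) pairs."""
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```

In a scene-synthesis setting, each child would correspond to a candidate scene edit, and the value estimates would come from task-specific feasibility or diversity rewards.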

New AI model turns photos into explorable 3D worlds, with caveats

Openly available AI tool creates steerable 3D-like video, but requires serious GPU muscle.

Ars Technica
LRM is the method behind the 3D generation model from Stability AI. The paper includes a very clear diagram showing how it works. Someday maybe an LLM will be able to output a code implementation given a diagram like this ... https://arxiv.org/abs/2311.04400 #GenerativeAI #3DGeneration (via https://message.haoxiang.org)
LRM: Large Reconstruction Model for Single Image to 3D

We propose the first Large Reconstruction Model (LRM) that predicts the 3D model of an object from a single input image within just 5 seconds. In contrast to many previous methods that are trained on small-scale datasets such as ShapeNet in a category-specific fashion, LRM adopts a highly scalable transformer-based architecture with 500 million learnable parameters to directly predict a neural radiance field (NeRF) from the input image. We train our model in an end-to-end manner on massive multi-view data containing around 1 million objects, including both synthetic renderings from Objaverse and real captures from MVImgNet. This combination of a high-capacity model and large-scale training data empowers our model to be highly generalizable and produce high-quality 3D reconstructions from various testing inputs, including real-world in-the-wild captures and images created by generative models. Video demos and interactable 3D meshes can be found on our LRM project webpage: https://yiconghong.me/LRM.

arXiv.org