- Upload up to 6 photos - multi-view input for accurate reconstruction
- No photos? No problem - type a prompt and FLUX.1-Schnell generates your reference images
- AI vision pipeline - Qwen2.5-VL analyzes your angles and synthesizes the optimal 3D description
- Wireframe inspector - review topology before you export
- GLB export - drop it straight into Blender, ZBrush, Maya, Unity, or Unreal

Bring your own HF token. Nothing is stored server-side.

Works great as a starting mesh for retopology - pair it with [8VIEW AI Studio](ArtelTaleb/8view-ai-studio) to generate your character reference sheets first, then build the 3D asset here.

ArtelTaleb/splat-explorer
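If you want a quick sanity check on an exported file before dropping it into your DCC tool, the GLB container starts with a fixed 12-byte header defined by the glTF 2.0 spec. A minimal sketch (the function name is my own, not part of this space):

```python
import struct

def read_glb_header(data: bytes) -> dict:
    """Parse the 12-byte GLB (binary glTF) header.

    Per the glTF 2.0 spec the header is three little-endian uint32s:
    the magic b'glTF', the container version, and the total file length.
    """
    if len(data) < 12:
        raise ValueError("not enough bytes for a GLB header")
    magic, version, length = struct.unpack("<4sII", data[:12])
    if magic != b"glTF":
        raise ValueError(f"bad magic: {magic!r}")
    return {"version": version, "length": length}

# Example: a synthetic header (magic + version 2 + declared length 12)
header = read_glb_header(struct.pack("<4sII", b"glTF", 2, 12))
print(header)  # {'version': 2, 'length': 12}
```

Anything that fails this check was truncated or isn't a binary glTF file at all.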
What if you could control a 3D model just by talking to it?
Not clicking. Not dragging sliders. Not writing animation code. Just… describing what you want.
"Rotate slowly on the Y axis." "Move forward, don't stop." "Scale up, then reset."
That's the core idea behind Hello 3D World - a space I've been building as an open experiment.

Here's how it works:
You load a 3D model. You describe it to the LLM ("this is a robot", "this is a hot air balloon"). Then you type a natural language command.
The LLM (Qwen 72B, Llama 3, or Mistral) reads your intent and outputs a JSON action: rotate, move, scale, loop, reset. The 3D scene executes it instantly.
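The space's actual JSON schema isn't shown here, so take this as a minimal sketch of the pattern: field names like "action", "axis", and "amount" are assumptions, but the shape - LLM emits JSON, scene-side code dispatches it onto a transform - is the core loop.

```python
import json

def apply_action(state: dict, raw: str) -> dict:
    """Apply one LLM-emitted JSON action to a simple transform state.

    The schema here is hypothetical; the real space defines its own.
    """
    act = json.loads(raw)
    name = act["action"]
    if name == "rotate":
        state["rotation"][act.get("axis", "y")] += act.get("amount", 15)
    elif name == "move":
        state["position"][act.get("axis", "z")] += act.get("amount", 1)
    elif name == "scale":
        state["scale"] *= act.get("factor", 1.5)
    elif name == "reset":
        state = {"rotation": {"x": 0, "y": 0, "z": 0},
                 "position": {"x": 0, "y": 0, "z": 0}, "scale": 1.0}
    return state

state = {"rotation": {"x": 0, "y": 0, "z": 0},
         "position": {"x": 0, "y": 0, "z": 0}, "scale": 1.0}
state = apply_action(state, '{"action": "rotate", "axis": "y", "amount": 30}')
state = apply_action(state, '{"action": "scale", "factor": 2.0}')
print(state["rotation"]["y"], state["scale"])  # 30 2.0
```

Constraining the model to a small action vocabulary is what keeps execution instant: the scene never has to interpret free-form text, only a known JSON shape.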
Today it's simple geometric commands. But what happens when the model understands context? When it knows the object has legs, or wings, or a cockpit? When it can choreograph a sequence from a single sentence?
Maybe this becomes a prototyping tool for robotics. Maybe a no-code animation layer for game dev. Maybe something I haven't imagined yet.
That's why I'm keeping it open: I want to see what other people make it do.
The space includes:
- DR8V Robot + Red Balloon (more models coming)
- 5 lighting modes: TRON, Studio, Neon, Cel, Cartoon
- Import your own GLB / OBJ / FBX
- Built-in screen recorder
- Powered by open LLMs - bring your own HF token
Record your best sequences and share them in the comments. I want to see what this thing can do in other hands.
🎵 MP3 Player - drop your music, hit play. No install.

MP3 Player brings that energy back - straight in your browser.
- Drop your files - MP3, WAV, FLAC, AAC, OGG, AIFF, WMA; it reads them all
- Build your playlist - add tracks one by one or batch-load a whole folder
- Retro LCD display - scrolling track info, elapsed time, the full throwback
- Full controls - play, pause, skip, shuffle, repeat
- Mobile-first - big tactile buttons, works on your phone like an iPod in your pocket
No install. No GPU needed on your end. Just upload and play.
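The scrolling track info comes from the metadata embedded in your files. As an illustration of where that lives (not the player's actual code), most MP3s start with an ID3v2 tag whose 10-byte header is laid out like this:

```python
def parse_id3v2_header(data: bytes):
    """Parse the 10-byte ID3v2 tag header found at the start of many MP3s.

    Layout per the ID3v2 spec: b'ID3', major + revision version bytes,
    a flags byte, then a 4-byte syncsafe (7 bits per byte) tag size.
    """
    if len(data) < 10 or data[:3] != b"ID3":
        return None  # no ID3v2 tag; a player would fall back to the filename
    major, revision = data[3], data[4]
    size = 0
    for b in data[6:10]:
        size = (size << 7) | (b & 0x7F)  # syncsafe: top bit of each byte is 0
    return {"version": (major, revision), "size": size}

# Example: a synthetic ID3v2.3 header declaring a 257-byte tag body
# (syncsafe 257 = 0x00 0x00 0x02 0x01)
hdr = parse_id3v2_header(b"ID3\x03\x00\x00\x00\x00\x02\x01")
print(hdr)  # {'version': (3, 0), 'size': 257}
```

The syncsafe size keeps the tag from ever containing a byte pattern that looks like an MPEG frame sync, which is why the size math uses 7 bits per byte instead of 8.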