Phase Roadmap

All 16 phases are approved. Phases 1-11 are deployed and live. Phases 12-16 are scaffolded (types, stores, service stubs, panel UIs, Cloud Function stubs) but their ML/compute backends are not yet connected.

Phase Status

| Phase | Feature | Status | Key Files |
| --- | --- | --- | --- |
| 1 | Auth (email/password + Google OAuth) | Live | AuthPage, useAuthStore, auth.ts |
| 2 | Landing page with WebGL hero | Live | yugma-landing/ |
| 3 | Scene templates + loader | Live | sceneTemplates.ts, ProjectInfoPanel |
| 4 | Share links (URL-encoded) | Live | shareLinkService, urlEncoder |
| 5 | Library panel (primitives + GLTF + Sketchfab) | Live | LibraryPanel/ |
| 6 | Real-time collaboration (room codes, cursors) | Live | collabService, CollabOverlay |
| 7 | AI Scene Composer | Live | AIPanel, aiCompose, aiSerializer |
| 8 | Text-to-3D (Meshy) | Live | GenerateSection, generationService |
| 9 | Advanced collab (comments, feed) | Live | CommentPin, ChangeFeed |
| 10 | Export (GLB, screenshot, embed) | Live | exportUtils, EmbedPage |
| 11 | AI Materials (30 presets, AI gen) | Live | materialPresets, AITextureSection |
| 12 | Video-to-3D reconstruction | Scaffolded | video.types, useVideoStore, VideoPanel |
| 13 | Industrial digital twins | Scaffolded + Mounted | twin.types, useTwinStore, TwinPanel |
| 14 | Product vibe-coder | Scaffolded | product.types, useProductStore, ProductPanel |
| 15 | Factory simulation (physics) | Scaffolded + Wrapped | physics.types, usePhysicsStore, PhysicsWorld |
| 16 | AI cinematic director | Scaffolded | cinematic.types, useCinematicStore, CinematicPanel |

AI Enhancements (Cross-Phase)

These enhancements improve the AI layer across all phases:

| Enhancement | Status | Files |
| --- | --- | --- |
| Spatial preprocessor (circle/grid/stack/spiral/line/scatter) | Complete | spatialPreprocessor.ts |
| Cross-session memory (Firestore) | Complete | aiService.ts (loadLastSession/saveAISession) |
| Style fingerprint + memory | Complete | styleFingerprint.ts |
| Planner→executor decomposition (3D-GPT) | Complete | aiCompose.ts (PLANNING_SYSTEM_PROMPT) |
| Real focus_camera (smooth tween) | Complete | CameraController.tsx, useSceneStore |
| YSL v1.5 schema (userData, semanticRole, relationships) | Complete | scene.types.ts, aiSerializer.ts |
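
To illustrate what one of these layout passes computes, here is a hypothetical sketch of the circle layout; the real spatialPreprocessor.ts may expose different names and options:

```typescript
// Hypothetical sketch of a "circle" layout pass. Names and signature are
// illustrative, not the actual spatialPreprocessor.ts API.
interface Vec3 { x: number; y: number; z: number; }

/** Place `count` objects evenly on a circle of `radius` around `center`. */
function circleLayout(count: number, radius: number, center: Vec3 = { x: 0, y: 0, z: 0 }): Vec3[] {
  return Array.from({ length: count }, (_, i) => {
    const angle = (i / count) * Math.PI * 2; // even angular spacing
    return {
      x: center.x + Math.cos(angle) * radius,
      y: center.y, // keep objects on the ground plane
      z: center.z + Math.sin(angle) * radius,
    };
  });
}
```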

What Each Scaffolded Phase Needs to Go Live

Phase 12 — Video-to-3D

Build a Cloud Run orchestrator that chains: SAM2 segmentation → monocular depth estimation → object classification → auto-rigging → scene composition. The Cloud Function startVideoReconstruction creates the job doc; the orchestrator processes it and writes back resultGlbUrl.
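
The contract between the two halves is the job doc. A hedged sketch, assuming Firebase callable functions v2 and a videoJobs collection; only startVideoReconstruction and resultGlbUrl come from the roadmap, and every other field and collection name here is an assumption:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore, FieldValue } from "firebase-admin/firestore";
import { onCall } from "firebase-functions/v2/https";

initializeApp();

// Assumed job-doc shape shared by the callable and the orchestrator.
type VideoJob = {
  videoUrl: string;                                   // input video location (assumed field)
  status: "queued" | "processing" | "done" | "error"; // assumed state machine
  resultGlbUrl?: string;                              // written back by the orchestrator
  error?: string;
  createdAt: FieldValue;
};

export const startVideoReconstruction = onCall(async (req) => {
  const job: VideoJob = {
    videoUrl: req.data.videoUrl,
    status: "queued",
    createdAt: FieldValue.serverTimestamp(),
  };
  // The Cloud Run orchestrator picks this doc up, runs
  // SAM2 → depth → classification → rigging → composition,
  // then writes status: "done" plus resultGlbUrl onto the same doc.
  const ref = await getFirestore().collection("videoJobs").add(job);
  return { jobId: ref.id };
});
```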

Phase 13 — Digital Twins

  1. Deploy RTDB rules for /sensors/* (firebase deploy --only database)
  2. Build an MQTT→RTDB bridge (Cloud Run or Cloud Function) that receives sensor readings and writes to sensors/{id}/latest (a minimal bridge sketch follows this list)
  3. The useSensorSubscriptions hook + pushReading → updateObject pipeline is already wired end-to-end
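
A minimal bridge sketch for step 2, assuming the mqtt npm client and default Cloud Run credentials; the broker URL, topic scheme, and payload shape are assumptions, while the sensors/{id}/latest path comes from the step above:

```typescript
import mqtt from "mqtt";
import { initializeApp } from "firebase-admin/app";
import { getDatabase } from "firebase-admin/database";

// On Cloud Run, default credentials apply; you may also need to pass
// databaseURL explicitly depending on project config.
initializeApp();
const db = getDatabase();

const client = mqtt.connect(process.env.MQTT_BROKER_URL ?? "mqtt://localhost:1883");

client.on("connect", () => {
  // Assumed topic scheme: factory/{sensorId}/reading
  client.subscribe("factory/+/reading");
});

client.on("message", async (topic, payload) => {
  const sensorId = topic.split("/")[1];
  const reading = JSON.parse(payload.toString()); // e.g. { value: 42.1, ts: 1700000000 }
  await db.ref(`sensors/${sensorId}/latest`).set(reading);
});
```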

Phase 14 — Product Vibe-Coder

  1. Set NEXAR_API_KEY secret for the Octopart/Nexar GraphQL API
  2. Replace the mock data in octopartProxy.ts with real API calls (see the query sketch after this list)
  3. Wire enclosure geometry into SceneRenderer (convert Enclosure → box with mounting hole subtractions)
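
A hedged sketch of what step 2 could look like: a supSearchMpn query against Nexar's GraphQL endpoint. The endpoint and Bearer-token pattern follow Nexar's documented setup, but verify the exact query and field names against the live schema before wiring this into octopartProxy.ts:

```typescript
const NEXAR_URL = "https://api.nexar.com/graphql";

export async function searchPart(mpn: string, token: string) {
  // Query shape assumed from Nexar's supply search; confirm against the schema.
  const query = /* GraphQL */ `
    query ($mpn: String!) {
      supSearchMpn(q: $mpn, limit: 1) {
        results {
          part {
            mpn
            manufacturer { name }
            shortDescription
          }
        }
      }
    }
  `;
  const res = await fetch(NEXAR_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // token minted from the NEXAR_API_KEY credentials
    },
    body: JSON.stringify({ query, variables: { mpn } }),
  });
  if (!res.ok) throw new Error(`Nexar request failed: ${res.status}`);
  return res.json();
}
```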

Phase 15 — Factory Simulation

  1. pnpm --filter yugma-app add @react-three/rapier @dimforge/rapier3d-compat
  2. Replace PhysicsWorld.tsx stub body with <Physics paused={!running}>{children}</Physics>
  3. Add RigidBody wrappers per SceneObject based on usePhysicsStore.bodies (steps 2-3 are sketched below)
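
A sketch of steps 2-3, assuming usePhysicsStore is a zustand store exposing `running: boolean` and a `bodies` map keyed by object id; the real store shape and import path may differ:

```tsx
import type { ReactNode } from "react";
import { Physics, RigidBody } from "@react-three/rapier";
import { usePhysicsStore } from "../stores/usePhysicsStore"; // path assumed

// Step 2: the stub body becomes a real Rapier world, paused until the
// simulation is started from the panel.
export function PhysicsWorld({ children }: { children: ReactNode }) {
  const running = usePhysicsStore((s) => s.running);
  return <Physics paused={!running}>{children}</Physics>;
}

// Step 3: wrap each SceneObject that has a registered physics body.
export function PhysicsBody({ id, children }: { id: string; children: ReactNode }) {
  const body = usePhysicsStore((s) => s.bodies[id]);
  if (!body) return <>{children}</>; // no body registered → static scenery
  return (
    <RigidBody type={body.type ?? "dynamic"} colliders={body.colliders ?? "cuboid"}>
      {children}
    </RigidBody>
  );
}
```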

Phase 16 — AI Cinematic Director

  1. Build renderStoryboard Cloud Function: headless R3F render → ffmpeg → video upload
  2. Wire CINEMATIC_DIRECTOR_PROMPT into a new Cloud Function that takes a scene + brief and returns a Storyboard JSON
  3. Build a shot timeline player in CinematicPanel that drives setCameraTarget per shot (see the sketch below)
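
A hedged sketch of what the Storyboard JSON and timeline player could look like; the shot fields and the setCameraTarget signature are assumptions inferred from the roadmap, not the actual cinematic.types contract:

```typescript
// Assumed shot shape; the real Storyboard schema lives in cinematic.types.
interface Shot {
  position: [number, number, number]; // camera position (assumed field)
  target: [number, number, number];   // camera look-at point (assumed field)
  durationMs: number;
}

interface Storyboard {
  title: string;
  shots: Shot[];
}

/** Step through shots sequentially, driving the camera per shot. */
async function playStoryboard(
  board: Storyboard,
  setCameraTarget: (
    position: [number, number, number],
    target: [number, number, number],
  ) => void, // signature assumed; CameraController tweens to the new target
) {
  for (const shot of board.shots) {
    setCameraTarget(shot.position, shot.target);
    await new Promise((r) => setTimeout(r, shot.durationMs)); // hold for the shot's duration
  }
}
```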