Enable Advanced Spatial Intelligence: Raw Mesh Access & Semantic Understanding
Summary:
Requesting an expansion of the WorldMesh API to allow programmatic access to raw geometric vertex/index buffers and high-level semantic scene classification.
Problem:
While the current WorldMesh and HitTest APIs are excellent for visual occlusion and simple placement, they remain "black boxes" for advanced spatial reasoning. Specifically, we cannot programmatically access the raw vertex data to generate custom pathfinding data (e.g., a NavMesh) or perform geometric analysis (e.g., identifying surfaces for "Cover Finding"). Furthermore, semantic classification is currently limited to individual hit-test results; we need a more holistic way to query the scene's composition.
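To make the use case concrete, here is a minimal sketch of the kind of geometric analysis that raw buffer access would unlock: filtering mesh triangles whose surface normals face upward, the first step of building a walkable NavMesh. All names and the buffer layout here are illustrative assumptions, not part of any existing Spectacles API.

```typescript
// Illustrative only: assumes vertex/index buffers of the shape this request
// asks the WorldMesh API to expose. Nothing here is an existing API.

type Vec3 = [number, number, number];

function sub(a: Vec3, b: Vec3): Vec3 {
  return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
}

function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v[0], v[1], v[2]);
  return len > 0 ? [v[0] / len, v[1] / len, v[2] / len] : [0, 0, 0];
}

// Return indices of triangles whose normal is within maxSlopeDeg of world-up,
// i.e. candidate walkable surfaces for NavMesh generation.
function walkableTriangles(
  vertices: Vec3[],
  indices: number[],
  maxSlopeDeg: number
): number[] {
  const cosLimit = Math.cos((maxSlopeDeg * Math.PI) / 180);
  const result: number[] = [];
  for (let t = 0; t < indices.length; t += 3) {
    const a = vertices[indices[t]];
    const b = vertices[indices[t + 1]];
    const c = vertices[indices[t + 2]];
    const n = normalize(cross(sub(b, a), sub(c, a)));
    if (n[1] >= cosLimit) result.push(t / 3); // n.y == dot(n, worldUp)
  }
  return result;
}

// A flat floor quad (two triangles, wound so normals point +Y) plus one
// vertical wall triangle.
const verts: Vec3[] = [
  [0, 0, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1], // floor
  [0, 0, 0], [0, 1, 0], [1, 0, 0],            // wall
];
const idx = [0, 2, 1, 0, 3, 2, 4, 5, 6];

console.log(walkableTriangles(verts, idx, 45)); // floor triangles only: [0, 1]
```

Today this analysis is impossible because the triangle data never leaves the WorldMesh black box; with buffer access it becomes a straightforward per-frame (or per-chunk) pass.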
Proposed Solution:
- Raw Mesh Data Access: Provide an API to extract vertex and index buffers from the WorldMesh resource. Ideally, this should be delivered as a stream or via a "chunked" event system to maintain performance as the mesh refines in real time.
- Expanded Semantic Scene Graph: Provide a structured API to query the scene's semantic layout (e.g., a list of all identified "Table," "Floor," or "Wall" objects within the camera frustum or a radius) rather than relying solely on individual point-in-space hit tests.
- Spatial Metadata: Expose geometric properties of these semantic objects (bounding volumes, orientation, and surface normals) to allow for sophisticated tactical AI, such as identifying valid cover positions or navigable space for agents.
Impact:
This would enable the next generation of "Spatial AI" experiences on Spectacles. Developers would be able to build intelligent agents that treat the user's home as a truly interactive, persistent, and "aware" environment, moving beyond simple overlays to meaningful mixed-reality game mechanics like tactical stealth and pathfinding-based navigation.