These functions interact with the Render() function in several ways, primarily through shared data structures and event handling:

1. postOdometryEvent():
   - Feeds camera and pose data into the system that Render() will use
   - Updates the camera_ object that Render() uses for visualization
   - Processes image data and depth information
   - Sets up the AR/camera matrices that Render() will use for visualization
   - Interacts with Render() through the cameraMutex_ (see the first sketch below)

2. handleEvent():
   - Processes various types of events that affect what Render() will display:
     * SensorEvents (stored in the sensorEvents_ queue)
     * RtabmapEvents (stored in the rtabmapEvents_ queue)
     * PoseEvents (stored in the poseEvents_ queue)
     * PostRenderEvents (generated after Render() completes)
   - Uses multiple mutexes (sensorMutex_, rtabmapMutex_, poseMutex_) to synchronize with Render() (see the second sketch below)

The flow typically goes:

1. postOdometryEvent() receives new camera/pose data
2. This data is processed and stored in the camera_ object
3. Render() accesses this data through cameraMutex_
4. Render() processes any pending events in the various event queues
5. After rendering, Render() generates a PostRenderEvent
6. handleEvent() processes the PostRenderEvent and updates UI statistics

The key shared resources between these functions are:

- Event queues (sensorEvents_, rtabmapEvents_, poseEvents_)
- Mutex locks for thread safety
- The camera object (camera_)
- The scene object (main_scene_)
- Various state variables (status_, bufferedStatsData_)

This creates a pipeline where data flows from sensors through postOdometryEvent(), gets rendered by Render(), and then results in UI updates through handleEvent().
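To make the camera-side handoff (steps 1-3 of the flow) concrete, here is a minimal sketch of the pattern: the odometry thread publishes the latest pose under a mutex, and the render thread takes a snapshot under the same mutex before drawing. The `Pose` and `Camera` types and the method names are simplified stand-ins, not the actual implementation; only the member names `camera_` and `cameraMutex_` mirror the code discussed above.

```cpp
#include <cstdio>
#include <mutex>

struct Pose { float x{}, y{}, z{}; };  // hypothetical pose payload

// Stand-in for the real camera_ object; only the locking pattern matters here.
class Camera {
public:
    void setPose(const Pose& p) { pose_ = p; }
    Pose pose() const { return pose_; }
private:
    Pose pose_;
};

class App {
public:
    // Odometry thread: publish the newest camera pose (postOdometryEvent() role).
    void postOdometryEvent(const Pose& p) {
        std::lock_guard<std::mutex> lock(cameraMutex_);
        camera_.setPose(p);
    }

    // GL thread: take a consistent snapshot before drawing (Render() role).
    Pose poseForRender() {
        std::lock_guard<std::mutex> lock(cameraMutex_);
        return camera_.pose();
    }

private:
    std::mutex cameraMutex_;  // guards camera_ between the two threads
    Camera camera_;
};

int main() {
    App app;
    app.postOdometryEvent({1.f, 2.f, 3.f});
    Pose p = app.poseForRender();
    std::printf("render sees pose (%.1f, %.1f, %.1f)\n", p.x, p.y, p.z);
    return 0;
}
```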
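The event-queue side (steps 4-6) follows a classic producer/consumer pattern: handleEvent() appends to a mutex-guarded queue, and Render() swaps the queue out under the lock so it can process events without blocking the producer, then signals a PostRenderEvent. This second sketch carries the same caveat: the event payload, threading setup, and postRenderEvent() body are hypothetical; only the names `poseEvents_`, `poseMutex_`, and the PostRenderEvent notion come from the description above.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>

struct PoseEvent { float x, y, z; };  // hypothetical event payload

class App {
public:
    // Producer side: called from the event thread (handleEvent() role).
    void handleEvent(const PoseEvent& e) {
        std::lock_guard<std::mutex> lock(poseMutex_);
        poseEvents_.push_back(e);
    }

    // Consumer side: called from the GL thread (Render() role).
    void render() {
        std::deque<PoseEvent> pending;
        {
            // Hold the mutex only long enough to swap the queue out,
            // so the event thread is never blocked during rendering.
            std::lock_guard<std::mutex> lock(poseMutex_);
            pending.swap(poseEvents_);
        }
        for (const PoseEvent& e : pending) {
            std::printf("rendering pose (%.1f, %.1f, %.1f)\n", e.x, e.y, e.z);
        }
        postRenderEvent();  // notify listeners that a frame finished
    }

private:
    void postRenderEvent() { /* would update UI statistics here */ }

    std::mutex poseMutex_;           // guards poseEvents_
    std::deque<PoseEvent> poseEvents_;
};

int main() {
    App app;
    std::atomic<bool> done{false};

    // Event thread feeds poses while the "render loop" drains them.
    std::thread producer([&] {
        for (int i = 0; i < 10; ++i) {
            app.handleEvent({float(i), 0.f, 0.f});
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
        done = true;
    });

    while (!done) {
        app.render();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
    producer.join();
    app.render();  // drain anything left after the producer stops
    return 0;
}
```

Swapping the whole queue out, rather than popping events one at a time under the lock, keeps the critical section tiny, which matters when the producer and the render loop run at different rates.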