Distributed Video Analytics
Visual content continues to grow rapidly, and visual analytics are increasingly part of computational pipelines supporting latency-sensitive and interactive applications, such as situational awareness, that dynamically aggregate content from sensors and end-user devices. The talk will highlight assistive visual systems for persons with visual impairments and a pollinator-tracking system as examples of such end uses. Using these application contexts, it will explore the design space of distributed visual sensors, with a focus on communication costs. Next, the talk will turn to video query processing, which is evolving from applications that query pre-extracted metadata using traditional query-processing techniques to applications that directly analyze geometry, semantics, and content in the video bitstream, and will showcase ongoing efforts in system-level design. Finally, it will present new opportunities arising from the shift from 2D to 3D sensors. The talk draws on work with collaborators from the SRC JUMP Visual Analytics team and the NSF Expeditions in Computing Visual Cortex on Silicon program.
Organizer: Mehdi Tahoori