Our Research
Our research strives to discover the fundamental scientific principles and engineering methodologies behind sensor-rich intelligent autonomous systems that achieve high efficiency, reliability, and security. Methodologically, we apply hardware/algorithm co-design and co-optimization across the computing system stack to three exciting research domains:
Embodied AI: It has long been hypothesized that intelligence emerges from an agent's interaction with its environment through sensorimotor activity. The defining characteristic of embodied AI is therefore its ability to interact with and learn from a physical environment. Unlike conventional learning tasks built on large volumes of static images (e.g. ImageNet), embodied agents must dynamically perceive the real physical world through a multitude of egocentric perceptual inputs gathered by multi-modal sensors (e.g. cameras, LiDARs, and accelerometers). Intelligent sensors are therefore the key to transforming raw analog signals from the physical world into semantically meaningful digital embeddings that can be consumed by downstream AI algorithms, whether for computer vision, localization, or visual navigation. Our research examines how AI computing capabilities can be strategically embedded inside or near the sensors so that high-level learned features are extracted directly at the signal source instead of transmitting high-volume raw sensor signals. For example, our LeCA work learns a low-dimensional representation of the raw image, achieving 4-8x compression and a 6.3x energy saving with negligible accuracy loss.
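The idea of extracting features at the signal source can be illustrated with a minimal sketch. The encoder below is purely hypothetical (a random linear projection standing in for LeCA's actual learned architecture, which is not detailed here); it shows only the structural point that a compact embedding, rather than the full raw patch, leaves the sensor.

```python
import numpy as np

# Hypothetical in-sensor compression sketch (NOT the actual LeCA design):
# a linear encoder maps each raw image patch to a low-dimensional
# embedding, so only the embedding is sent downstream.
rng = np.random.default_rng(0)

PATCH_DIM = 1024   # e.g. a flattened 32x32 sensor patch
EMBED_DIM = 128    # compact representation transmitted off-sensor

# In practice the encoder weights would be learned end-to-end with the
# downstream task; a random projection stands in for them here.
W_enc = rng.standard_normal((EMBED_DIM, PATCH_DIM)) / np.sqrt(PATCH_DIM)

def encode(patch: np.ndarray) -> np.ndarray:
    """Compress a raw patch into a compact embedding at the sensor."""
    return W_enc @ patch

patch = rng.standard_normal(PATCH_DIM)
z = encode(patch)

compression = PATCH_DIM / EMBED_DIM
print(f"embedding size: {z.shape[0]}, compression: {compression:.0f}x")
```

With these illustrative dimensions, the sensor transmits 8x less data per patch, which is the mechanism behind the bandwidth and energy savings described above.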
Ambient AI: The grand vision of ambient AI is to enable intelligent physical spaces that are responsive to human presence and sensitive to changes in context and situation. By embedding sensors and electronics into everyday objects and their surroundings, and combining them with today's advanced AI technology, ambient AI systems can provide a pervasive computing environment and create personalized, context-aware experiences. Examples of ambient AI include vision recognition systems, gesture- and voice-based interfaces, and action/activity detection technologies. A key challenge in realizing the vast potential of ambient AI is designing efficient, always-on edge AI devices that can process multisensory and multimodal data (e.g. microphones, cameras, and ultrasound). In our lab, we explore the joint design space of novel devices (optical and electronic), circuits, architecture, and algorithms to build passive, contactless computational sensors that operate at extremely low power (or are entirely self-powered) and with enhanced privacy protection. We create smart edge systems that can fuse multimodal sensor inputs, learn multiple tasks, and continually adapt to change, aiming to transform ambient spaces in healthcare, industry and agriculture, and critical infrastructure.
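Multimodal fusion on an always-on edge node can be sketched as follows. All names, dimensions, and the late-fusion scheme here are illustrative assumptions, not a description of a specific system we have built: each modality gets a tiny encoder, and the resulting embeddings are concatenated for a shared classification head.

```python
import numpy as np

# Hypothetical late-fusion sketch for an always-on edge node (dimensions
# and weights are illustrative): per-modality encoders produce compact
# embeddings that are fused and fed to a shared activity classifier.
rng = np.random.default_rng(1)

W_a = rng.standard_normal((16, 64)) * 0.1    # microphone-side encoder
W_v = rng.standard_normal((16, 256)) * 0.1   # camera-side encoder
W_head = rng.standard_normal((4, 32)) * 0.1  # shared head, 4 classes

def encode_audio(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a tiny learned audio encoder on the sensor.
    return np.tanh(W_a @ frame)

def encode_vision(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a tiny learned vision encoder on the sensor.
    return np.tanh(W_v @ frame)

audio = rng.standard_normal(64)     # one audio feature frame
vision = rng.standard_normal(256)   # one low-resolution image frame

# Late fusion: concatenate per-modality embeddings, then classify.
fused = np.concatenate([encode_audio(audio), encode_vision(vision)])
scores = W_head @ fused
print("predicted activity class:", int(np.argmax(scores)))
```

One design point this sketch makes concrete: because each encoder runs independently, a modality can be power-gated (e.g. the camera off until the microphone detects activity), which matters for the extremely-low-power operation discussed above.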
Companion AI: AI is revolutionizing our economy. This is especially true in the age of generative AI and foundation models, where increasingly large-scale AI systems are trained to ingest petabytes of data, often in the form of collective human knowledge. Research has found that the larger impact of AI technology comes when humans and machines work together, enhancing each other's strengths. In this new line of research, we are particularly interested in how AI and machine learning can complement and augment the capabilities of human designers in the domain of electronic design automation (EDA). Rather than replacing human designers, we envision an AI companion that serves as a productivity booster and force multiplier for IC and electronic system development, much like what GitHub Copilot has done for pair programming. In this research endeavor, we take a designer-driven perspective, collaboratively integrating AI capabilities into the design process and tools to achieve faster time-to-verifiable-design and lower cost-to-reliable-solution. Techniques that leverage infusion of domain knowledge for parameter optimization, hierarchical graph representation for topology synthesis, and hybrid Bayesian and reinforcement learning for sample-efficient optimization are just a few examples of Companion AI we have developed to free human designers from routine tasks and supercharge their innovation and performance on higher-level missions.
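The sample-efficiency idea behind this line of work can be sketched with a textbook Bayesian-optimization loop. This is a generic illustration, not our hybrid Bayesian/RL method: a Gaussian-process surrogate models an expensive "simulation" (here a stand-in quadratic objective over one hypothetical circuit parameter), and an upper-confidence-bound rule chooses the next point to simulate, so good parameters are found in a handful of evaluations.

```python
import numpy as np

# Generic Bayesian-optimization sketch (illustrative, not our actual
# hybrid Bayesian/RL method): a GP surrogate + UCB acquisition picks
# the next expensive "simulation" to run.
rng = np.random.default_rng(2)

def objective(x: float) -> float:
    # Stand-in for an expensive circuit simulation; true optimum at 0.3.
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel with unit signal variance.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Zero-mean GP posterior mean and variance at query points Xq.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0),
                  1e-12, None)
    return mu, var

Xq = np.linspace(0.0, 1.0, 201)       # candidate parameter grid
X = list(rng.uniform(0.0, 1.0, 3))    # a few initial simulations
y = [objective(x) for x in X]

for _ in range(10):                   # sample-efficient search loop
    mu, var = gp_posterior(np.array(X), np.array(y), Xq)
    ucb = mu + 2.0 * np.sqrt(var)     # favor uncertain regions too
    x_next = float(Xq[int(np.argmax(ucb))])
    if any(abs(x_next - x) < 1e-9 for x in X):
        break                         # surrogate has converged
    X.append(x_next)
    y.append(objective(x_next))

best = X[int(np.argmax(y))]
print(f"best parameter found: {best:.2f} after {len(X)} simulations")
```

The loop typically locates the optimum with only a dozen or so objective calls, which is the property that matters when each "call" is an hours-long circuit simulation.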