ShadAR: LLM-driven Shader Generation to Transform Visual Perception in Augmented Reality
CHI EA 2026
Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems
TL;DR
What we did:
We built ShadAR, an Augmented Reality prototyping pipeline that enables real-time creation of visualizations through large language models and object detection.
What we found:
We found that ShadAR effectively interprets natural language commands to generate corresponding HLSL shader code, allowing users to flexibly alter their visual experiences in real time.
Takeaway:
We demonstrate that integrating large language models with object detection can power novel, on-demand Augmented Reality applications and broaden user interaction and engagement.
Abstract
Augmented Reality (AR) can visually transform a user’s world by rendering virtual content on top of reality. However, developing such AR apps and visualizations remains a complex process that requires an understanding of computer vision and programming skills. We present ShadAR, an AR prototyping pipeline that enables real-time creation of small AR visualizations and applications using large language models (LLMs) and object detection. ShadAR allows users to express their visual intent (e.g., "pixelate every person around me") via natural language, which is interpreted by an LLM to generate corresponding shader code. This shader is then compiled in real time and applied to the passthrough video stream.
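The pipeline described above can be sketched roughly as follows. This is an illustrative Python sketch, not ShadAR's actual implementation: the function names (`build_prompt`, `generate_shader_code`) and the stubbed `call_llm` are hypothetical, standing in for a real LLM API call and the runtime HLSL compilation step.

```python
def build_prompt(intent: str, detected_objects: list[str]) -> str:
    """Assemble an LLM prompt from the user's natural-language visual
    intent and the object classes reported by the object detector."""
    return (
        "Write an HLSL pixel shader implementing this effect on the "
        f"camera passthrough: {intent!r}. "
        f"Object-detection masks are available for: {', '.join(detected_objects)}. "
        "Return only compilable HLSL code."
    )

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call (assumption); a real
    # system would send `prompt` to a hosted model and return its reply.
    return "float4 main(float2 uv : TEXCOORD0) : SV_Target { return float4(0,0,0,1); }"

def generate_shader_code(intent: str, detected_objects: list[str]) -> str:
    """Turn a visual intent into HLSL source. In the described system,
    the returned shader would then be compiled at runtime and applied
    to the passthrough video stream."""
    return call_llm(build_prompt(intent, detected_objects))

shader = generate_shader_code("pixelate every person around me", ["person"])
```

The key design point the sketch captures is the separation of concerns: the LLM only produces shader source text, while the detector supplies per-object masks the shader can consume, so new effects require no hand-written computer-vision code.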