I am a fourth-year PhD candidate at the University of Michigan, Ann Arbor. My research interests lie at the intersection of Machine Learning and Human-Computer Interaction (HCI).
I develop methods for Human-AI interaction that allow AI systems to use information about their users and their context to improve performance. For example, a chatbot that efficiently selects queries to pose to the user to learn a concept, or an AI educational support tool that observes student performance to adapt its recommendations. I borrow from cognitive science to build approximate user models for tasks and combine them with Reinforcement Learning to improve Human-AI interaction. I bring strong computational and model-building skills from my prior industry experience to build systems for Human-AI interaction, and my training in HCI allows me to conduct real-world studies on improving it. For example, I recently built a Bayesian network from a massive dataset of 3M records to model personal information and used it to study the personalization-privacy trade-off.
I am an open-source contributor to Wikipedia, and I administered its Google Summer of Code internship program in 2016 and 2017. In particular, I contributed extensively to the mobile Wikipedia website to make it highly performant. For my contributions, I was nominated by the Wikimedia Foundation to attend the Google Summer of Code Mentor Summit in 2017 and invited to present my research at the monthly Wikimedia Research Showcase in 2021.
Personalized AI can significantly improve education by giving each student the opportunity to learn at their own pace and level of understanding. In this work, I leverage cognitive theories of concept retention to build an adaptive practice scheduler that helps students review course material. The algorithm uses RL on an underlying memory model to adaptively schedule concepts for practice, accounting for past recall of concepts while maximizing lookahead gains from learning future concepts.
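To make the scheduling idea concrete, here is a minimal sketch under assumed toy parameters: an exponential-forgetting memory model and a one-step greedy rule that stands in for the full RL policy (which additionally reasons about lookahead gains). All names and numbers are hypothetical, not the actual system.

```python
import math

def p_recall(elapsed, strength, decay=0.3):
    """Toy memory model: recall probability decays with time since last
    practice and decays more slowly for well-learned (high-strength) concepts."""
    return math.exp(-decay * elapsed / strength)

def schedule_next(concepts, now):
    """Greedy one-step scheduler: pick the concept whose expected recall
    gain (1 - current recall probability) is largest."""
    def gain(c):
        return 1.0 - p_recall(now - c["last_seen"], c["strength"])
    return max(concepts, key=gain)

# Hypothetical course concepts with last-practice times and strengths.
concepts = [
    {"name": "loops",     "last_seen": 0, "strength": 3.0},
    {"name": "recursion", "last_seen": 5, "strength": 1.0},
    {"name": "pointers",  "last_seen": 8, "strength": 2.0},
]
chosen = schedule_next(concepts, now=10)  # "recursion": weakest, most decayed
```

The greedy rule here is myopic; the RL formulation in the project replaces it with a policy that also values how practicing one concept helps learning of later ones.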
Understanding the privacy implications of human-agent interactions when agents learn about users for personalization. I built a Bayesian network from a massive dataset of user preferences and used it to drive interactions in a quantitative study measuring privacy violations and privacy concerns, so that more privacy-aware agents can be built.
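A tiny hand-specified sketch (not the 3M-record network from the study) shows the mechanism at play: a latent personal attribute with two observable preferences, where inference by enumeration reveals how much an agent's belief about the user sharpens after observing answers. The attribute names and probabilities are invented for illustration.

```python
# Prior over a latent attribute and CPTs for two observable preferences.
P_age = {"young": 0.5, "older": 0.5}
P_music_given_age = {"young": 0.8, "older": 0.3}  # P(likes_pop | age)
P_shop_given_age = {"young": 0.9, "older": 0.4}   # P(shops_online | age)

def posterior_age(likes_pop, shops_online):
    """P(age | evidence), computed by enumerating the joint distribution."""
    def joint(age):
        pm = P_music_given_age[age] if likes_pop else 1 - P_music_given_age[age]
        ps = P_shop_given_age[age] if shops_online else 1 - P_shop_given_age[age]
        return P_age[age] * pm * ps
    z = sum(joint(a) for a in P_age)
    return {a: joint(a) / z for a in P_age}

# Two innocuous-seeming answers move the belief from 50/50 to ~86/14.
belief = posterior_age(likes_pop=True, shops_online=True)
```

This is the privacy tension in miniature: each answered preference question is evidence that tightens the agent's posterior over personal attributes the user never disclosed.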
Research on conversational agents that learn to ask relevant, socially appropriate open-domain questions to get to know their users. The agent uses Reinforcement Learning to maximize both the information gained about the user and the user's willingness to answer the question.
Built a system to evaluate the spread of COVID-19 in indoor environments by simulating human behavior with Reinforcement Learning. Through realistic models of home environments and human behavior, we study how human movement spreads COVID-19 and identify strategies to mitigate that spread. Click here for a demo video.
Developed a semi-supervised labeling method to learn the intent of Wikipedia editors when they edit articles. We proposed the method to automatically infer editors' intent and use it to develop article quality models.
Adaptive simplification of domain-specific writing using LLMs. Developed a robust semantic evaluation method for LLM-based simplification of domain-specific writing and evaluated open-source and commercial LLMs on the task.
Intelligent meeting recap - Designed, built, and evaluated an LLM-based meeting recap experience and studied its effectiveness in the context of users' meetings. The insights informed the development of a robust meeting recap experience within Microsoft.
Usefulness and challenges of DevOps bots in the software development landscape - Studied bots on a platform used by 10,000 developers daily in their workflows. Identified challenges such as excessive recommendations and poorly grounded human-bot communication. Recommendations from the study helped improve bot engagement and overall software development quality within Microsoft.
Developed a reviewer recommendation platform within Microsoft. Evaluated the platform through a continuous quantitative user study and interviewed developers to understand system breakdowns and directions for improvement, such as incorporating developers' code-level semantic knowledge into review recommendations.
Built a system to identify high-level topics of Wikipedia articles using word embeddings and random forests. The models are currently deployed on Wikipedia, helping editors identify articles of interest, judge their relevance, and decide their value for Wikipedia.
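The feature side of this pipeline can be sketched in a few lines: average the word embeddings of an article's text into a single vector, then classify it. The deployed system trains random forests on such features; here a toy 2-D embedding table and a nearest-centroid classifier stand in for both the real embeddings and the forest so the sketch stays dependency-free. All vectors and topic labels are invented.

```python
# Hypothetical 2-D word vectors (real systems use pretrained embeddings).
EMB = {
    "goal": (1.0, 0.1), "match": (0.9, 0.2), "league": (0.8, 0.0),
    "senate": (0.1, 1.0), "election": (0.0, 0.9), "vote": (0.2, 0.8),
}

def article_vector(words):
    """Average the embeddings of known words into one article feature vector."""
    vecs = [EMB[w] for w in words if w in EMB]
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

# Toy topic centroids; a trained random forest replaces this in practice.
CENTROIDS = {"sports": (0.9, 0.1), "politics": (0.1, 0.9)}

def classify(words):
    """Assign the topic whose centroid is closest to the article vector."""
    v = article_vector(words)
    dist = lambda c: sum((v[i] - c[i]) ** 2 for i in range(2))
    return min(CENTROIDS, key=lambda t: dist(CENTROIDS[t]))

topic = classify(["goal", "match", "league"])  # lands near the sports centroid
```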
Worked on L2VPN routing solutions for Gigabit Ethernet datacenter networks.