I am a fourth-year PhD candidate at the University of Michigan, Ann Arbor. My research interests lie at the intersection of Machine Learning and Human-Computer Interaction (HCI).
I develop methods for human-AI interaction that allow AI systems to use information about their users and context to improve their performance. For example, a chatbot that efficiently selects queries to pose to the user in order to learn a concept or personalize the interaction. I am trained in HCI methods for understanding technological interactions, and I bring strong computational and model-building skills from my prior industry experience. For example, I recently built a Bayesian network from a massive dataset of 5M records and am applying it in my research to build more efficient agents.
I am an open source contributor to Wikipedia, and I administered its Google Summer of Code internship program in 2016 and 2017. In particular, I made major contributions to the mobile Wikipedia website to improve its performance. For this work, I was nominated by the Wikimedia Foundation to attend the Google Summer of Code Mentor Summit in 2017, and I was invited to present my research at the monthly Wikimedia Research Showcase in 2021.
Understanding the privacy implications of human-agent interactions when agents learn about users for personalization. Built a Bayesian network from a massive dataset of user preferences and used it to drive interactions in a quantitative study measuring privacy violations and privacy concerns, toward building more privacy-aware agents.
Research on conversational agents that learn to ask open-domain questions that are relevant and socially appropriate in order to get to know their users. The agent uses reinforcement learning to maximize both the information gained about the user and the user's willingness to answer the question.
Can digital personal assistants violate privacy while getting to know their users? Studying the privacy implications of user-chatbot interactions in a quantitative user study by examining users' mental models of conversational digital personal assistants and their privacy concerns with them.
Built a system to evaluate the spread of COVID-19 in indoor environments by simulating human behavior with reinforcement learning. Using realistic models of home environments and human behavior, we study how human movement spreads COVID-19 in order to identify possible mitigation strategies.
Developed a semi-supervised labeling method to learn the intent of Wikipedia editors when they edit articles. We proposed the method to automatically infer editors' intent and use it to develop article-quality models.
Intelligent meeting recap - Designed, built, and evaluated an LLM-based meeting-recap experience and studied its effectiveness in the context of users' meetings. Insights from the study informed the development of a robust meeting recap experience within Microsoft.
Usefulness and challenges of DevOps bots in the software development landscape - Studied bots on a platform used daily by 10,000 developers in their workflows. Identified challenges such as excessive recommendations and poorly grounded human-bot communication. Recommendations from the study helped improve bot engagement and overall software development quality within Microsoft.
Developed a reviewer-recommendation platform within Microsoft. Evaluated the platform through a continual quantitative user study and interviewed developers to understand breakdowns of the system and potential directions for improvement, such as incorporating developers' code-level semantic knowledge into review recommendations.
Built a system to identify high-level topics of Wikipedia articles using word embeddings and random forests. The models are currently deployed on Wikipedia, helping editors identify articles of interest, judge their relevance, and decide their value for Wikipedia.
Worked on L2VPN routing solutions for Gigabit Ethernet datacenter networks.
asumit at umich dot edu
Computer Science and Engineering
University of Michigan, Ann Arbor