I am a fourth-year PhD candidate at the University of Michigan, Ann Arbor. My research interests are at the intersection of Machine Learning and Human-Computer Interaction (HCI).
I develop adaptive AI systems that enable people to reason under risk and uncertainty in complex decision-making scenarios. For example, I enable decision-makers in education to conduct accurate assessments of students for admissions and course waivers, and I design adaptive meeting recaps that consider meeting participants' goals and expertise to provide them with the most relevant information. I borrow from cognitive science to build approximate user models for tasks, and use them with Reinforcement Learning to improve Human-AI interaction. I bring strong computational and model-building skills from my prior industry experience to build systems for Human-AI interaction, and my training in HCI allows me to conduct real-world studies on improving Human-AI interaction. For example, I recently built a Bayesian network from a massive dataset of 3M records to model personal information and used it to study the personalization-privacy trade-off.
I am an open source contributor to Wikipedia, and I also administered their Google Summer of Code internship program in 2016 and 2017. Specifically, I made major contributions to the mobile Wikipedia website to improve its performance. For my service, I was nominated by the Wikimedia Foundation to attend the Google Summer of Code Mentor Summit in 2017, and I was invited to present my research at the monthly Wikimedia research showcase in 2021.
asumit at umich dot edu
Computer Science and Engineering
University of Michigan, Ann Arbor
Ann Arbor
United States
Sumit Asthana, Sagi Hilleli, Pengcheng He, Aaron Halfaker
Snehal Prabhudesai, Sumit Asthana, Leyao Yang, Xun Huan, Q. Vera Liao, Nikola Banovic
Sumit Asthana, Sabrina Tobar Thommel, Aaron Halfaker, Nikola Banovic
Sumit Asthana, Rahul Kumar, Ranjita Bhagwan, Christian Bird, Chetan Bansal, Chandra Maddila, Sonu Mehta, B. Ashok
Sonu Mehta, Ranjita Bhagwan, Rahul Kumar, Chetan Bansal, Chandra Maddila, B. Ashok, Sumit Asthana, Christian Bird, Aditya Kumar
Sumit Asthana and Aaron Halfaker.
Adaptive simplification of domain-specific writing using LLMs. Developed a robust semantic evaluation method for assessing LLMs on simplifying domain-specific writing, and benchmarked open-source and commercial LLMs on the task.
Intelligent meeting recap - Designed, built, and evaluated an LLM-based meeting recap experience and studied its effectiveness in the context of users' meetings. Insights informed the development of a robust meeting recap experience within Microsoft.
Usefulness and challenges of devops bots in the software development landscape - Studied bots on a bot platform used by 10,000 developers daily in their workflows. Identified bot challenges such as recommendation overload and poorly grounded human-bot communication. Recommendations from the study helped improve bot engagement and overall software development quality within Microsoft.
Developed a reviewer recommendation platform within Microsoft. Evaluated the platform through a continual quantitative user study, and interviewed developers to understand breakdowns of the system and potential directions for improvement, such as modeling developers' code-level semantic knowledge for review recommendations.
Built a system to identify high-level topics of Wikipedia articles using word embeddings and random forests. The models are currently deployed on Wikipedia, helping Wikipedia editors identify articles of interest, judge their relevance, and decide their value for Wikipedia.
Worked on L2VPN routing solutions for Gigabit Ethernet datacenter networks.
Personalized AI can improve education significantly by giving each student the opportunity to learn at their own pace and level of understanding. In this work, I am leveraging cognitive theories of concept retention to build an adaptive course practice scheduler that allows students to practice course materials. The algorithm uses RL over the underlying memory model to adaptively schedule concepts for practice, accounting for past recall of concepts while maximizing lookahead gains from learning future concepts.
Understanding the privacy implications of human-agent interactions when agents try to learn about users for personalization. Built a Bayesian network from a massive dataset of user preferences and used the network to drive interactions in a quantitative study measuring privacy violations and privacy concerns, so that more privacy-aware agents can be built.
Built a system to evaluate the spread of COVID-19 in indoor environments by simulating human behavior using Reinforcement Learning. Through realistic models of home environments and human behavior, we study the spread of COVID-19 due to human movement and identify possible strategies to mitigate it. Click here for a demo video.