Hima Lakkaraju



Contact

hlakkaraju@hbs.edu
hlakkaraju@seas.harvard.edu

Morgan Hall 491
Science and Engineering Complex 6.220

@hima_lakkaraju
lvhimabindu


I am an Assistant Professor at Harvard University with appointments in the Business School and the Department of Computer Science. I am also a Senior Staff Research Scientist (part-time) at Google. I earned my PhD in Computer Science from Stanford University and have previously held research and leadership roles at Microsoft Research, IBM Research, Adobe, and Fiddler AI.

My research interests lie within the broad area of the algorithmic foundations and societal implications of safe, trustworthy, and responsible AI. Specifically, I develop machine learning and optimization techniques, design evaluation frameworks, and conduct human-subject studies to improve the trustworthiness of predictive and generative models, including large language models (LLMs). My work spans themes such as safety, interpretability, fairness, privacy, reasoning, AI-assisted decision making, and human–AI collaboration.

My work addresses fundamental questions at the intersection of human and algorithmic decision-making, such as:

  1. How can we build interpretable and accurate models to assist and augment human decision-making?
  2. How do we identify and correct underlying biases in both human decisions and model predictions?
  3. How can we ensure that models and their interpretations are robust to adversarial and privacy attacks?
  4. How do we train and evaluate models in the presence of missing counterfactuals and unmeasured confounding?
  5. How do humans engage with AI models, and what factors shape effective human–AI collaboration?

These questions have far-reaching implications in high-stakes domains such as health care, policy, law, and business.

I lead the AI4LIFE research group at Harvard, and I recently co-founded the Trustworthy ML Initiative (TrustML) to help lower entry barriers into trustworthy ML and bring together researchers and practitioners working in the field. My research is generously supported by the NSF, the Sloan Foundation, Schmidt Sciences, Google, OpenAI, Amazon, JP Morgan, Adobe, Bayer, the Harvard Data Science Initiative, and the D^3 Institute at Harvard. My work has been featured in major media outlets including The New York Times, TIME magazine, Fortune, Forbes, MIT Technology Review, and Harvard Business Review.

Please check out my CV for more details about me and my research.

NOTE: I am looking for motivated graduate and undergraduate students and postdocs who are broadly interested in trustworthy machine learning and large pre-trained models. If you are excited about this line of research and would like to work with me, please read this before contacting me.


  • Who Gets Credit or Blame? Attributing Accountability in Modern AI Systems
    Shichang Zhang, Hongzhe Du, Karim Saraipour, Jiaqi Ma, Himabindu Lakkaraju
    PDF
  • Interpretability Illusions with Sparse Autoencoders: Evaluating Robustness of Concept Representations
    Aaron J. Li, Suraj Srinivas, Usha Bhalla, Himabindu Lakkaraju
    PDF
  • Towards Unified Attribution in Explainable AI, Data-Centric AI, and Mechanistic Interpretability
    Shichang Zhang, Tessa Han, Usha Bhalla, Himabindu Lakkaraju
    PDF
  • Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers
    Alex Oesterling, Usha Bhalla, Suresh Venkatasubramanian, Himabindu Lakkaraju
    PDF
  • Generalized Group Data Attribution
    Dan Ley, Suraj Srinivas, Shichang Zhang, Gili Rusak, Himabindu Lakkaraju
    PDF
  • Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models
    Martin Pawelczyk, Lillian Sun, Zhenting Qi, Aounon Kumar, Himabindu Lakkaraju
    PDF
  • On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
    Sree Harsha Tanneru, Dan Ley, Chirag Agarwal, Himabindu Lakkaraju
    Research Highlight: OpenAI o1 System Card
    PDF
  • Manipulating Large Language Models to Increase Product and Content Visibility
    Aounon Kumar, Himabindu Lakkaraju
    Featured in The New York Times | The Guardian | Communications of the ACM | Towards Data Science
    PDF
  • Advancing science- and evidence-based AI policy
    Rishi Bommasani, Sanjeev Arora, Jennifer Chayes, Yejin Choi, Mariano-Florentino Cuéllar, Li Fei-Fei, Daniel E. Ho, Dan Jurafsky, Sanmi Koyejo, Himabindu Lakkaraju, Arvind Narayanan, Alondra Nelson, Emma Pierson, Joelle Pineau, Scott Singer, Gaël Varoquaux, Suresh Venkatasubramanian, Ion Stoica, Percy Liang, and Dawn Song
    Science, 2025.
    PDF
  • Detecting LLM-Generated Peer Reviews
    Vishisht Rao, Aounon Kumar, Himabindu Lakkaraju, Nihar B. Shah
    PLOS ONE, 2025.
    PDF
  • EvoLM: In Search of Lost Language Model Training Dynamics
    Zhenting Qi, Fan Nie, Alexandre Alahi, James Zou, Himabindu Lakkaraju, Yilun Du, Eric Xing, Sham Kakade, Hanlin Zhang
    Advances in Neural Information Processing Systems (NeurIPS), 2025.
    Oral Presentation
    PDF
  • All Proxy Rewards are Bad, Can We Hedge to Make Some Useful?
    Hadi Khalaf, Claudio Mayrink Verdun, Alex Oesterling, Himabindu Lakkaraju, Flavio Calmon
    Advances in Neural Information Processing Systems (NeurIPS), 2025.
    Spotlight Presentation
    PDF
  • Measuring the Faithfulness of Thinking Drafts in Large Reasoning Models
    Zidi Xiong, Shan Chen, Zhenting Qi, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2025.
    PDF
  • How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence
    Hongzhe Du, Weikai Li, Min Cai, Karim Saraipour, Zimin Zhang, Himabindu Lakkaraju, Yizhou Sun, Shichang Zhang
    Conference on Language Modeling (COLM), 2025.
    Outstanding Paper Award, New England NLP Symposium 2025
    PDF
  • On the Impact of Fine-Tuning on Chain-of-Thought Reasoning
    Elita Lobo, Chirag Agarwal, Himabindu Lakkaraju
    The North American Chapter of the Association for Computational Linguistics (NAACL), 2025.
    PDF
  • Quantifying Generalization Complexity for Large Language Models
    Zhenting Qi, Hongyin Luo, Xuliang Huang, Zhuokai Zhao, Yibo Jiang, Xiangjun Fan, Himabindu Lakkaraju, James Glass
    International Conference on Learning Representations (ICLR), 2025.
    PDF
  • More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness
    Aaron Jiaxun Li, Satyapriya Krishna, Himabindu Lakkaraju
    International Conference on Learning Representations (ICLR), 2025.
    Oral Presentation [Top 1.8%]
    PDF
  • Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems
    Zhenting Qi, Hanlin Zhang, Eric P. Xing, Sham M. Kakade, Himabindu Lakkaraju
    International Conference on Learning Representations (ICLR), 2025.
    PDF
  • Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
    Usha Bhalla, Alex Oesterling, Suraj Srinivas, Flavio Calmon, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2024.
    PDF
  • MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models
    Tessa Han, Aounon Kumar, Chirag Agarwal, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2024.
    PDF
  • In-context Unlearning: Language Models as Few Shot Unlearners
    Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju
    International Conference on Machine Learning (ICML), 2024.
    PDF
  • Understanding the Effects of Iterative Prompting on Truthfulness
    Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
    International Conference on Machine Learning (ICML), 2024.
    PDF
  • Characterizing Data Point Vulnerability as Average-Case Robustness
    Tessa Han, Suraj Srinivas, Himabindu Lakkaraju
    International Conference on Uncertainty in Artificial Intelligence (UAI), 2024.
    PDF
  • A Study on the Calibration of In-context Learning
    Hanlin Zhang, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Himabindu Lakkaraju, Sham Kakade
    The North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    PDF
  • Investigating the Fairness of Large Language Models for Predictions on Tabular Data
    Yanchen Liu, Srishti Gautam, Jiaqi Ma, Himabindu Lakkaraju
    The North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    PDF
  • Quantifying Uncertainty in Natural Language Explanations of Large Language Models
    Sree Harsha Tanneru, Chirag Agarwal, Himabindu Lakkaraju
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
    Spotlight Presentation, NeurIPS Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models, 2023.
    PDF
  • Fair Machine Unlearning: Data Removal while Mitigating Disparities
    Alex Oesterling, Jiaqi Ma, Flavio Calmon, Himabindu Lakkaraju
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
    PDF
  • The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
    Satyapriya Krishna*, Tessa Han*, Alex Gu, Javin Pombra, Shahin Jabbari, Steven Wu, Himabindu Lakkaraju
    Transactions on Machine Learning Research (TMLR), 2024.
    Featured in Fortune Magazine
    PDF
  • Certifying LLM Safety Against Adversarial Prompting
    Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Li, Soheil Feizi, Himabindu Lakkaraju
    Conference on Language Modeling (COLM), 2024.
    Featured in Science News
    PDF
  • TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
    Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju*, Sameer Singh*
    Nature Machine Intelligence, 2023.
    Outstanding Paper Award Honorable Mention, NeurIPS Workshop on Trustworthy and Socially Responsible ML, 2022.
    PDF
  • Evaluating Explainability for Graph Neural Networks
    Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, Marinka Zitnik
    Nature Scientific Data, 2023.
    PDF
  • Post Hoc Explanations of Language Models Can Improve Language Models
    Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2023.
    PDF
  • Verifiable Feature Attributions: A Bridge between Post Hoc Explainability and Inherent Interpretability
    Usha Bhalla*, Suraj Srinivas*, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2023.
    PDF
  • Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
    Suraj Srinivas*, Sebastian Bordt*, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2023.
    Spotlight Presentation [Top 3%]
    PDF
  • M4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities, and Models
    Xuhong Li, Mengnan Du, Jiamin Chen, Yekun Chai, Himabindu Lakkaraju, Haoyi Xiong
    Advances in Neural Information Processing Systems (NeurIPS), 2023.
    PDF
  • When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making
    Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage, Himabindu Lakkaraju
    Transactions on Machine Learning Research (TMLR), 2023.
    PDF
  • Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten
    Satyapriya Krishna*, Jiaqi Ma*, Himabindu Lakkaraju
    International Conference on Machine Learning (ICML), 2023.
    PDF
  • On the Impact of Actionable Explanations on Social Segregation
    Ruijiang Gao, Himabindu Lakkaraju
    International Conference on Machine Learning (ICML), 2023.
    PDF
  • On Minimizing the Impact of Dataset Shifts on Actionable Explanations
    Anna Meyer*, Dan Ley*, Suraj Srinivas, Himabindu Lakkaraju
    International Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
    Oral Presentation [Top 5%]
    PDF
  • Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
    Martin Pawelczyk, Teresa Datta, Johannes van-den-Heuvel, Gjergji Kasneci, Himabindu Lakkaraju
    International Conference on Learning Representations (ICLR), 2023.
    PDF
  • On the Privacy Risks of Algorithmic Recourse
    Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
    PDF
  • Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
    Tessa Han, Suraj Srinivas, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2022.
    Best Paper Award, ICML Workshop on Interpretable Machine Learning in Healthcare, 2022.
    PDF
  • Flatten the Curve: Efficiently Training Low-Curvature Neural Networks
    Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, Francois Fleuret
    Advances in Neural Information Processing Systems (NeurIPS), 2022.
    PDF
  • OpenXAI: Towards a Transparent Evaluation of Model Explanations
    Chirag Agarwal, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2022.
    PDF
  • Data Poisoning Attacks on Off-Policy Evaluation Methods
    Elita Lobo, Harvineet Singh, Marek Petrik, Cynthia Rudin, Himabindu Lakkaraju
    International Conference on Uncertainty in Artificial Intelligence (UAI), 2022.
    Oral Presentation [Top 5%]
    PDF
  • Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
    Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, Himabindu Lakkaraju
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
    PDF
  • Probing GNN Explainers: A Rigorous Theoretical and Empirical Analysis of GNN Explanation Methods
    Chirag Agarwal, Marinka Zitnik, Himabindu Lakkaraju
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
    PDF
  • Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
    Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen Bach, Himabindu Lakkaraju
    AAAI/ACM Conference on AI, Society, and Ethics (AIES), 2022.
    PDF
  • Towards Robust Off-Policy Evaluation via Human Inputs
    Harvineet Singh, Shalmali Joshi, Finale Doshi-Velez, Himabindu Lakkaraju
    AAAI/ACM Conference on AI, Society, and Ethics (AIES), 2022.
    PDF
  • A Human-Centric Take on Model Monitoring
    Murtuza N Shergadwala, Himabindu Lakkaraju, Krishnaram Kenthapadi
    AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2022.
    PDF
  • Towards Robust and Reliable Algorithmic Recourse
    Sohini Upadhyay*, Shalmali Joshi*, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2021.
    Best Paper Runner Up, ICML Workshop on Algorithmic Recourse, 2021.
    PDF
  • Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
    Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2021.
    PDF
  • Counterfactual Explanations Can Be Manipulated
    Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh
    Advances in Neural Information Processing Systems (NeurIPS), 2021.
    PDF
  • Learning Models for Algorithmic Recourse
    Alexis Ross, Himabindu Lakkaraju, Osbert Bastani
    Advances in Neural Information Processing Systems (NeurIPS), 2021.
    PDF
  • Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
    Sushant Agarwal, Shahin Jabbari, Chirag Agarwal*, Sohini Upadhyay*, Steven Wu, Himabindu Lakkaraju
    International Conference on Machine Learning (ICML), 2021.
    Spotlight Presentation
    Shorter version presented at Foundations of Responsible Computing (FORC), 2022.
    PDF
  • Towards a Unified Framework for Fair and Stable Graph Representation Learning
    Chirag Agarwal, Himabindu Lakkaraju, Marinka Zitnik
    International Conference on Uncertainty in Artificial Intelligence (UAI), 2021.
    Oral Presentation [Top 5%]
    PDF
  • Fair influence maximization: A welfare optimization approach
    Aida Rahmattalabi, Shahin Jabbari, Himabindu Lakkaraju, Phebe Vayanos, Eric Rice, Milind Tambe
    AAAI International Conference on Artificial Intelligence (AAAI), 2021.
    PDF
  • Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring
    Tom Suhr, Sophie Hilgard, Himabindu Lakkaraju
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021.
    PDF
  • Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
    Kaivalya Rawal, Himabindu Lakkaraju
    Advances in Neural Information Processing Systems (NeurIPS), 2020.
    PDF
  • Incorporating Interpretable Output Constraints in Bayesian Neural Networks
    Wanqian Yang, Lars Lorch, Moritz Gaule, Himabindu Lakkaraju, Finale Doshi-Velez
    Advances in Neural Information Processing Systems (NeurIPS), 2020.
    Spotlight Presentation [Top 3%]
    PDF
  • Robust and Stable Black Box Explanations
    Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
    International Conference on Machine Learning (ICML), 2020.
    PDF
  • Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
    Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2020.
    Oral Presentation [Top 16.6%]
    Best Paper (Non-Archival), AAAI Workshop on Safe AI, 2020
    Featured in deeplearning.ai | Harvard Business Review
    PDF
  • "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
    Himabindu Lakkaraju, Osbert Bastani
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2020.
    Oral Presentation [Top 16.6%]
    PDF
  • Faithful and Customizable Explanations of Black Box Models
    Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Jure Leskovec
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2019.
    Oral Presentation [Top 10%]
    PDF
  • Human Decisions and Machine Predictions
    Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan
    Quarterly Journal of Economics (QJE), 2018.
    Featured in The New York Times, MIT Technology Review, and Harvard Business Review
    PDF
  • The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables
    Himabindu Lakkaraju, Jon Kleinberg, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2017.
    Oral Presentation [Top 8.5%]
    PDF
  • Learning Cost-Effective and Interpretable Treatment Regimes
    Himabindu Lakkaraju, Cynthia Rudin
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
    INFORMS Best Data Mining Paper Award
    PDF
  • Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration
    Himabindu Lakkaraju, Ece Kamar, Rich Caruana, Eric Horvitz
    AAAI Conference on Artificial Intelligence (AAAI), 2017.
    Featured in Bloomberg Technology
    PDF
  • Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making
    Himabindu Lakkaraju, Jure Leskovec
    Advances in Neural Information Processing Systems (NIPS), 2016.
    PDF
  • Interpretable Decision Sets: A Joint Framework for Description and Prediction
    Himabindu Lakkaraju, Stephen H. Bach, Jure Leskovec
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2016.
    PDF
  • Mining Big Data to Extract Patterns and Predict Real-Life Outcomes
    Michal Kosinski, Yilun Wang, Himabindu Lakkaraju, Jure Leskovec
    Psychological Methods, 2016.
    PDF
  • A Machine Learning Framework to Identify Students at Risk of Adverse Academic Outcomes
    Himabindu Lakkaraju, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani
    ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2015.
    Oral Presentation [Top 8.2%]
    PDF
  • A Bayesian Framework for Modeling Human Evaluations
    Himabindu Lakkaraju, Jure Leskovec, Jon Kleinberg, Sendhil Mullainathan
    SIAM International Conference on Data Mining (SDM), 2015.
    Oral Presentation [Top 5%]
    PDF
  • Who, When, and Why: A Machine Learning Approach to Prioritizing Students at Risk of not Graduating High School on Time
    Everaldo Aguiar, Himabindu Lakkaraju, Nasir Bhanpuri, David Miller, Ben Yuhas, Kecia Addison, Rayid Ghani
    Learning Analytics and Knowledge Conference (LAK), 2015.
    PDF
  • What's in a Name? Understanding the Interplay Between Titles, Content, and Communities in Social Media
    Himabindu Lakkaraju, Julian McAuley, Jure Leskovec
    International AAAI Conference on Weblogs and Social Media (ICWSM), 2013.
    Oral Presentation [Top 3%]
    Featured in TIME, Forbes, Phys.Org, Business Insider, New Scientist
    PDF
  • Dynamic Multi-Relational Chinese Restaurant Process for Analyzing Influences on Users in Social Media
    Himabindu Lakkaraju, Indrajit Bhattacharya, Chiranjib Bhattacharyya
    IEEE International Conference on Data Mining (ICDM), 2012.
    Oral Presentation [Top 8.6%]
    PDF
  • TEM: a novel perspective to modeling content on microblogs
    Himabindu Lakkaraju, Hyung-Il Ahn
    International World Wide Web Conference (WWW), 2012.
    PDF
  • Exploiting Coherence for the Simultaneous Discovery of Latent Facets and Associated Sentiments
    Himabindu Lakkaraju, Chiranjib Bhattacharyya, Indrajit Bhattacharya, Srujana Merugu
    SIAM International Conference on Data Mining (SDM), 2011.
    Best Paper Award
    PDF
  • Attention prediction on social media brand pages
    Himabindu Lakkaraju, Jitendra Ajmera
    ACM Conference on Information and Knowledge Management (CIKM), 2011.
    PDF
  • Smart news feeds for social networks using scalable joint latent factor models
    Himabindu Lakkaraju, Angshu Rai, Srujana Merugu
    International World Wide Web Conference (WWW), 2011.
    PDF
  • Extraction and grouping of feature words
    Chiranjib Bhattacharyya, Himabindu Lakkaraju, Kaushik Nath, Sunil Arvindam
    US8484228 B2
  • Enhancing knowledge bases using rich social media
    Jitendra Ajmera, Shantanu Ravindra Godbole, Himabindu Lakkaraju, Bernard Andrew Roden, Ashish Verma
    US10192458 B2

I am very fortunate to work with the following core group of students, interns, postdocs, and research affiliates:

  • Shichang Zhang (Postdoc, Harvard University)
  • Aounon Kumar (Postdoc, Harvard University)
  • Martin Pawelczyk (Postdoc, Harvard University); Co-advised with Seth Neel
  • Usha Bhalla (PhD Student, Harvard University)
  • Dan Ley (PhD Student, Harvard University)
  • Alex Oesterling (PhD Student, Harvard University); Co-advised with Flavio Calmon
  • Paul Hamilton (PhD Student, Harvard University)
  • Zidi Xiong (PhD Student, Harvard University)
  • Jenny Wang (PhD Student, Harvard University)
  • Aaron Li (Masters Student, Harvard University)
  • Zhenting Qi (Masters Student, Harvard University)
  • Yanchen Liu (Masters Student, Harvard University)

Alumni (Past Advisees, Close Collaborators, and Visitors):

  • Jiaqi Ma (Postdoc, Harvard University --> Assistant Professor, UIUC)
  • Chirag Agarwal (Postdoc, Harvard University --> Assistant Professor, University of Virginia)
  • Suraj Srinivas (Postdoc, Harvard University --> Research Scientist, Robert Bosch)
  • Dylan Slack (PhD Student, UC Irvine --> Research Scientist, Google DeepMind)
  • Satyapriya Krishna (PhD Student, Harvard University --> Research Scientist, Amazon)
  • Tessa Han (PhD Student, Harvard University --> Postdoc, Harvard Medical School)
  • Sree Harsha Tanneru (Masters Student, Harvard University --> Research Engineer, Google DeepMind)
  • Aditya Karan (Masters Student, Harvard University --> PhD Student, UIUC CS)
  • Kaivalya Rawal (Masters Student, Harvard University --> Research Fellow, Oxford University)
  • Alexis Ross (Undergraduate Student, Harvard University -- Winner of Hoopes Prize for Best Undergrad Thesis --> PhD Student, MIT EECS)
  • Isha Puri (Undergraduate Student, Harvard University --> PhD Student, MIT EECS)
  • Jessica Dai (Undergraduate Student, Brown University --> PhD Student, UC Berkeley EECS)
  • Eshika Saxena (Undergraduate Student, Harvard University --> AI Research Engineer, Meta)
  • Ethan Kim (Undergraduate Student, Harvard University --> Founding Engineer, VectorShift)
  • Catherine Huang (Undergraduate Student, Harvard University --> Quant Trader, IMC Trading)
  • Charu Badrinath (Undergraduate Student, Harvard University --> Engineer, Palantir Technologies)
  • Christina Xiao (Undergraduate Student, Harvard University --> Engineer, Bloomberg)

  • Sophie Hilgard (PhD Student, Harvard University --> Research Scientist, Twitter)
  • Sushant Agarwal (Masters Student, University of Waterloo --> PhD Student, Northeastern University)

  • Harvineet Singh (PhD Student, New York University; Research Intern, Harvard University --> Postdoc, UCSF/UC Berkeley)
  • Umang Bhatt (PhD Student, University of Cambridge; Research Intern, Harvard University --> Faculty Fellow, New York University)
  • Ruijiang Gao (PhD Student, University of Texas at Austin; Research Intern, Harvard University --> Assistant Professor, UT Dallas)
  • Elita Lobo (PhD Student, UMass Amherst; Research Intern, Harvard University)
  • Anna Meyer (PhD Student, University of Wisconsin; Research Intern, Harvard University)
  • Vishwali Mhasawade (PhD Student, New York University; Research Intern, Harvard University)
  • Nick Kroeger (PhD Student, University of Florida; Research Intern, Harvard University)
  • Chhavi Yadav (PhD Student, UC San Diego; Research Intern, Harvard University)
  • Tom Suhr (MS Student, TU Berlin; Research Fellow, Harvard University --> PhD Student, Max Planck Institute)
  • Davor Ljubenkov (Fulbright Scholar; Research Fellow, Harvard University)

  • Introduction to Data Science and Machine Learning
    Instructor
    Harvard University, Fall 2020 - 2023.

  • Explainable AI: From Simple Predictors to Complex Generative Models
    Instructor
    Harvard University, Fall 2019, Spring 2021, Spring 2023.

  • Introduction to Data Science
    Guest Lecture
    Stanford Law School, 2016.

  • Probability with Mathemagic
    Co-Instructor
    Stanford Splash Initiative for High School Students, 2016.

  • Mining Massive Datasets Course
    Teaching Assistant
    Stanford Computer Science, 2016.

  • Submodular Optimization
    Guest Lecture
    Mining Massive Datasets Course, Stanford, 2016.

  • Social and Information Network Analysis Course
    Head Teaching Assistant
    Stanford Computer Science, 2014.

  • Machine Learning Course
    Teaching Assistant
    Indian Institute of Science, 2010.

  • English and Mathematics
    Tutor
    UNICEF's Teach India Initiative, 2008 - 2010.