Yasaman Etesam

Research Assistantship Opportunities in Rosie Lab

June 23, 2025

We have a few opportunities available for a summer volunteer research assistantship at Rosie Lab. If you are interested in any of the following topics and meet the requirements, please consider applying by sending me an email.

  • Ranked contextual emotion recognition in large language models

    Description: Ranked emotion prediction offers a more nuanced alternative to traditional multi-label emotion classification: instead of a flat set of labels, models are asked to produce a prioritized list of emotions. Using datasets such as EMOTIC [4], rankings can be derived either from annotator consensus or from individual annotators' preferences. These ranked lists are then used to prompt large language models [1], and nDCG [5] serves as the evaluation metric for assessing alignment with human judgment (see the evaluation sketch at the end of this item).
    Time commitment: 3-6 hours per week of volunteer work, counted towards the co-curricular record
    Requirements: Experience coding in Python; familiarity with machine learning and natural language processing is recommended
    Responsibilities: Literature review on the problem, coding and running experiments, and writing a paper
    What you learn: You will learn about contextual emotion recognition, LLMs, and different evaluation metrics in NLP, improve your coding skills, and learn how to write an academic paper
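    Example sketch: The snippet below is not lab code; it is a minimal sketch of one assumed way a model's ranked emotion list could be scored against a human ranking with nDCG. The emotion names and the relevance scheme (the top-ranked human emotion gets the highest relevance) are illustrative assumptions.

        import math

        def ndcg(predicted, reference):
            """nDCG of a predicted emotion ranking against a human reference ranking."""
            # Assumed relevance: the top emotion in the human list gets the largest score.
            rel = {e: len(reference) - i for i, e in enumerate(reference)}
            # Discounted cumulative gain of the model's ordering (log2 position discount).
            dcg = sum(rel.get(e, 0) / math.log2(i + 2) for i, e in enumerate(predicted))
            # Ideal DCG: the same relevance scores in the best possible order.
            idcg = sum(r / math.log2(i + 2) for i, r in enumerate(sorted(rel.values(), reverse=True)))
            return dcg / idcg if idcg else 0.0

        human = ["sadness", "anticipation", "fatigue"]   # hypothetical annotator ranking
        model = ["sadness", "fatigue", "anticipation"]   # hypothetical LLM output
        print(f"nDCG = {ndcg(model, human):.3f}")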

  • Scene description prediction from emotional and environmental conditions

    Description: As humans, given a set of conditions, we can describe a scene that would give rise to those conditions. For example, a writer who wants to convey that a person is a sad doctor might describe a scene in which the doctor is about to deliver bad news to a patient. In this project, you will generate scene descriptions from structured conditions such as emotion labels, the number of people, and environmental cues (see the prompting sketch at the end of this item). This reverse formulation lets us explore how well large language models [1] can translate abstract emotional and contextual inputs into coherent, human-like narratives. Using our human-generated dataset [3], you can evaluate both zero-shot and fine-tuned performance and assess quality through human evaluations of the generated descriptions. This approach opens up new avenues for controllable scene generation and emotionally grounded AI systems.
    Time commitment: 3-6 hours per week of volunteer work, counted towards the co-curricular record
    Requirements: Experience coding in Python; familiarity with machine learning and cognitive science is recommended
    Responsibilities: Literature review on the problem, coding and running experiments, and writing a paper
    What you learn: You will learn about contextual emotion recognition, LLMs, and LLM fine-tuning, improve your coding skills, and learn how to write an academic paper
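    Example sketch: The snippet below is not lab code; it is a minimal sketch of one assumed way structured conditions could be turned into a zero-shot prompt. The field names, values, and prompt wording are illustrative assumptions, and the resulting string would be sent to whichever LLM is being evaluated.

        # Hypothetical structured conditions for one example.
        conditions = {
            "emotion": "sadness",            # target emotion the scene should evoke
            "num_people": 2,                 # number of people in the scene
            "environment": "hospital ward",  # environmental cue
        }

        # Build a zero-shot prompt from the structured conditions.
        prompt = (
            "Write a short, realistic scene description (2-3 sentences) that would lead "
            f"a reader to infer the emotion '{conditions['emotion']}'. The scene involves "
            f"{conditions['num_people']} people and takes place in a {conditions['environment']}. "
            "Do not name the emotion explicitly."
        )

        print(prompt)  # this prompt would then be passed to the LLM under study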

  • Bias in emotion estimation in large language models

    Description: As large language models continue to grow and permeate various aspects of our lives, we want to ensure that these models are fair. This project explores fairness in large language models (LLMs) [1] by examining how gender references influence their predictions. Using our existing datasets [2, 3], you will systematically modify captions to reflect different genders (male, female, non-binary) and analyze the resulting shifts in LLM outputs (see the sketch at the end of this item). As an additional step, you can fine-tune these models on the gender-varied captions paired with ground-truth emotion labels to mitigate bias, ultimately making emotion recognition systems more equitable and inclusive.
    Time commitment: 3-6 hours per week of volunteer work, counted towards the co-curricular record
    Requirements: Experience coding in Python; familiarity with machine learning is recommended
    Responsibilities: Literature review on the problem, coding and running experiments, and writing a paper
    What you learn: You will learn about fairness and LLMs, improve your coding skills, and learn how to write an academic paper
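    Example sketch: The snippet below is not lab code; it is a minimal sketch of one assumed way gender-varied caption versions could be generated before querying the model. The caption template, term mapping, and the commented-out query_llm call are illustrative assumptions.

        # Hypothetical mapping from gender variant to the terms substituted into a caption.
        GENDER_TERMS = {
            "male": {"person": "man", "pronoun": "he"},
            "female": {"person": "woman", "pronoun": "she"},
            "non-binary": {"person": "person", "pronoun": "they"},
        }

        caption_template = ("A {person} sits alone at a bus stop in the rain; "
                            "{pronoun} stares at the ground.")

        def gendered_captions(template):
            """Produce one caption per gender variant from a shared template."""
            return {g: template.format(**terms) for g, terms in GENDER_TERMS.items()}

        for gender, caption in gendered_captions(caption_template).items():
            # predictions = query_llm(caption)  # hypothetical call to the LLM being audited
            print(gender, "->", caption)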

  • References

    1. Brown, Tom, et al. "Language models are few-shot learners." NeurIPS 2020.
    2. Etesam, Yasaman, et al. "Contextual emotion recognition using large vision language models." IROS 2024.
    3. Yang, Vera, et al. "Contextual emotion estimation from image captions." ACII 2023.
    4. Kosti, Ronak, et al. "Context based emotion recognition using EMOTIC dataset." PAMI 2019.
    5. Järvelin, Kalervo, and Kekäläinen, Jaana. "Cumulated gain-based evaluation of IR techniques." TOIS 2002.