Currently Hiring

The RIET lab is currently hiring for the following roles:

  1. PhD Student with fully funded Assistantship, with a focus on:
    1. AI Safety, Generative AI, and Agentic AI Systems
    2. AI Ethics, Bias Mitigation, and Fairness
    3. Misinformation and Social Media, especially utilizing techniques from Information Retrieval and Knowledge Graphs 

Details on the position and each focus area are below.

About Us:

The Reducing Information Ecosystem Threats (RIET) Lab researches some of the most vexing threats facing democratic nations, including disinformation, weaponized controversy, unfairness, bias, and inequities, all from a computational perspective. Our research focuses on the sociotechnical AI alignment problem and threats to the online information ecosystem. We prioritize problems disproportionately affecting minoritized communities, including but not limited to racial, ethnic, gender, and ability minorities. We use state-of-the-art AI techniques, collecting and analyzing petabyte-scale datasets from social networks and other sources. The RIET Lab prides itself on fostering transdisciplinary collaborations with experts spanning medicine, public health, the social sciences, and the humanities.

1. PhD Student with fully funded Assistantship

We are seeking proactive, curious prospective PhD students who get things done to join the Reducing Information Ecosystem Threats (RIET) Lab. We are particularly interested in candidates with a strong background or keen interest in one or more of our three focus areas (see below for more details on each area).

Applicants from underrepresented minorities and diverse backgrounds in computer science (with diversity broadly defined) are particularly invited to apply. Full consideration will be given to all applicants who apply according to the instructions below.

PhD students will be fully funded throughout their studies.

Before you apply: review and familiarize yourself with the UConn School of Computing PhD admissions requirements, and ensure that you meet them. You are also invited to check this list of frequently asked questions about our lab’s hiring process, which may answer many of your questions.

To apply, please:

  1. Submit your application through the UConn system.
    • Applications will be considered on a rolling basis.
    • To be considered for the nearest admission cycle, you must submit your application through the UConn system no later than the official admissions deadline (see this link), and preferably much earlier.
    • In your application, select “Shiri Dori-Hacohen” when asked which faculty members you are interested in working with.
  2. Email both shiridh AT uconn.edu and avijit.g AT uconn.edu with the subject line “PhD application: Focus area XX: your-full-name” (where XX is 1, 2, or 3). In your email, please:
    • Attach your resume and personal statement.
    • Include the names and contact information of your three references. Do not send us any recommendation letters!
    • Include citations for any publications you have, along with links to or the full text of the publication(s).
    • Confirm that you have reviewed the UConn School of Computing admissions requirements, and that you have submitted your application through the UConn system.
    • Briefly describe your interest and any experience in your chosen focus area.

Applications missing either of these two components will not be considered. Attaching recommendation letters to the email will result in immediate disqualification.

Focus Area 1: AI Safety, Generative AI, and Agentic AI Systems

We are expanding our existing work on the sociotechnical AI safety and alignment problem. We seek one or more PhD students interested in improving critical aspects of AI development and deployment. For this focus area, we are particularly interested in candidates with a strong background or keen interest in AI safety, generative AI, and agentic AI systems:

  • AI Safety: Investigating methods to ensure AI systems are safe, reliable, and aligned with complex and diverse human values.
  • Generative AI: Exploring and evaluating the capabilities, implications, and risks of large language models and other generative AI technologies.
  • Agentic AI Systems: Studying the development, risks, and impacts of increasingly autonomous agentic AI systems, capable of complex decision-making and task completion.

Focus Area 2: AI Ethics, Bias Mitigation, and Fairness

We are expanding our existing work on the sociotechnical AI safety and alignment problem. We seek one or more PhD students interested in advancing AI ethics, bias mitigation, and fairness. For this focus area, we are particularly interested in candidates with a strong background or keen interest in AI Ethics, reducing bias and inequities, and fairness:

  • AI Ethics: Investigating the ethical implications of AI systems and developing frameworks for responsible AI development and deployment.
  • Bias Mitigation: Identifying, measuring, and mitigating biases in AI systems and/or using AI systems, particularly in machine learning models and datasets.
  • Fairness: Developing and implementing fairness metrics and algorithms to ensure equitable outcomes across diverse populations, and using AI to bring about fairer outcomes.

Focus Area 3: Misinformation and Social Media, especially utilizing techniques from Information Retrieval and Knowledge Graphs 

We are expanding our existing work on information ecosystem threats. We seek one or more PhD students interested in mitigating misinformation and algorithmic manipulation on social media. For this focus area, we are particularly interested in candidates with a strong background or keen interest in misinformation, social media analysis, and computational social science:

  • Misinformation Detection and Mitigation: Developing advanced techniques to identify, track, and combat the spread of misinformation across digital platforms.
  • Information Retrieval in Complex Ecosystems: Designing and implementing novel IR systems capable of handling the complexities of modern information landscapes.
  • Knowledge Graphs for Information Verification: Utilizing and enhancing knowledge graph technologies to fact-check and contextualize information.

  • Social Media Analysis and Intervention: Studying the dynamics of information propagation on social media and developing strategies to promote healthy information ecosystems.