Research Project

Resilience: Graph Representation Learning for Fair Teaming in Crisis Response.

Updated: June 2024

Introduction

The recent COVID-19 pandemic has revealed the fragility of humankind. In our highly connected world, an infectious disease can swiftly grow into a worldwide epidemic. A plague can rewrite history, and science can limit the damage. The significance of teamwork in science has been extensively studied in the science-of-science literature, which uses transdisciplinary approaches to analyze the mechanisms underlying broad scientific activity. How can scientific communities rapidly form teams that best respond to pandemic crises? Artificial intelligence (AI) models have been proposed to recommend scientific collaborations, especially among researchers with complementary knowledge or skills. However, issues related to fairness in teaming, especially how to balance group fairness and individual fairness, remain challenging. Developing fair AI models for recommending teams is therefore critical for an equitable and inclusive working environment, and such models could prove pivotal in the next pandemic crisis. This project will develop a decision support system to strengthen the US-Australia public health response to infectious disease outbreaks. The system will help rapidly form global scientific teams with fair teaming solutions for infectious disease control, diagnosis, and treatment. The project will include the participation of underrepresented groups (Indigenous Australians and Hispanic Americans) and will provide fair teaming solutions for a broad range of working and recruiting scenarios.

Sponsors
U.S. National Science Foundation (NSF)
Project Goal

This project aims to understand how scientific communities have responded to historical pandemic crises and how best to respond in the future, in order to provide fair teaming solutions for new infectious disease crises. The project will develop a set of graph representation learning methods for fair teaming recommendation in crisis response through:

1) biomedical knowledge graph construction and learning, with novel models for emerging bio-entity extraction, relationship discovery, and fair graph representation learning over sensitive demographic attributes;

2) the recognition of fairness and the determinants of team success, with a subgraph contrastive learning-based prediction model for identifying core team units and weighing trade-offs between fairness and team performance; and

3) learning to recommend fairly, with a graph-based maximum mean discrepancy (MMD) measurement, a meta-learning method for fair graph representation learning, and a reinforcement learning-based search method for fair teaming recommendation (a minimal MMD sketch follows this list).

The project will also support cross-disciplinary curriculum development by bridging gaps in responsible AI and team science, fair project management, and risk management in science.
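To make the MMD-based fairness measurement named in item 3 concrete, the sketch below estimates the maximum mean discrepancy between the learned embeddings of two demographic groups with an RBF kernel; a small MMD indicates the groups are represented similarly. This is only an illustrative assumption of how such a measurement could be instantiated, not the project's actual model; the variable names, kernel choice, and random toy data are hypothetical.

```python
# Minimal sketch (assumptions noted above): RBF-kernel MMD^2 between the
# embeddings of two demographic groups, one possible way to measure how
# differently a graph encoder represents the two groups.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # Pairwise RBF kernel matrix: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_squared(group_x: np.ndarray, group_y: np.ndarray, sigma: float = 1.0) -> float:
    # Biased (V-statistic) estimate of MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = rbf_kernel(group_x, group_x, sigma).mean()
    kyy = rbf_kernel(group_y, group_y, sigma).mean()
    kxy = rbf_kernel(group_x, group_y, sigma).mean()
    return float(kxx + kyy - 2.0 * kxy)

# Toy usage: node embeddings (e.g., from a graph encoder) split by a sensitive attribute.
rng = np.random.default_rng(0)
emb_group_a = rng.normal(size=(100, 16))  # hypothetical embeddings for group A
emb_group_b = rng.normal(size=(100, 16))  # hypothetical embeddings for group B
print(f"MMD^2 between group embeddings: {mmd_squared(emb_group_a, emb_group_b):.4f}")
```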

Project Outcome

This is a joint project between researchers from the United States and Australia, funded under the Collaboration Opportunities in Responsible and Equitable AI program by the U.S. National Science Foundation (NSF) and the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO).