Use of AI tools in IGoR proposal reviews
ARPA-H’s Intelligent Generator of Research (IGoR) program anticipates significant interest, reflecting the urgency to restore trust in science and accelerate biomedical breakthroughs. To meet this need, IGoR will pilot the use of secure large language model (LLM) tools to assist with the initial review of submitted materials. This approach is consistent with the program’s vision: enabling AI-powered tools to generate meaningful insights for consideration by humans with key expertise and judgment.
How AI will be used
- LLM tools will help organize, summarize, and surface key information from solution summaries.
- Human experts will remain responsible for all evaluation and decisions.
- Reviewers will include specialists in contracting, regulation, engineering, product design, informatics, and biomedical research.
AI will not replace human judgment. Instead, these tools will help reviewers work more efficiently and consistently, while maintaining the rigorous standards expected of ARPA-H programs.
Why this matters
Beyond advancing an AI-enabled biomedical research ecosystem, IGoR is helping ARPA-H and the broader U.S. government understand how to use AI responsibly at scale. HHS has been an early leader in adopting generative AI tools, and the agency is building on lessons from across the Department to ensure safety, transparency, and trust.
For questions, visit IGoR’s FAQ page, where you can submit a new question or view responses to prior ones. The most important document remains the ISO Solicitation (ARPA-H-SOL-26-155), which includes a similar disclosure of the use of AI in submission reviews.