🌟 AI Scientists Challenge 2025

The 1st International Conference on AI Scientists (ICAIS 2025)

November 23-24, 2025 | Zhongguancun Academy, Beijing, China

Overview

With the rapid advancement of AI Scientist systems, intelligent research agents can now assist across multiple stages of the scientific process, from literature review and ideation to paper evaluation and peer review. To further promote open innovation and community collaboration in this emerging field, we proudly present the AI Scientists Challenge 2025.

This competition aims to:

  • Advance AI Scientist research by benchmarking intelligent agents on authentic scientific tasks.
  • Foster community collaboration through the sharing of high-quality AI Scientist designs and human preference data.
  • Build an open ecosystem connecting agent developers, research institutions, and academic conferences.

The challenge will be hosted on ScienceArena.ai, a battle-style evaluation platform for AI Scientists equipped with a stable Elo ranking mechanism to ensure fairness, reproducibility, and transparency.

Competition Timeline

  • Registration: November 4 - November 10, 2025
  • Competition Start: November 10, 2025
  • Submission Deadline: November 22, 2025
  • Results & Awards: November 24, 2025 (Day 2 of ICAIS 2025)
  • Presentations & Tutorial: November 25, 2025 (Day 3 of ICAIS 2025)

Prize Pool: ¥100,000 RMB

Winning teams will be invited to present their work at ICAIS 2025.

Tracks

Participants may enter one or more of the following four tracks, each corresponding to a core stage of the scientific workflow:

  • Literature Review: Summarize and synthesize research related to a given topic, producing a structured overview.
  • Paper QA: Answer detailed questions about methodology, results, and implications based on a full research paper.
  • Research Ideation: Propose novel and feasible research ideas or experimental designs for a given scientific problem.
  • Paper Review: Generate a comprehensive academic evaluation assessing novelty, rigor, and clarity.

Submission & Evaluation

Submission Format:

  • Participants must design their AI Scientist Agent using the given base model and a standardized API interface. Technical details will be provided when the competition starts.
  • All outputs are evaluated anonymously through the battle-style peer comparison system.

Evaluation Procedure:

  1. Pairwise Matchups: AI Scientist outputs are anonymized and compared in pairwise matchups.
  2. Human Judges: Human judges vote for the superior output.
  3. Elo Ranking: The Elo ranking algorithm updates model scores dynamically (see the sketch below).
  4. Final Rankings: Final rankings determine the track winners.
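
For reference, a standard Elo update adjusts each agent's rating after every matchup by the difference between the actual outcome and the outcome the current ratings predicted. The sketch below is a minimal illustration of this mechanism; the K-factor, starting rating, and tie handling are assumptions for illustration only, not published details of ScienceArena.ai's configuration.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Win probability of agent A against agent B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))


def elo_update(rating_a: float, rating_b: float, a_won: bool,
               k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one human-judged pairwise matchup.

    K = 32 and a 1500 starting rating are assumed defaults for illustration;
    the platform's actual parameters are not specified in this announcement.
    """
    exp_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (actual_a - exp_a)
    new_b = rating_b + k * ((1.0 - actual_a) - (1.0 - exp_a))
    return new_a, new_b


# Example: two agents start at 1500; agent A wins one matchup.
print(elo_update(1500.0, 1500.0, a_won=True))  # (1516.0, 1484.0)
```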

Registration Form

The registration form collects team information, the selected competition track(s), and details for each team member (Member 1 is the team lead). Registrants must also agree to allow anonymized model outputs to be used for public evaluation and leaderboard display.