
Yichi Zhang
411 CoRE Building, Busch Campus
yz1636@dimacs.rutgers.edu
I'm a postdoctoral researcher at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) at Rutgers University, hosted by David Pennock and Lirong Xia. My research designs theoretically robust evaluation metrics to incentivize high-effort human feedback, evaluate data quality, and supervise AI.
Before my current position, I received my Ph.D. from the University of Michigan, Ann Arbor, where I was fortunate to be advised by Grant Schoenebeck. I earned my B.S. from Shanghai Jiao Tong University, China.
Recent News
- Oct 2025 I am attending the 2025 INFORMS Annual Meeting, October 26-29, in Atlanta, where I will present our paper "A System-level Analysis of Conference Peer Review," currently in revision at Operations Research.
- July 2025 Presented our paper “Evaluating LLM-Corrupted Crowdsourcing Data Without Verification” at the Workshop on Human–Algorithm Collaboration, co-located with EC ’25.
- July 2025 We organized the 2nd Annual Workshop on Incentives in Academia (WINA), co-located with EC ’25.
My Research
My earlier research centers on peer prediction, where the goal is to design scoring and reward mechanisms that elicit honest, high-effort information without access to ground truth. I develop evaluation metrics with provable guarantees that are efficient, adversary-robust, and interpretable, with an eye toward practical deployment for data quality and truth discovery.
A natural application is peer review. I study policies that combine reviewer signals with authors' own information to improve acceptance decisions while reducing review workload. I'm one of the main organizers of the annual Workshop on Incentives in Academia (WINA).
More recently, I examine how generative AI reshapes data collection. My work investigates how to detect and discourage LLM-generated or low-effort feedback, and how to safely use noisy or misaligned AI feedback in downstream decision-making.