The Harvard admissions office sits behind tall windows on a gray winter morning in Cambridge, and it is almost purposefully quiet. Outside, students rush by with scarves pulled tight against the wind; inside are rooms most applicants will never see, where decisions that will shape their futures are made. Those decisions, and the instruments used to make them, are now under a new kind of scrutiny.
The U.S. Department of Justice is suing Harvard for allegedly withholding admissions records needed to investigate whether the university’s selection process—which may have involved algorithmic tools—violates the Supreme Court’s 2023 ban on affirmative action. The legal action, which at first seems technical, centers on compliance and document-production deadlines. But there is more to it than that.
Key Information Table
| Category | Details |
|---|---|
| Institution | Harvard University |
| Legal Action | U.S. Department of Justice lawsuit filed in federal court (2026) |
| Core Issue | Alleged withholding of admissions data, including possible AI-driven evaluation tools |
| Context | Follows 2023 Supreme Court ban on affirmative action |
| Government Claim | Harvard “slow-walked” and refused to provide applicant-level data |
| Harvard’s Position | Says it complied with law and acted in good faith |
| Legal Objective | Force Harvard to release admissions records |
Beneath the procedural fight lies a question about the covert outsourcing of judgment. The main focus of the government’s complaint is Harvard’s purported refusal to disclose comprehensive applicant-level records, including details about internal communications, evaluation standards, and race. According to officials, the university “slow-walked” requests for over ten months and missed numerous deadlines. Harvard contests that, stating that its cooperation was sincere. Both sides may well believe their own version of events.
The argument is particularly troubling because it raises the possibility that artificial intelligence, or at the very least predictive modeling, could be used to assess applicants. Harvard has not acknowledged using AI in admissions decisions. But in an era when banks use algorithms to decide who gets loans and employers use them to decide who gets interviews, the possibility is hardly implausible.
Rarely does technology wait for authorization. Last spring, I was strolling through Harvard Yard and saw potential students posing for pictures under the iron gates of the university while their parents brushed leaves off their shoulders and adjusted collars. Everyone appeared optimistic. Everyone thought their success was the result of talent and hard work.
One can’t help but wonder how much of that belief is still true. The admissions process at Harvard has always been complicated, with layers of recommendations, personal essays, holistic reviews, and institutional priorities. The university maintains that those factors continue to influence choices today. However, algorithms are very good at finding patterns, and it’s easy for pattern recognition to lead to preference reinforcement.
Once encoded, bias is not self-evident. The Department of Justice seems especially interested in whether admissions standards, human or algorithmic, may have disadvantaged Asian American and white applicants. The inquiry is part of a larger federal effort to implement the Supreme Court’s decision, which almost immediately changed admissions practices at prestigious universities.
Adaptation may have been harder behind the scenes. Admissions officers now work in a peculiar hybrid environment, juggling institutional objectives, legal restrictions, and growing volumes of data. AI promises efficiency, predicting academic success or identifying trends in applicant pools. Efficiency, however, is not neutral. It reflects whatever priorities were embedded in its design.

It seems that the objectivity that algorithms once promised has become more elusive. Harvard, on the other hand, contends that the lawsuit is more about pressure than fairness. The university claims in public statements that it has updated its procedures in response to the Supreme Court ruling and complied with civil rights laws. Officials maintain that admissions choices are still made on an individual basis.
Compliance, however, is difficult to gauge from the outside. Watching this play out, it seems Harvard is not merely defending its admissions procedure. It is protecting its mystique.
Selective transparency has long been a hallmark of elite universities. Enough openness to preserve credibility. Enough secrecy to maintain authority. The advent of AI makes that balance harder to strike. In contrast to human committees, algorithms leave behind data points, correlations, and weightings: evidence, should someone demand to see it. Even the students appear caught in the middle. Some worry that algorithms may strip the context from their lives and reduce them to statistical profiles. Others hope that, rather than amplifying human bias, algorithms might actually eliminate it.
The lawsuit’s outcome may turn more on compelling disclosure than on proving discrimination. For now, the government is not pursuing financial penalties. Information is what it seeks. And information has the power to destabilize.
Because once admissions decisions can be explained in technical terms, they risk losing their air of human discernment. They turn into procedures. Systems. Computations. And computations invite examination.