AI systems already decide who gets a job interview, who qualifies for a loan, who gets flagged by police, and who gets cut off from government benefits. UCC Media Justice wrote about this in December when endorsing the AI Civil Rights Act. Without safeguards, this technology is not accountable to the people it is supposed to serve, placing profit and power over human rights. 

Since December, the federal government’s approach to AI has been in the news for the wrong reasons. The Pentagon tried to bully an AI company into dropping its safety limits and got blocked by a federal judge. But on May 1, OpenAI, Google, Nvidia, and four other companies signed on to exactly those terms, agreeing to let the Pentagon use their AI for “any lawful purpose” with no restrictions. One company said no. Seven said yes. Civil rights protections cannot depend on whether an AI company feels like standing its ground.

This spring, we joined civil rights partners in filing comments and letters on three federal actions that will shape how AI affects real people in real communities.

The government’s AI shopping list needs a rewrite
The fight between the Pentagon and Anthropic started over four words: “all lawful government purposes.” The Department of Defense demanded that Anthropic allow its AI to operate with no limits on mass surveillance or autonomous weapons. Anthropic refused. The Pentagon labeled the company a supply chain risk, a designation normally reserved for foreign adversaries. A federal judge called the move likely illegal First Amendment retaliation.

Now the General Services Administration wants to write the same language into procurement rules that would apply across the entire federal government, not just the military.

Some parts of GSA’s draft clause are sound: requirements for human oversight, intervention when systems go wrong, and formal channels for reporting problems. But the draft would also give every federal agency an unlimited license to use AI for “any lawful government purpose” while barring vendors from maintaining their own safety guardrails. That is the same demand that blew up the Pentagon’s contract, except GSA wants to make it standard for every federal AI purchase.

The draft also incorporates the Trump administration’s “Preventing Woke AI” executive order, requiring AI tools sold to the government to be free of “ideological dogmas such as Diversity, Equity, and Inclusion.” Compliance with civil rights law, including the Fair Housing Act, Title VI, and Title VII, sometimes requires AI systems to account for race, gender, and disability. Labeling that compliance work as “ideology” does not make discrimination disappear. It makes discrimination harder to detect.

UCC Media Justice filed comments alongside the National Fair Housing Alliance, the Leadership Conference on Civil and Human Rights, the ACLU, and Color of Change. The coalition urged GSA to list prohibited AI uses explicitly, preserve vendors’ ability to maintain civil rights safeguards, replace the “woke AI” provision with a civil rights compliance standard, and require public notice whenever an agency deploys AI for surveillance.

The coalition letter also flagged a risk that goes beyond government: when federal procurement rules normalize unlimited AI use, private employers, landlords, lenders, and insurers follow suit. What starts as a contract term ends up as standard practice in your community.

AI evaluations need to test for discrimination, not just performance
UCC Media Justice also joined comments on the National Institute of Standards and Technology's (NIST) draft guidance for evaluating large language models, the underlying technology behind many AI tools. What gets measured in these standards determines what gets built, and NIST's draft left out tests for civil rights harms.

An AI system could pass every benchmark NIST laid out and still discriminate against people of color in housing recommendations or benefit determinations. This is a problem because we know, through testing conducted by the National Fair Housing Alliance, that several models produced patterns of racial bias and steering that the proposed federal testing framework would miss.

The coalition, which included NFHA, EPIC, the Center for Democracy and Technology, and Common Cause, urged NIST to require disparate impact testing, build specific benchmarks for housing, lending, and criminal justice, and mandate post-deployment monitoring.

Congress needs to protect workers from AI, not just talk about it
A letter to Congress organized by the Economic Policy Institute, the AFL-CIO Tech Institute, We Build Progress, and workers’ rights advocate Workshop brought together more than 40 organizations, including UCC Media Justice.

The message: AI adoption in the workplace is accelerating, and Congress has done almost nothing about it. Employers use AI to screen job applicants, monitor workers, and make firing decisions. Workers often do not know whether an algorithm played a role. Nearly two years after the Bipartisan Senate AI Working Group released its policy roadmap, the Senate has not taken up comprehensive legislation. The letter calls on Congress to establish a federal floor of worker protections, not a ceiling, particularly because any protections adopted now are likely to go stale before the ink dries and states will need room to strengthen them.

Meanwhile, the Trump administration is going in the opposite direction: issuing an executive order with the goal of blocking state regulation of AI, while offering no federal protections in return. UCC Media Justice is working with allies to stop this on a range of issues, including child protection and consumer rights.

Why this is a faith issue
UCC Media Justice has been filing with federal agencies and in federal courts since 1959. The technology changes. The fight for fairness does not.

When AI systems quietly label some people as too risky or less deserving because of the community they come from, that is a spiritual problem as well as a legal one. The United Church of Christ believes every person bears the image of God and deserves fair treatment.

UCC Media Justice will keep pressing for AI fairness in procurement, in evaluation standards, and in legislation, alongside civil rights partners. 

You can view all of the letters below:

Letter to Congress: AI and Workers (April 28, 2026)

Coalition Comment on Proposed GSAR 552.239-7001, MAS Refresh 31 AI Clause (April 3, 2026)

Civil Coalition Letter on NIST Draft Guidance on Automated Benchmark Evaluations of Language Models (March 31, 2026)
