The following is a guest post and opinion from Ahmad Shadid, Founder of O.xyz.
Under the pretense of enhancing efficiency, the Department of Government Efficiency (DOGE) is slashing the federal workforce. An independent report indicates that DOGE drove roughly 222,000 job cuts in March alone. These cuts are hitting sectors where the U.S. can least afford setbacks, particularly artificial intelligence and semiconductor research.
The more pressing issue isn’t the job cuts themselves; it’s that DOGE is using artificial intelligence to scrutinize federal employees’ communications for any hint of disloyalty. There is already evidence of this happening at the EPA.
DOGE’s reliance on AI to downsize federal agencies resembles a rogue Silicon Valley playbook: seize the data, automate the processes, and hastily roll out underdeveloped tools, like the GSA’s “intern-level” chatbot, to rationalize layoffs. It’s reckless.
Moreover, according to one report, DOGE is using Musk’s Grok AI to keep tabs on Environmental Protection Agency personnel as part of its push for sweeping cuts to government roles.
Federal employees, long accustomed to email transparency under public-records laws, now face advanced tools that analyze their every communication.
How can federal workers have confidence in a system where AI surveillance correlates with mass layoffs? Is the United States drifting toward a surveillance-filled dystopia, with artificial intelligence heightening the stakes?
AI-Driven Surveillance
Can we trust an AI model trained on government data? Integrating AI into a complex bureaucracy also invites familiar problems, such as bias, a concern that the GSA’s help page acknowledges without backing it with robust enforcement.
The consolidation of information within AI models is a growing privacy risk. Moreover, Musk and DOGE appear to be infringing on the Privacy Act of 1974, passed in the wake of the Watergate scandal to prevent the misuse of government-held information.
The Privacy Act stipulates that no one, including special government employees, may access “systems of records” without appropriate legal authorization. Yet DOGE seems to be violating this act under the guise of efficiency. Is the pursuit of governmental efficiency worth endangering the privacy of Americans?
Surveillance today is about more than cameras or keywords. It extends to who processes the data, who owns the models, and who decides which criteria matter. Without robust public governance, this trajectory risks letting corporate-controlled systems dictate governmental operations. That is a dangerous precedent. Trust in AI will erode if people perceive that decisions are made by opaque systems beyond democratic accountability. The federal government should set the standards, not delegate them.
What’s at stake?
The National Science Foundation (NSF) recently cut more than 150 employees, and internal documents indicate even deeper cuts are on the horizon. The NSF funds essential AI and semiconductor research at universities and public institutions, supporting everything from foundational machine-learning models to innovations in chip architecture. The White House is also proposing to cut the NSF’s budget by two-thirds, which would gut the research base on which American competitiveness in AI depends.
The National Institute of Standards and Technology (NIST) faces similar threats, with nearly 500 employees earmarked for cuts, including most of the teams responsible for the CHIPS Act’s incentive programs and research strategy. NIST also oversees the U.S. AI Safety Institute and developed the AI Risk Management Framework.
Is DOGE Compromising Confidential Public Data for the Private Sector?
DOGE’s role also raises significant confidentiality concerns. The department has quietly acquired broad access to federal records and data sets, and reports indicate that AI tools are scouring this information to pinpoint processes for automation. In effect, the administration is letting private entities handle sensitive information about government activities, public services, and regulatory work.
That amplification of risk is alarming. AI systems trained on sensitive information demand stringent oversight, not merely a focus on efficiency. This shift moves public data into private hands without defined policy protections, opening the door for biased or erroneous systems to make consequential decisions. Algorithms cannot substitute for accountability.
There is no transparency about what data DOGE uses, which models it deploys, or how agencies validate the outputs. Federal employees are being dismissed on the basis of AI recommendations, yet the logic, weightings, and assumptions behind those models remain undisclosed. That is a governance failure.
What comes next?
Surveillance does not equal efficient government; without regulations, oversight, or even basic transparency, it merely fosters fear. When artificial intelligence is used to monitor loyalty or flag terms like “diversity,” we are not streamlining government; we are eroding trust in it.
Federal workers should not have to wonder whether they are being monitored simply for doing their jobs or for speaking freely in meetings. The moment underscores the urgent need for better, more trustworthy AI models built for the specific challenges and standards of public service.