Opinion: Don’t cloak artificial intelligence in national security secrecy

Many of the problems posed by artificial intelligence are rooted in secrecy around how it works and the data on which it feeds, so letting the national security community take the lead on AI will only make things worse.

President Joe Biden’s White House recently released a memorandum on “advancing the United States’ leadership in artificial intelligence” that included, among other things, a directive for the national security apparatus to become a world leader in the use of AI.

Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, by leveraging already-functioning private AI models for national security objectives.

Private AI systems operated by tech companies are already incredibly opaque, to our detriment. People are uncomfortable, and rightly so, with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance and housing.

For-profit firms are leasing their automated decision-making services to all manner of companies and employers, and most of us affected will never know that a computer made a choice about us, much less understand how that choice was made or be able to appeal it.

But it can get worse: Combining private AI with national security secrecy threatens to make an already secretive system even more unaccountable and opaque.  

The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. The Electronic Frontier Foundation and other civil liberties organizations have had to fight in court time and again to expose even the most basic frameworks of global dragnet surveillance and the rules that govern it.

Giving this apparatus dominion over AI will create a Frankenstein’s monster of secrecy, unaccountability and decision-making power. While the Executive Branch pushes agencies to leverage private AI expertise, more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy.  

It’s like the old computer science axiom of “garbage in, garbage out” — without transparency, data that contains our society’s systemic biases will train AI to propagate and amplify those biases. With secret training data and black-box algorithms that the public can’t analyze, the bias becomes “tech-washed” and oppressive decisions are hidden behind the supposed objectivity of code. 
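
To see the mechanism concretely, consider a minimal sketch, written in Python with entirely synthetic data and a deliberately crude model (nothing here reflects any real system), of how a decision tool trained on biased historical outcomes learns that bias and then hardens it into a rule:

    # Illustrative only: a toy "model" trained on biased historical
    # decisions learns the disparity and then enforces it as policy.
    # All groups, rates and records below are synthetic assumptions.
    import random

    random.seed(0)

    # Synthetic history: group B was approved far less often, for
    # reasons unrelated to merit. That is the bias baked into the data.
    history = [("A", random.random() < 0.7) for _ in range(1000)] + \
              [("B", random.random() < 0.4) for _ in range(1000)]

    def train(records):
        """'Training' here just learns each group's historical approval rate."""
        rates = {}
        for group in {g for g, _ in records}:
            outcomes = [approved for g, approved in records if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    model = train(history)

    # "Inference": approve whenever the learned rate clears a threshold.
    # A 40% historical approval rate becomes a 0% future one, so the
    # old disparity now looks like an objective, automated rule.
    for group, rate in sorted(model.items()):
        decision = "approve" if rate >= 0.5 else "deny"
        print(f"group {group}: historical rate {rate:.2f} -> {decision}")

Nothing in that output announces a bias; it simply looks like code doing its job. Now imagine the training data and the thresholds are classified.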

AI operates by collecting and processing tremendous amounts of data, so what information a model retains and how it arrives at its conclusions will become central to how the national security state thinks about these tools. That means the state will likely argue not only that a model’s training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret as well.

As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.”  

The national security apparatus defaults to keeping the public in the dark; the default for AI should be crystal-clear transparency and accountability in training data and algorithmic decision-making. Those two postures cannot be reconciled, and moving AI’s rapidly expanding impact on our society into the shadowy realm of national security could spell disaster for decades to come.

Matthew Guariglia is a senior policy analyst at the Electronic Frontier Foundation, a digital civil rights organization headquartered in San Francisco. 
