On Tuesday, a privacy and security report published by Gizmodo revealed that Google and the Pentagon are collaborating on software for military drones. Known as Project Maven, the Department of Defense pilot project involves analyzing, combing through, defining, and categorizing visual data amassed by aerial drones. It wouldn’t be far off to say the project would function as the Pentagon’s all-seeing eye.
According to Greg Allen, an adjunct fellow at the Center for a New American Security, the amount of footage already collected is so vast that human analysts at the defense agency cannot sift through it all and correctly identify the objects it contains. As it stands, the United States’ drone strike program is already criticized by human rights groups like Reprieve for reportedly killing hundreds of civilians in Pakistan, Afghanistan, Yemen, and beyond, despite claims of “surgical” precision from former CIA director John Brennan in 2011. With the help of Google’s artificial intelligence resources, the Defense Department will apparently be able to correctly process the footage its drones obtain: think vehicles, buildings, and human beings.
Project Maven was initiated in April of last year and is also known by the more technical title Algorithmic Warfare Cross-Functional Team (AWCFT). According to Air Force Lieutenant General Jack Shanahan, the project aims to be the “spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department.”
Through Project Maven, the Pentagon can follow the movement of people in the crosshairs of its aerial drones, and it is apparently gearing up to attack ISIS enclaves in the Middle East. The purpose of allocating Google resources to a security apparatus like the Pentagon’s is apparently to optimize and streamline the agency’s processing of drone footage; in other words, to minimize the possibility of error.
But there’s no guarantee machine learning will always correctly identify the objects it is tasked with seeing. Mathematician Cathy O’Neil’s book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” along with ProPublica’s investigative reports on algorithms used by American police departments to supposedly “predict” recidivism, makes it distressingly clear that relying on machine learning for security purposes can do vulnerable people more harm than good.
Technically, the collaboration is reportedly based on APIs (application programming interfaces). A Google spokesperson told Gizmodo that the company would be providing TensorFlow APIs for the Department of Defense’s Project Maven. TensorFlow is Google’s open-source machine learning framework, and its APIs receive and process input such as image data so that trained models can recognize what that data contains. The spokesperson noted that Google was developing “policies and safeguards” around possible military uses.
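To make the object-recognition task concrete, here is a deliberately toy sketch of a classifier that assigns labels such as “vehicle,” “building,” or “person” to feature vectors. Everything in it — the centroid values, the `classify` function, the label set — is invented for illustration; it does not reflect the actual TensorFlow APIs or anything used in Project Maven.

```python
# Toy nearest-centroid classifier: assigns an object label to a feature
# vector by finding the closest class "centroid". All values are made up
# for illustration; a real system would use a trained neural network.
from math import dist

# Hypothetical centroid feature vectors for each object class.
CLASS_CENTROIDS = {
    "vehicle":  (0.9, 0.1, 0.2),
    "building": (0.1, 0.9, 0.3),
    "person":   (0.2, 0.2, 0.9),
}

def classify(features):
    """Return the label whose centroid is nearest to the feature vector."""
    return min(CLASS_CENTROIDS,
               key=lambda label: dist(features, CLASS_CENTROIDS[label]))

# Two hypothetical per-frame feature vectors extracted from drone footage.
frame_features = [(0.85, 0.15, 0.25), (0.15, 0.25, 0.95)]
labels = [classify(f) for f in frame_features]
print(labels)  # ['vehicle', 'person']
```

The point of the sketch is only the overall shape of the pipeline: raw footage is reduced to numeric features, and a model maps each feature vector to a label, at a scale no team of human analysts could match.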
In the statement, the Google spokesperson noted, “We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data.”