Google does a lot of stupid things. All giant corporations are the same in that regard. But it takes special effort to do something truly terrible, and that's where Google's Project Nimbus comes in.
Project Nimbus is a joint effort of Google, Amazon, and the Israeli government that provides futuristic surveillance capabilities through advanced machine learning models. Like it or not, that sort of thing is part of the future of state security, and it's no more terrible than many other similar projects. Many of us even use similar tech in and around our homes.
Where things get dark and ugly is what Google itself says Project Nimbus can do with the company's technology:
Nimbus training documents emphasize “the ‘faces, facial landmarks, emotions’-detection capabilities of Google’s Cloud Vision API,” and in one Nimbus training webinar, a Google engineer confirmed for an Israeli customer that it would be possible to “process data through Nimbus in order to determine if someone is lying”.
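For reference, the face-detection side of the Cloud Vision API is publicly documented, and what it actually returns are coarse emotion likelihoods, not a verdict on anyone's truthfulness. Here's a minimal sketch using Google's Python client (the image file name is made up for illustration):

```python
# Minimal sketch of the publicly documented Cloud Vision face-detection call.
# Requires the google-cloud-vision package and application credentials;
# "suspect.jpg" is a hypothetical input file.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("suspect.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    # The API returns coarse likelihood buckets (UNKNOWN through VERY_LIKELY),
    # not probabilities, and nothing remotely resembling a "lying" signal.
    print(
        f"joy={face.joy_likelihood.name}, "
        f"sorrow={face.sorrow_likelihood.name}, "
        f"anger={face.anger_likelihood.name}, "
        f"surprise={face.surprise_likelihood.name}, "
        f"confidence={face.detection_confidence:.2f}"
    )
```

Getting from "this face is VERY_LIKELY showing anger" to "this person is lying" is a leap the documented API simply does not make.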
Yes, the company that gave us the awesomely bad YouTube algorithms now wants to sell algorithms that determine whether someone is lying to the police. Let that sink in. This is a line of research that Microsoft has abandoned because of its inherent problems.
Unfortunately, Google disagrees so strongly that it retaliates against people inside the company who speak out against the project.
I'm not going to wade too deeply into the politics at play here, but the entire project was designed so the Israeli government could hide what it is doing. According to The Intercept, Jack Poulson, former head of security for Google Enterprise, says one of the main goals of Project Nimbus is "preventing the German government from requesting data relating to the Israel Defence Forces for the International Criminal Court." (Israel stands accused of committing crimes against humanity against Palestinians, under some interpretations of international law.)
Really, though, it doesn't matter how you feel about the conflict between Israel and Palestine. There is no good reason to provide this sort of technology to any government at any scale. Doing so makes Google evil.
Nimbus' supposed capabilities are scary even if Google's Cloud Vision API were 100% correct, 100% of the time. Imagine police body cameras that use AI to help decide whether or not to charge and arrest you. Everything becomes terrifying, though, when you consider how often machine learning vision systems get things wrong.
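And "getting things wrong" is worse than it sounds, because of simple base-rate math. Here's a back-of-the-envelope Bayes calculation, using made-up numbers purely for illustration, showing that when the behavior being hunted is rare, even a mostly accurate classifier flags far more innocent people than guilty ones:

```python
# Hypothetical numbers for illustration only: even a classifier that is
# right 95% of the time produces mostly false accusations when the thing
# it is looking for is rare.
true_positive_rate = 0.95   # hypothetical sensitivity
false_positive_rate = 0.05  # hypothetical false-alarm rate
base_rate = 0.001           # hypothetical: 1 in 1,000 people actually lying

# P(lying | flagged) via Bayes' theorem
p_flagged = (true_positive_rate * base_rate
             + false_positive_rate * (1 - base_rate))
p_lying_given_flag = (true_positive_rate * base_rate) / p_flagged

print(f"P(flagged person is actually lying) = {p_lying_given_flag:.1%}")
# -> roughly 1.9%: about 98 of every 100 people the system flags are innocent.
```

Under those assumptions, a "95% accurate" lie detector is wrong about the people it accuses roughly 98 times out of 100.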
This isn't just a Google problem. All one needs to do is look at content moderation on YouTube, Facebook, or Twitter, where 90% of the initial work is done by computers running moderation algorithms that make wrong decisions far too frequently. Project Nimbus would do more than just delete your snarky comment, though; it could cost you your life.
No company has any business providing this sort of AI until the technology has matured to a state where it is never wrong, and that will never happen.
Look, I'm all for finding the bad guys and doing something about them, just like most everyone else. I understand that law enforcement, whether a local police department or the IDF, is a necessary evil. Using AI like this to do it is an unnecessary evil.
I'm not saying Google should just stick to writing the software that powers the phones you love and never branch out. I'm just saying there is a right way and a wrong way to do it. Google chose the wrong way here, and now it's stuck, because the terms of the agreement do not allow Google to stop participating.
You should form your own opinions and never listen to someone on the internet with a soapbox. But you should also be well-informed when a company that was founded on the principle of "Don't Be Evil" comes full circle and becomes the evil it warned us about.
Source: Google's Project Nimbus is the future of evil - Android Central