
Companies are paying big bucks to cloud computing providers like Amazon, Microsoft, and Google so they can avoid running their own digital infrastructure. Google's cloud division will soon invite customers to outsource something less tangible than CPUs and hard drives: the rights and wrongs of using artificial intelligence.

The company plans to roll out new AI ethics services before the end of the year. Initially, Google will offer others advice on tasks like identifying racial bias in computer vision systems or developing ethical guidelines for AI projects. In the longer term, the company may offer to audit customers' AI systems for ethical integrity and charge fees for ethics advice.

Google's new offerings will test whether a lucrative but increasingly distrusted industry can boost its business by selling ethical guidance. The company is a distant third in the cloud computing market, behind Amazon and Microsoft, and positions its AI expertise as a competitive advantage. If successful, the new initiative could spawn a new catchphrase: EaaS, for ethics as a service, modeled on cloud-industry coinages such as SaaS, for software as a service.

Google learned some lessons about AI ethics the hard way, through its own controversies. In 2015, Google apologized and blocked its Photos app from detecting gorillas after a user reported that the service had applied that label to photos of him with a Black friend. In 2018, thousands of Google employees protested a Pentagon contract called Maven that used the company's technology to analyze surveillance images from drones.

Soon after, the company published a set of ethical principles for the use of its AI technology, stating it would no longer compete for similar projects but not ruling out all defense work. That same year, Google admitted to testing a version of its search engine designed to comply with China's authoritarian censorship, and said it would not offer facial recognition technology, as rivals Microsoft and Amazon had for years, because of the risk of abuse.

Google's struggles are part of a broader reckoning among technologists that AI can harm as well as help the world. Facial recognition systems, for example, are often less accurate for Black people, and text software can reinforce stereotypes. At the same time, regulators, lawmakers, and citizens have grown more suspicious of technology's influence on society.

A deeper investigation

In response, some companies have invested in research and review processes designed to keep the technology from going off the rails. Microsoft and Google say they now review both new AI products and potential deals for ethical concerns, and have turned away business as a result.

Tracy Frey, who works on AI strategy in Google's cloud division, says the same trends have led customers who rely on Google for powerful AI to seek ethical help as well. "The world of technology is shifting to saying not 'I'll build it just because I can' but 'Should I?'" she says.

Google has already been helping some customers, such as global banking giant HSBC, think through such questions. Now it plans to launch formal AI ethics services before the end of the year. According to Frey, the first will likely include training on topics such as identifying ethical issues in AI systems, similar to the training offered to Google employees, and developing and implementing AI ethics guidelines. Later, Google may offer consulting services to review or audit customers' AI projects, for example checking whether a lending algorithm is biased against people from certain demographic groups. Google has not yet decided whether it will charge for any of these services.

Google, Facebook, and Microsoft have all recently released free technical tools that developers can use to check their own AI systems for reliability and fairness. Last year, IBM introduced a tool with a "Check fairness" button that examines whether a system's output shows a potentially problematic correlation with attributes such as ethnicity or zip code.
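At their core, checks like these boil down to fairly simple statistics. The sketch below is purely illustrative, not the actual API of IBM's tool or of Google's planned services: it computes the kind of disparity such a "check fairness" button looks for, namely whether a model's rate of positive decisions (here, hypothetical loan approvals) differs across demographic groups.

```python
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, prediction_col: str, group_col: str) -> pd.Series:
    """Rate of positive decisions (e.g., loan approvals) for each group."""
    return df.groupby(group_col)[prediction_col].mean()

# Hypothetical loan decisions: 1 = approved, 0 = denied. The "group" column
# stands in for a sensitive attribute such as zip code or ethnicity.
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
})

rates = positive_rate_by_group(decisions, "approved", "group")
print(rates)
print("Approval-rate gap between groups:", rates.max() - rates.min())
```

A large gap between groups is only one rough signal of possible bias; real audits of the kind described in the article also weigh base rates, error types, and the context in which the system is deployed.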

Going a step further to help customers define their ethical boundaries for AI could raise ethical questions of its own. "It is very important to us that we don't sound like the moral police," says Frey. Her team is working on ways to offer customers ethics advice without dictating their decisions or taking responsibility for them.

Another challenge is that a company looking to make money off AI may not be the best moral mentor when it comes to reining the technology in, says Brian Green, director of technology ethics at Santa Clara University's Markkula Center for Applied Ethics. "You are legally compelled to make money, and while ethics can be compatible with that, it can also keep some decisions from going in the most ethical direction," he says.

According to Frey, Google and its customers are all motivated to use AI ethically, because the technology has to work well to be widely accepted. "Successful AI depends on you doing it carefully and thoughtfully," she says. She points out that IBM recently withdrew its facial recognition service amid nationwide protests against police brutality toward Black people. The move appeared to be prompted in part by research such as the Gender Shades project, which showed that facial analysis algorithms were less accurate on darker skin tones. Microsoft and Amazon quickly followed, saying they would suspend sales of their own facial recognition services to law enforcement until new regulations were in place.

Ultimately, signing up customers for AI ethics services may depend on convincing companies that turned to Google to move faster into the future that they should, in fact, move more slowly.

Late last year, Google launched a facial recognition service limited to celebrities, aimed primarily at companies that need to search or index large collections of entertainment video. Celebrities can opt out, and Google vets which customers can use the technology.

The ethics review and design process took 18 months, including consultations with civil rights leaders and fixing a problem with the training data that caused reduced accuracy for some Black male actors. By the time Google launched the service, Amazon's celebrity recognition service, which also allows celebrities to opt out, had been broadly available for more than two years.

This story originally appeared on wired.com.
