Google announces new AI code of ethics

Web Log: Principles focus on socially beneficial AI and built-in privacy measures

Google chief executive Sundar Pichai has announced a set of principles that will serve as the company's code of ethics when developing future AI.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” said Pichai in an official Google blog post.

The seven principles state that AI developed by Google should be socially beneficial and accountable to people, and should be built and tested for safety and for the absence of algorithmic bias.

Built-in privacy

The principles also call for privacy measures to be built in and for high standards of scientific excellence to be upheld. Finally, they commit Google to ensuring that the resulting AI technologies are not used for purposes that fall outside these principles, ie Google won't be selling off its AI or APIs to companies that might use them for unsavoury purposes.

"With some caveats, and recognising that the proof will be in their application by Google, we recommend that other tech companies consider adopting similar guidelines for their AI work," commented Peter Eckersley, writing for the Electronic Frontier Foundation.