Google instructs scientists to strike a positive tone in artificial intelligence research – 12/24/2020

Google took steps this year to tighten scrutiny of the research its scientists publish and to review papers on topics it deems sensitive. In at least three cases, it asked authors to refrain from casting the company’s technology in a negative light, according to internal documents and interviews with researchers involved in the work.

Google is asking researchers to consult its legal and public relations teams before pursuing topics such as face and sentiment analysis and the categorization of race, gender or political affiliation, according to internal company documents describing the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations in which seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” states one page of the document, which is addressed to the corporate research staff.

Reuters could not determine the date of the document, but three current employees said the policy took effect in June. Google representatives did not comment on the matter.

According to eight current and former Google employees, the “sensitive topics” procedure adds a layer of scrutiny to the company’s standard review of research papers, which screens for problems such as the possible disclosure of trade secrets.

In some projects, Google officials intervened even at late stages of publication.

A senior Google executive who reviewed a study on content recommendation technology shortly before publication urged the authors to “take great care to strike a positive tone,” according to internal correspondence seen by Reuters.

The manager added, “This doesn’t mean we have to hide from the real challenges” the technology poses.

Subsequent messages from a researcher to the company’s reviewers show that the study’s authors “removed all references to Google products.” A draft seen by Reuters had mentioned Google-owned YouTube.

Four of the company’s scientists, including researcher Margaret Mitchell, believe Google is starting to interfere with important studies of the technology’s potential harms.

“If we research the appropriate subject given our expertise and are not permitted to publish it on grounds that are not in line with high-quality peer review, then we run into a serious problem of censorship,” Mitchell said.

Google claims on its public website that its scientists enjoy “considerable” freedom.

Disputes between Google and some of its employees broke into public view this month after the abrupt exit of computer scientist Timnit Gebru, who led a 12-person team with Mitchell focused on the ethics of artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish a study on how AI tools that mimic language can disadvantage marginalized populations. Google said it accepted the scientist’s resignation. It was not possible to determine whether Gebru’s research went through the “sensitive topics” review.

Jeff Dean, Google’s senior vice president, said this month that Gebru’s paper dwelled on potential harms of the technology without discussing the company’s efforts to address them.

Dean added that Google supports scholarship on AI ethics and is “actively working to improve our paper review processes, because we know that too many checks and balances can become cumbersome.”

SENSITIVE TOPICS

The technology industry’s growing investment in AI research and development has led government agencies in the US and other countries to propose rules for its use. Some cite scientific studies showing that facial analysis software and other AI tools can perpetuate bias or erode privacy.

Over the past few years, Google has built AI into its services, using the technology to interpret complex search queries and to decide video recommendations on YouTube. The company’s scientists published more than 200 papers in the last year on developing AI responsibly, out of a total of more than 1,000 projects, Dean said.

Examining Google’s services for biased behavior is among the “sensitive topics” covered by the company’s new policy, according to the internal document seen by Reuters.

It appears alongside dozens of other sensitive topics, including the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications and systems that recommend or personalize web content.

The Google paper whose authors were instructed to strike a positive tone discusses AI recommendation technology, which services like YouTube use to personalize content for users. A draft seen by Reuters raised concerns that the technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” and lead to “political polarization.”

The published version instead states that the systems can promote “accurate information, fairness, and diversity of content.” The published study, titled “What are you optimizing for? Aligning Recommender Systems with Human Values,” omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes after a request from the company’s reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and fix inaccurate translations.”

For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence seen by Reuters.
