The Top Three
Google told its scientists to ‘strike a positive tone’ in AI research
The Big Picture: The boom in artificial intelligence research and development in the tech industry has prompted a clamor for tighter regulation. Google has in recent years increasingly incorporated AI throughout its services, from interpreting complex search queries and refining its recommendation engine to auto-completing sentences in Gmail.
Between The Lines: Studying Google services for bias is among dozens of “sensitive topics” covered by the company’s new policy. Staff researchers, including senior scientist Margaret Mitchell, believe Google is attempting to interfere with critical studies of the technology’s potential harms.
Faster and refined data is necessary as India heads for a bigger global goal
With a vast population of 1.3 billion, a low rate of internet literacy, and limited statistical manpower, India is looking to establish a complete, intuitive electronic survey system alongside a digital database.
The Big Picture: India wants to overtake China as the next manufacturing superpower. But its lag in statistical reporting hinders foreign investment, with job statistics among the most pressing issues.
Between The Lines: India has long depended on manual processes to authenticate its economic data, a practice mired in controversy. The suspension of field surveys during the pandemic has further exposed the inadequacy of these processes.
What’s Next: Supported by the World Bank, the Ministry of Statistics is collating real-time data and leveraging artificial intelligence to analyze and report all of its economic data. “End-to-end automation will enhance the quality, credibility, and timeliness of data,” says Statistics Secretary Kshatrapati Shivaji.
#AIEthics: Image-generation algorithms are perpetuating the same sexist and racist ideas found on the internet
Researchers uncovered that semi-supervised image-generation algorithms embed sexist ideas similar to those found in language-generation models. Trained on vast internet data that may include hate speech and disinformation, these models carry many dangerous implications.
The Big Picture: The enormous datasets compiled to feed these algorithms capture everything on the internet, including an overrepresentation of scantily clad women and other often harmful stereotypes.
Between The Lines: Deborah Raji, an influential Mozilla fellow, says the study is a wake-up call to the computer vision field. “For a long time, a lot of the critique on bias was about the way we label our images,” she says. Now this paper is saying “the actual composition of the dataset is resulting in these biases. We need accountability on how we curate these data sets and collect this information.”
What’s Next: The team calls for greater transparency from developers and closer collaboration with academia. They also encourage fellow researchers to test vision models more rigorously before deployment and to develop more responsible ways of compiling and documenting training datasets.
Pictures of the Month
Above: OpenAI’s newest initiative, DALL-E, based on GPT-3, is trained to generate images from text descriptions. The team aims to use the model to explore societal issues such as potential bias in its outputs and the longer-term ethical challenges this technology implies.
Below: The “National Artificial Intelligence Initiative Act of 2020” is a major milestone for AI legislation in the US. It lays the groundwork for and authorizes major investments in AI, and endorses a whole-of-government approach to leadership in AI research and development.
Original Article Sources: