Rare Elements

💠 Chrysos #02: Your Monthly AI Digest

Chrysos is Rare Elements’ monthly AI newsletter, covering the latest in artificial intelligence controversies, developments, and opportunities across academics, regulators, and companies. Subscribe to stay on top of the latest insights, curated by insiders.

The Top Three

Image Credit: Alex Castro / The Verge

Google Fires Another AI Ethics Lead

Google has fired Margaret Mitchell, co-lead of its ethical AI team, following the controversial dismissal of her fellow co-lead Timnit Gebru, igniting an uproar over Google’s alleged censorship of research critical of its products.

The Big Picture: After the recent dismissal of Timnit Gebru, Google’s Ethics in Artificial Intelligence research unit was already under intense scrutiny. The high-profile firing of Margaret Mitchell, the unit’s second co-lead, has sparked further external debate and internal discontent over Google’s handling of sensitive AI issues.

Between The Lines: Although Google has announced plans to clarify its process for approving research papers written by its employees for outside publication, the issue Gebru raised, namely AI’s use in Google’s core search and advertising products, remains unaddressed.

Image Credit: Bayerischer Rundfunk

Questioning the Use of Artificial Intelligence in Recruitment

An investigation by Bayerischer Rundfunk (German public broadcasting) into AI applications in recruitment deepens skepticism about the objectivity of facial analysis in such software. Researchers found that changes in lighting, backgrounds, and outfits can cause significant fluctuations in results, reproducing the very bias the software claims to eliminate.

The Big Picture: Almost two-thirds (63%) of talent acquisition professionals say AI has changed the way recruiting is done at their company, according to a Korn Ferry survey. While employers increasingly see AI as a tool to streamline processes, a significant degree of skepticism still exists concerning its application in screening and interview assessments.

Between The Lines: Employers are actively seeking technology to drive efficiency in the hiring process, which remains largely subjective and manual. “The criticism about doing facial analysis through AI is that there is not a lot of science that supports the idea that your face is some sort of indicator of how you will perform a job,” says Julia Angwin, head of the investigative US newsroom The Markup.

What’s Next: Experts are calling for regulation that goes beyond an “algorithm inspection authority” and examines the social process as a whole, reflecting the “human-centered” nature of artificial intelligence.

Image Credit: Aïda Amer/Axios

#EthicalAI: How Memes Became a Bigger Vehicle for Disinformation Than Deepfakes

Online misinformation around COVID vaccinations has been amplified by memes, and existing AI cannot reliably detect them: it struggles to understand cultural context and how meaning changes when text is layered over images.

The Big Picture: While deepfakes make headlines, memes have proven more effective at spreading misinformation because they are easier to create and harder to moderate with current AI models. Researchers from media intelligence firm Zignal Labs found that most misinformation on social platforms comes in the form of media that manipulates context, such as memes, not deepfakes.

Between The Lines: AI’s inability to combat the issue is partly an image-recognition problem rooted in the complexity of a meme’s composition. AI also struggles to understand cultural context, such as deciphering satire. Progress would require fundamental advances in artificial intelligence that draw more on how the human brain evaluates cultural context.

Pictures of the Month

Image Credit: Bryce Durbin, TechCrunch

Above: The Way of the Future church, founded by controversial former Google engineer Anthony Levandowski, has been officially dissolved. Its stated mission was “Humans United in support of AI, committed to a peaceful transition to the precipice of consciousness,” and its belief system included the inevitable creation of “superintelligence.”

Below: MAIA, an AI chess program, focuses on predicting human moves, including human mistakes, rather than mastering the art of beating its opponent. Its creators consider this a critical step toward AI that better understands human fallibility and can therefore be a more effective teacher, assistant, and companion to humans.

Image Credit: Maia Chess

Leave with a smile 🙂

Helga Stentzel: a Russia-based visual artist who creates optical illusions from everyday objects: https://www.instagram.com/helga.stentzel/

