📰 The Top Three
The NYPD is canceling its robot dog trial in light of fierce criticism
The NYPD has canceled its trial of robot dogs that would have served as mobile cameras in potentially hostile environments. The program drew heavy criticism for contributing to the militarization of the police amid the heated debate over the “Defund the Police” movement.
The Big Picture: A movement popularized by Black Visions Collective during the George Floyd protests, “Defund the Police” advocates reallocating funds from police departments into non-policing forms of public safety and community support.
Between the Lines: Critics say the robot dogs illustrate a concerning wider militarization of police. The French military has also been seen testing robot dogs in combat exercises; while not weaponized, they were used by soldiers for surveillance. Weaponizing Spot would violate Boston Dynamics’ terms of service, but Spot deployments increasingly take place in worrying grey areas and uncharted territory.
Deepfake "Amazon workers" Twitter accounts revive the threat of false information
Ahead of a landmark vote on unionization at a US-based Amazon warehouse, new Twitter accounts with deepfaked profile photos, purporting to belong to Amazon employees, began appearing. Though unlikely to be a coordinated effort, the episode underscores the threat of malicious coordinated campaigns that could undermine people’s trust in real media in the long term.
The Big Picture: In recent years, deepfake photos have been increasingly used in disinformation campaigns in several high-profile cases. In 2019, Facebook identified and took down a network of over 900 pages, groups, and accounts associated with the far-right outlet the Epoch Times. A fake “intelligence” document about Hunter Biden, created in 2020 by a fictitious analyst with a deepfaked profile photo, also circulated among Trump’s circle.
Between The Lines: Experts warn against a false sense of security as the technology continues to evolve. A hyper-awareness of deepfakes could also lead many people to stop trusting real media, which could have equally dire consequences.
What’s Next: Experts recommend that individuals avoid focusing on whether a photo is real and instead zoom out and consider its context – for example, “it’s a journalist who claims to be someone, but has never written anything else you could find online”.
#EthicalAI: Intel's "Bleep" solution ignores rather than addresses systemic problems
Bleep is Intel’s new artificial intelligence app that can detect and redact hate speech in real time. Although it lets users control the amount of hate and abuse they encounter, many criticized Bleep as another example of technological solutionism.
“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction—giving gamers a tool to control their experience.” — Roger Chandler, Vice President and General Manager of Intel Client Product Solutions
Between The Lines: Chandler noted that Intel can’t “fix” racism, among the many historical issues in gaming and the broader culture. “We realize technology isn’t a complete answer, but we believe it can help mitigate the problem while deeper solutions are explored.”
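Intel has not published Bleep’s internals, which reportedly classify voice chat with machine learning. As a toy illustration of the general idea of real-time redaction – not Intel’s actual method – a filter can mask flagged terms in a transcript; the block list below is a purely hypothetical placeholder.

```python
import re

# Hypothetical placeholder block list; a real system like Bleep would rely on
# ML classification of speech across categories, not a static word list.
FLAGGED_TERMS = ["badword", "insult"]

def bleep(text: str, terms=FLAGGED_TERMS) -> str:
    """Replace each flagged term with asterisks of the same length."""
    for term in terms:
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(lambda m: "*" * len(m.group()), text)
    return text
```

For example, `bleep("what an insult")` returns `"what an ******"`, leaving the rest of the message untouched – mirroring the “control, not erase” framing Chandler describes.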
📸 Pictures of the Month
Above: Data poisoning tools, such as Fawkes and LowKey, aim to prevent facial recognition by making changes that are unnoticeable to humans but detrimental to AI models.
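Fawkes and LowKey compute carefully optimized, model-aware perturbations; as a minimal sketch of the underlying idea – a pixel-level change bounded tightly enough to be imperceptible – one can add clipped random noise to an image array. This is only an illustration of the perturbation budget, not either tool’s actual algorithm.

```python
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small bounded perturbation to an 8-bit image array.

    Real cloaking tools (Fawkes, LowKey) optimize the perturbation against
    face-recognition feature extractors; random noise here only illustrates
    the idea of staying within an imperceptible budget (epsilon).
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)
```

With `epsilon=2.0`, no pixel moves by more than two intensity levels out of 255 – invisible to a human viewer, while an optimized (rather than random) perturbation of similar size can shift a model’s extracted features significantly.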
Below: A simple online sweepstakes campaign generated 9 million fake comments submitted to manipulate the Trump-era FCC’s 2017 net neutrality proceeding. The dangers of such simple campaigns are often glossed over in light of grandiose AI-driven disinformation tactics.
Original Article Sources: