Though those are exactly the people one would not want in such a position.
More Facebook c***ery
https://twitter.com/jbouie/status/1187096190162231301?s=21
Still something nasty about how he says "Congresswoman".
So this is the kind of thing that Clegg has been up to:
https://twitter.com/JuddLegum/status/1285207032900059136
This is good, and an utter indictment of Zuckerberg:
https://twitter.com/_karenhao/status/1370006942807121920
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley, who collaborates with Facebook to understand image- and video-based misinformation on the platform.
Utterly unsurprising, though, if you've had any sort of contact with the kind of people who run these companies (the kind of people who would name a company Palantir, after the scrying stones used to turn men to evil, seemingly without any sense of irony).
Also utterly unsurprising, but nonetheless instructive (click through to read the whole thread):
https://twitter.com/glichfield/status/1370735851882299393
It really is a shame that this series is behind the Journal's paywall, because it needs to be seen by as many users of the service as possible.
A summary of the series so far, from the Washington Post:
Facebook knew that teen girls on Instagram reported in large numbers that the app was hurting their body image and mental health. It knew that its content moderation systems suffered from an indefensible double standard in which celebrities were treated far differently than the average user. It knew that a 2018 change to its news feed software, intended to promote “meaningful interactions,” ended up promoting outrageous and divisive political content.

Facebook knew all of those things because they were findings from its own internal research teams. But it didn’t tell anyone. In some cases, its executives even made public statements at odds with the findings.

This week, each of those revelations was the subject of a story in the Wall Street Journal, part of an ongoing investigative series that it’s calling the Facebook Files. The reporting is based on internal Facebook documents, some of which were turned over to the Journal by a person seeking federal whistleblower protection, and interviews with current and former employees, most of whom have remained anonymous.

Last edited by ursus arctos; 17-09-2021, 03:33.