Universe Price Tiers

In Universe Pro®™ the laws of physics remain unchanged under time reversal, to maintain backward compatibility.
3 public comments

silberbaer · 30 days ago · New Baltimore, MI
I would gladly pay $49.95/month for optional aging and death, and bad things not happening.

cjheinz · 30 days ago
Nice!

fxer · 30 days ago · Bend, Oregon
How do I get Enterprise pricing

gordol · 30 days ago
That'll be NCC1701 dollars.

Horrifying: Google Flags Parents As Child Sex Abusers After They Sent Their Doctors Requested Photos


Over the last few years, there has been a lot of attention paid to the issue of child sexual abuse material (CSAM) online. It is a huge and serious problem. And has been for a while. If you talk to trust and safety experts who work in the field, the stories they tell are horrifying and scary. Trying to stop the production of such material (i.e., literal child abuse) is a worthy and important goal. Trying to stop the flow of such material is similarly worthy.

The problem, though, is that as with so many things that have a content moderation component, the impossibility theorem rears its head. And nothing demonstrates that quite as starkly as this stunning new piece by Kashmir Hill in the New York Times, discussing how Google has been flagging people as potential criminals after they shared photos of their children in response to requests from medical professionals trying to deal with medical conditions the children have.

There is much worth commenting on in the piece, but before we get into the details, it’s important to give some broader political context. As you probably know if you read this site at all, across the political spectrum, there has been tremendous pressure over the last few years to pass laws that “force” websites to “do something” about CSAM. Again, CSAM is a massive and serious problem, but, as we’ve discussed, the law (namely 18 USC 2258A) already requires websites to report any CSAM content they find, and they can face stiff penalties for failing to do so.

Indeed, it’s quite likely that much of the current concern about CSAM is due to there finally being some level of recognition of how widespread it is thanks to the required reporting by tech platforms under the law. That is, because most websites take this issue so seriously, and carefully follow the law, we now know how widespread and pervasive the problem is.

But, rather than trying to tackle the underlying problem, politicians often want to do the politician thing and just blame the tech companies for doing the required reporting. It’s very much shooting the messenger: the reporting by tech companies shines a light on the underlying societal failures that led to this, and that very visibility gets used as an excuse to blame the tech companies rather than the societal failings.

It’s easier to blame the tech companies — most of whom have bent over backwards to work with law enforcement and to build technology to help respond to CSAM — than to come up with an actual plan for dealing with the underlying issues. And so almost all of the legal proposals we’ve seen are really about targeting tech companies… and, in the process, removing underlying rights. In the US, we’ve seen the EARN IT Act, which completely misdiagnoses the problem, and would actually make it that much harder for law enforcement to track down abusers. EARN IT attempts to blame tech companies for law enforcement’s unwillingness to go after CSAM producers and distributors.

Meanwhile, over in the EU, there’s an apparently serious proposal to effectively outlaw encryption and require client-side scanning of all content in an attempt to battle CSAM. Even as experts have pointed out how this makes everyone less safe, and there has been pushback on the proposal, politicians are still supporting it by basically just repeating “we must protect the children” without seriously responding to the many ways in which these bills will make children less safe.

Separately, it’s important to understand some of the technology behind hunting down and reporting CSAM. The most famous of these tools is PhotoDNA, initially developed by Microsoft and used by many of the big platforms to share hashes of known CSAM, to make sure that material that has already been discovered isn’t spread more widely. There are some other similar tools, but for fairly obvious reasons these tools carry risks: there are concerns both about false positives and about who is allowed to have access to the tools (even though they share hashes, not actual images, the possibility that such tools could be abused is a real concern). A few companies, including Google, have developed more AI-based tools to try to identify CSAM, and Apple (somewhat infamously) has been working on its own client-side scanning tools along with cloud-based scanning. But client-side scanning has significant limits, and there is real fear that it will be abused.
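To make the hash-matching idea concrete, here’s a minimal conceptual sketch. To be clear: PhotoDNA’s actual algorithm is proprietary and far more sophisticated, and the fingerprints, function names, and threshold below are invented for illustration. The core idea is just that each image is reduced to a compact perceptual fingerprint, and a new image is flagged if its fingerprint is within some distance of a known-bad one:

```python
# Illustrative sketch only. PhotoDNA's real algorithm is proprietary and far
# more robust; these fingerprints and the threshold are invented for this
# example. The matching idea: compare compact perceptual fingerprints by
# Hamming distance against a database of hashes of known material.

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def is_match(candidate: int, known_hashes: set, threshold: int = 5) -> bool:
    """Flag the candidate if it is 'close enough' to any known fingerprint.

    The threshold is the crux: a tight threshold misses altered copies,
    while a loose one starts flagging unrelated (innocent) images.
    """
    return any(hamming_distance(candidate, h) <= threshold for h in known_hashes)

# Hypothetical fingerprints, for demonstration only.
known_db = {0xDEADBEEFCAFEF00D}
slightly_altered = 0xDEADBEEFCAFEF00F  # differs by one bit: still matches
unrelated_image  = 0x0123456789ABCDEF  # far away in hash space: no match

print(is_match(slightly_altered, known_db))  # True
print(is_match(unrelated_image, known_db))   # False
```

Note that this kind of hash matching can only catch known, previously identified images. The AI classifiers mentioned above try to flag never-before-seen material — which is precisely where the false-positive risk discussed below comes in.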

Of course, spy agencies also love the idea of everyone being forced to do client-side scanning in response to CSAM, because they know that basically creates a backdoor to spy on everyone’s devices.

Whenever people talk about this and highlight the potential for false positives, they’re often brushed off by supporters of these scanning tools, saying that the risk is minimal. And, until now, there weren’t many good examples of false positives beyond things like Facebook pulling down iconic photographs, claiming they were CSAM.
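It’s worth spelling out why “the risk is minimal” is cold comfort at the scale these systems operate. The numbers below are entirely made up for illustration; the base-rate arithmetic is the point:

```python
# Entirely hypothetical numbers, purely to illustrate base-rate arithmetic:
# a "minimal" false-positive rate, multiplied by billions of scans, still
# produces a large absolute number of innocent people flagged.

photos_scanned = 2_000_000_000     # hypothetical: photos scanned in a year
false_positive_rate = 1 / 100_000  # hypothetical: 1 benign photo in 100,000 misflagged

wrongly_flagged = photos_scanned * false_positive_rate
print(f"{wrongly_flagged:,.0f} innocent photos flagged per year")  # 20,000
```

And, as the story below shows, each of those false flags can mean an account shutdown and a police report.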

However, this article (yes, finally we’re talking about the article) by Hill gives us some very real world examples of how aggressive scanning for CSAM can not just go wrong, but can potentially destroy lives as well. In horrifying ways.

It describes how a father noticed his son’s penis was swollen and apparently painful to the child. An advice nurse at their healthcare provider suggested they take photos to send to the doctor, so the doctor could review them in advance of a telehealth appointment. The father took the photos and texted them to his wife so she could share with the doctor… and that set off a huge mess.

Texting the photos counted, in Google’s terms, as taking “affirmative action,” which caused Google to scan the material, and its AI-based detector flagged the image as potential CSAM. You can understand why. But the context was certainly missing. And it didn’t much matter to Google — which shut down the guy’s entire Google account (including his Google Fi phone service) and reported him to local law enforcement.

The guy, just named “Mark” in the story, appealed, but Google refused to reinstate his account. Much later, Mark found out about the police investigation this way:

In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back.

“You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.”

In the article, Hill highlights at least one other example of nearly the same thing happening, and also talks to (former podcast guest) Jon Callas about how it’s likely that this happens way more than we realize, but the victims of it probably aren’t willing to speak about it, because then their names are associated with CSAM.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases “canaries in this particular coal mine.”

“There could be tens, hundreds, thousands more of these,” he said.

Given the toxic nature of the accusations, Mr. Callas speculated that most people wrongfully flagged would not publicize what had happened.

There’s so much in this story that is horrifying, but it’s also a very useful illustration of the trade-offs and risks with these tools, and of the process for correcting errors. It’s good that these companies are making proactive efforts to stop the creation and sharing of CSAM. The article already shows how these companies go above and beyond what the law actually requires (contrary to the claims of politicians and some in the media — and, unfortunately, many working for public interest groups trying to protect children).

However, it also shows the very real risks of false positives, the very serious problems they can create for people, and how few people are willing to discuss the issue publicly, for fear of what merely highlighting it could do to their own lives and reputations.

If politicians (pushed by many in the media) continue to advocate for regulations mandating even more aggressive behavior from these companies, including increasing liability for missing any content, it is inevitable that we will have many more such false positives — and the impact will be that much bigger.

There are real trade-offs here, and any serious discussion of how to deal with them should recognize that. Unfortunately, most of the discussions are entirely one-sided, and refuse to even acknowledge the issue of false positives and the concerns about how such aggressive scanning can impact people’s privacy.

And, of course, since the media (with the exception of this article!) and political narrative are entirely focused on “but think of the children!” the companies are bending even further backwards to appease them. Indeed, Google’s response to the story of Mark seems ridiculous as you read the article. Even after the police clear him of any wrongdoing, it refuses to give him back his account.

But that response is totally rational when you look at the typical media coverage of these stories. There have been so many stories — often misleading ones — accusing Google, Facebook and other big tech companies of not doing enough to fight CSAM. So any mistakes in that direction are used to completely trash the companies, saying that they’re “turning a blind eye” to abuse or even “deliberately profiting” off of CSAM. In such a media environment, companies like Google aren’t even going to risk missing something, and its default is going to be to shut down the guy’s account. Because the people at the company know they’d get destroyed publicly if it turns out he was involved in CSAM.

As with all of this stuff, there are no easy answers here. Stopping CSAM is an important and noble goal, but we need to figure out the best way to actually do that, and deputizing private corporations to magically find and stop it, with serious risk of liability for mistakes (in one direction), seems to have pretty significant costs as well. And, on top of that, it distracts from trying to solve the underlying issues, including why law enforcement isn’t actually doing enough to stop the actual production and distribution of actual CSAM.


Elon Musk’s Twitter Business Model Idea: Ignore Free Speech Rights And Try To Charge Media To Quote Tweets


As everyone’s trying to read the tea leaves of what an Elon Musk-owned Twitter will actually look like, it’s been reported that in his presentation to Wall St. banks to get the financing he needs to complete the deal, he suggested the deal would be profitable because of some of his new business model ideas. Now, obviously, these are entirely speculative, and my guess is that he hasn’t thought through any of this that deeply (just like he hasn’t thought through content moderation’s challenges, even though he’s sure he can fix it). But, at least some of the banks are buying into the deal based on Musk promising a stronger Twitter business, so we need to pay attention to his ideas. Like this one, that, um, would be effectively impossible under the 1st Amendment.

Musk told the banks he also plans to develop features to grow business revenue, including new ways to make money out of tweets that contain important information or go viral, the sources said.

Ideas he brought up included charging a fee when a third-party website wants to quote or embed a tweet from verified individuals or organizations.

So, like, I don’t want to throw any cold water on the business model ideas of the guy people keep telling me is the most brilliant innovative business mind of our generation, but… it… um… seems at least a little ironic that he’s spent the past month screaming about “free speech” and enabling whatever the law allows… and now he wants to charge companies for quoting a tweet.

Yeah, so, thanks to the 1st Amendment (that he claims to support so much) he’s unlikely to be able to do that successfully. Quoting a tweet (we’ll deal with embedding shortly) in almost every damn case is going to be fair use under copyright law. And, a key reason we have fair use in copyright law… is that the 1st Amendment requires it, or else copyright law would stifle the very free speech that Musk claims to love so much.

In Eldred v. Ashcroft, the important (if wrongly decided) case on the Constitutionality of copyright term extension, Justice Ruth Bader Ginsburg repeatedly talked about how fair use was a “safeguard” in copyright law to make sure that copyright law could exist under the 1st Amendment, even as it could be used to suppress speech. The crux of the argument is that, because there’s fair use that allows people to do things like quote a 280-character outburst, there’s no serious concern about copyright silencing speech. This point is often raised in the context of calling fair use a necessary safety valve on copyright to make it compatible with the 1st Amendment.

Given that Musk has claimed (incorrectly, but really, whatever) that free speech laws represent “the will of the people,” and his apparent big business model innovation is to demand that media organizations pay to quote tweets, which violates our fair use rights, which are necessary under the 1st Amendment… well, it appears that his biggest business model idea so far is to try to ignore the 1st Amendment rights of people wishing to quote tweets.

Good luck with that.

Also, under the current terms of service on Twitter, users hold any copyright interest in their own tweets. Twitter holds a license to those tweets, but that license wouldn’t allow Twitter as an entity to file copyright claims against any media organization that was quoting tweets in the first place. The only way it could do that is if it changed the terms entirely and required all its users to actually assign their copyrights to Twitter and, well, good luck with that as well.

Now, of course, the report claimed that the fee could be charged if someone “wants to quote or embed a tweet from verified individuals,” and the company certainly could set up some convoluted system to try to make people pay to embed, but that would (a) be fucking annoying for most everyone else and (b) would just lead to everyone screenshotting, instead of embedding, which is a lot less useful in the long run for Twitter, since it would drive fewer people to interact with Twitter. And, again, fair use and (I feel I must remind you) the 1st Amendment would protect all that screenshotting and quoting. Free speech, ftw!

And that’s not even getting into the idea that Twitter might now be effectively selling its popular tweets to websites. I mean, if this plan were to go forward (and somehow got over all the other hurdles), I’d imagine the company would literally need to cut its users in on the deal and set up some sort of “every time the NY Times embeds your tweet, they pay us $5 and we pass $3 of it along to you” or some sort of nonsense like that. And, sure, maybe it’ll excite some Twitter users that they could get paid for their tweets (again, assuming any third party website out there ignores its fair use/1st Amendment rights to simply quote or screenshot and chooses to pay instead).

But, this would also likely create a whole world of complications. First, Twitter would need to set up an entirely new kind of operation to manage all of this. Musk also promised in these documents that he’s planning on reducing headcount at Twitter, but he’d need to staff up at least on managing the payments and payouts to tweeters. But, again, this is Elon Musk, so I’m guessing the system will work on the blockchain in Dogecoin and payments will flow automagically. And sure, maybe you could see how that could actually kinda work, if you’re into that sort of thing?

But, now, we get into the next issue: when you add money (even cute dog-meme based money) to a platform where people normally did shit for free, the incentives change. Oh, boy do they ever change. Suddenly you’re going to get scammers galore, looking to abuse the system, and get filthy stinkin’ Doge rich. I guess maybe this needs to be expressed in meme form?

And Elon should understand this better than anyone, given how frequently crypto scammers follow him around and try to scam his fans. Introducing actual money, even of the meme variety, into the mix is going to lead to a lot of scam behavior. And it would probably be helpful if the company had a… what’s it called… oh yeah, trust & safety staff to help think these issues through.

I’m never going to knock anyone for experimenting with creative business model ideas. And I’m all for Twitter trying out non-advertising based business models, as Elon has suggested is part of his focus. That actually seems like a good idea. But, it’s kinda weird when this whole deal is premised on the idea of bringing more “free speech” to the site… and his first business model suggestion when trying to convince banks to back him is to ignore the free speech rights of others and try to force them to pay up.


Reality Check: Twitter Actually Was Already Doing Most Of The Things Musk Claims He Wants The Company To Do (But Better)


So there has been lots of talk about Elon Musk and his takeover of Twitter. I’ve written multiple things about how little he understands about free speech and how little he understands content moderation. I’ve also written (with giant caveats) about ways in which his takeover of Twitter might improve some things. Throughout this discussion, in the comments here, and on Twitter, a lot of people have accused me of interpreting Musk’s statements in bad faith. In particular, people get annoyed when I point out that the two biggest points he’s made — that (1) Twitter should allow all “legal” speech, and (2) getting rid of spambots is his number one priority — contradict each other, because spambots are protected speech. People like to argue that’s not true, but they’re wrong, and anyone arguing that expression by bots is not protected doesn’t understand the 1st Amendment at all.

Either way, I am always open to rethinking my position, and if people are claiming that I’m interpreting Musk in bad faith, I can try to revisit his statements in a more forgiving manner. Let’s, as the saying goes, take him figuratively, rather than literally.

But… here’s the thing. If you interpret Musk’s statements in the best possible light, it’s difficult to see how Twitter is not already doing pretty much everything he wants it to do. Now, I can already hear the angry keyboard mashing of people who are very, very sure that’s not true, and are very, very sure that Twitter is an evil company “censoring political views” and “manipulating elections” and whatever else the conspiracy theory of the day is. But it’s funny that the same people who insist that I’m not being fair to Musk, refuse to offer the same courtesy or willingness to understand why and how Twitter actually operates.

So, let’s look at Musk’s actual suggestions, phrased in the best possible light, and look at what Twitter has actually done and is doing… and again, you’ll realize that Twitter is (by far!) the social media service that has gone the farthest to make what he wants real, and in the few areas that he seems to think the company has fallen short, the reality is that it has had to balance difficult competing interests, and realized that its approach is the most likely to get to the larger goal of providing a platform for global conversation.

Musk has repeatedly said that he sees free speech on Twitter as an important part of democracy. So do many people at Twitter. They were the ones who framed themselves as the “free speech wing of the free speech party.” But as any actual expert in free speech will tell you, free speech does not mean that private websites should allow all free speech. And I know people — including Musk — will argue against this point, but it’s just fundamentally wrong. We’ve gone over this over and over again. The internet itself (which is not owned by any entity) is the modern public square, and anyone is free to set up shop on it. But that does not mean that they get to commandeer private property for their own screaming fits.

If it did, you would not have free speech, because you would (1) just get inundated with spam and garbage, and (2) only the loudest, most obnoxious voices would ever be heard. The team at Twitter actually understands the tradeoffs here, and while they don’t always get it “right” (in part because there is no “right”), Twitter’s team is so far above and beyond any other social media website, it’s just bizarre that the public narrative insists the opposite.

Twitter has long viewed its mission as enabling more free speech and more conversation in the world, and has taken steps to actually make that possible. Opening up the platform to people who violate the rules, abuse and harass others, and generally make a mess of things, does not aid free speech or “democracy.” You can disagree with where Twitter draws the lines (and clearly, Musk does), but Musk has shown little to no understanding of why and how the line drawing is done in the first place, and if he moves in the direction he claims, will quickly realize that Twitter’s lines are drawn much much much more permissively than nearly any other website (including, for what it’s worth, Trump’s Truth Social), and that there are actually clear reasons for why it drew the lines it did — and those lines are often to enable more ability for there to be communication and conversation on the platform.

Twitter has long allowed all sorts of dissenting viewpoints and arguments on its platform. Indeed, there are many activists who insist that the problem is that Twitter doesn’t do enough moderation. Instead, Twitter has put in place some pretty clear rules, and it tries to only take down accounts that really break those rules. It doesn’t always get that right. It misses some accounts, and takes down others it shouldn’t. But on the whole, it’s way more permissive than most other sites, which are much quicker to ban users.

Second, even as it contradicts his first point, Musk has claimed that he wants to get rid of spambots and scambots. This is a good goal. And, again, it’s also one that Twitter has been working on for ages. And it has really good, really smart people working on the issue (some of the best out there). And, in part because the company is so open and so permissive (again much more so than other platforms), this is an extraordinarily difficult problem to solve, especially at the scale of Twitter. People assume, falsely, that Twitter doesn’t care about spammers, but part of the issue is that if you want to have an “open” platform for “free speech,” that means that people will take advantage of that. Musk is going to find that Twitter already has some of the best people working on this issue — that is if they don’t rush out the door (or get pushed out by him).

Third, Musk has talked about redoing the verification system. He’s said that Twitter should “authenticate all real humans.” This appears to be at least part of his method for dealing with the bots and spam he’d like to eradicate. For years we’ve discussed the dangers of a “real names” policy that requires people to post under their own names, including that studies have shown trolling is often worse under real names. It’s especially dangerous for marginalized people, and those who have stalkers, or are otherwise at risk.

But, some people respond, it’s unfair to assume he means a real names policy. Perhaps he just means that Twitter will keep a secret database of your verified details, and you can still be pseudonymous on the site. Except, as experts will tell you, that still is massively problematic, especially for marginalized groups, at-risk individuals, and those in countries with authoritarian regimes. Because now that database becomes a massive target. You get extremely questionable subpoenas, seeking to unmask users all the time. Or, you get the government demanding you cough up info on your users. Or you get hackers trying to get into the database. Or, you get authoritarian countries getting employees into these companies to seek out info on critics of the regime.

All of these things have happened with Twitter. And Twitter was in a position to push back. But it sure helped that in many of those cases Twitter didn’t actually have their “verification,” but much less information, like an IP address and an email.

Or, to take it another level, perhaps Musk really just means that Twitter should offer verification to those who want it. That’s not at all what he said, but it’s how some of his vocal supporters have interpreted this. Well, once again, Twitter has tried that. And it didn’t work. Back in 2016, Twitter opened up verification for everyone, and the company quickly realized it had a huge mess on its hands. First, people gamed the system. Second, even though the program was only meant to verify that the account belonged to the real person it was labeled as, people took it to be an “endorsement” by Twitter, which created a bunch of other headaches. Given that, Twitter paused the program.

It then spent years trying to figure out a way to open up verification to anyone without running into more problems. Indeed, Jack Dorsey made it clear that the plan has always been to “open verification to everyone.” But it turns out that, like dealing with spam and like dealing with content moderation, this is a much harder problem to solve at scale than most people think. It took Twitter almost four years to finally relaunch its verification program in a much more limited fashion, which they hoped would allow the company to test out the new process in a way that would avoid abuse.

But even in that limited fashion the program ran into all sorts of problems. It had to shut down the program a week after launching it, to sort out some of the issues. Then, it had to do so again 3 months later, after finding more problems with the program — specifically that fake accounts were able to game the verification process.

But, again, Twitter has been trying to do exactly what Musk’s fans insist he wants to do. And they’ve been doing so thoughtfully, and recognizing the challenges of actually doing it right, and realizing that it involves a lot of careful thought and tradeoffs.

Next, Musk said that Twitter DMs should have end-to-end encryption, and on this I totally agree. It should. And lots of others have been asking for this as well. Including… people within Twitter who have been working on it. But there are a lot of issues in making that actually work. It’s not something that you can just flip a switch on. There are some technical challenges… but also some social issues as well. All you have to do is look at how long it’s taken Facebook to do the same thing — in part because as soon as the company planned to do this, they were accused of not caring about child safety. Maybe, a privately owned Twitter, controlled by Musk just ignores all that, but there are real challenges here, and it’s not quite as easy as he seems to think. But, once again, it’s not an issue that’s never occurred to Twitter either.

Another recent Musk “idea” was that content moderation should be “politically neutral,” which he (incorrectly) claims “means upsetting the far right and far left equally.” For a guy who’s apparently so brilliant, you’d think he’d understand (1) that there is no fundamental law that says political viewpoints are distributed equally across a bell curve, and (2) the difference between neutrality of inputs and neutrality of outputs. That is, every single study has shown that, if anything, Twitter’s content moderation practices greatly favor the right. It’s just that (right now) the right is much, much, much more prone to sharing misinformation. But if you have an unequal distribution of troublemakers, then a “neutral” policy will lead to unequal outcomes. Musk seems to want equal outcomes, which literally would mean a non-neutral policy that gives much, much, much more leeway to troublemakers on the right. You can’t have equal outcomes with a neutral policy if the distribution is unequal.
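The inputs-versus-outputs point is easy to see with a toy example. The numbers here are invented, but the logic holds regardless: apply one identical rule to two groups that violate it at different rates, and the enforcement counts diverge:

```python
# Toy illustration with invented numbers: a perfectly neutral rule, applied to
# groups that break the rules at different rates, yields unequal outcomes.

groups = {
    # group name: (number of accounts, share posting rule-breaking content)
    "Group A": (1_000_000, 0.02),
    "Group B": (1_000_000, 0.06),
}

for name, (accounts, violation_rate) in groups.items():
    # The same neutral policy for everyone: act on every detected violation.
    actions = accounts * violation_rate
    print(f"{name}: {actions:,.0f} enforcement actions")

# Group A: 20,000 enforcement actions
# Group B: 60,000 enforcement actions
#
# Neutral inputs, unequal outputs. Forcing the outputs to be equal would
# require treating identical violations differently -- a non-neutral policy.
```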

Finally, the only other idea that Musk has publicly talked about is “open sourcing” the algorithm. At first pass, this doesn’t make much sense, because it’s not like you can just put the code on Github and let everyone figure it out. It’s a lot more complicated than that. In order to release such code, you first have to make sure that it doesn’t reveal anything sensitive, or reveal any kind of vulnerabilities. The process of taking production code that was built in a closed-source environment and preparing it to be open sourced… is not easy. Having dealt with multiple projects attempting to do that, I can tell you it almost always fails.

In addition, if they were open sourcing the algorithm, the people it would benefit the most are the spammers and scammers — the very accounts Musk claims are his very first priority to stomp out. So once again, his stated plans contradict his other stated plans.

But… Twitter has actually again been making moves in this general direction all along anyway. Jack Dorsey, for years, has talked about why there should be “algorithmic choice” on Twitter, where others can build up their own algorithms, and users can pick and choose whose algorithm to use. That’s not the same as open sourcing it, but actually seems like it would be a hell of a lot closer to what Musk actually wants — a more open platform where people aren’t limited to just Twitter’s content moderation choices. And, as Dorsey has pointed out, Twitter is also the only platform that allows you to turn off the algorithm if you don’t want it.

So, as we walk down the list of each of the “ideas” that Musk has publicly talked about, taking them in the most generous light, it’s difficult to argue that Twitter isn’t (1) already doing most of it, but in a more thoughtful and useful manner, (2) much further along in trying to meet those goals than any other social media platform, and (3) already explored, tested, and rejected some of his ideas as unworkable.

Indeed, about the only actual practical point that Musk seems to disagree with Twitter about is a few specific content moderation decisions that he believes should have gone in a different direction. And this is, as always, the fundamental disconnect in any conversation about content moderation. Every individual — especially those with no experience doing any actual moderation — insists that they have the perfect way to do content moderation: just get rid of the content they don’t want and keep the content they do want.

But the reality is that it’s ridiculously more complicated than that, especially at scale. And no company has internalized that more than Twitter (though, I expect many of the people who understand this the best will not be around very long).

Now, I’m sure that Musk fans (and Techdirt haters, some of whom overlap), will quickly rush out the same tired talking points that have already been debunked. Studies have shown, repeatedly, that, no, Twitter does not engage in politically biased moderation. Indeed, the company had to put in place special safe space rules to protect prominent Republican accounts that violated its rules. Lots of people will point to individual examples of specific moderation choices that they personally don’t like, but refuse to engage on why or how they happened. We’ve already explained the whole “Biden Laptop” thing so it doesn’t help your case to bring it up again — not unless you’re able to explain why you’re not screaming about Twitter’s apparently anti-BLM bias for shutting down an account for leaking internal police files.

The simple fact is that content moderation at scale is impossible to do well, but Twitter actually does it better than most. That doesn’t mean you’ll agree with every decision. You won’t. People within the company don’t either. I don’t. I regularly call the company out for bad content moderation decisions. But I actually recognize that it’s not because of bias or a desire to be censorial. It’s because it’s impossible for everyone to agree on all of these decisions, and one thing the company absolutely needs to do is to try to craft policies that can be understood by a large content moderation team, around the globe, who can make relatively quick decisions at an astounding speed. And that leads to (1) a lot of scenarios that don’t neatly fit inside or outside of a policy, and (2) a lot of edge case judgment calls.

Indeed, so much of what people on the outside wrongly assume is “inconsistent” enforcement of policy is actually the exact opposite. A company like Twitter can’t keep changing policy on every decision. It needs to craft policy and stick with it for a while. So, something like the Biden laptop story comes along and someone points out that it seems pretty similar to the Blueleaks case, so if the company is being consistent, shouldn’t it block the NY Post’s account as well? And you can make an argument as to how it’s different, but there’s also a strong argument as to how it’s the same. And, so then you begin to realize that not blocking the NY Post in that scenario would actually be the “inconsistent” approach, since the “hacked materials” policy existed, and had been enforced against others before.

Now, some people like to claim that the Biden laptop didn’t involve “hacked” materials, but that’s great to be able to say in retrospect. At the time, it was extremely unclear. And, again, as described above, Twitter has to make these decisions without the benefit of hindsight. Indeed, they need to be made without the benefit of very much time to investigate at all.

These are all massive challenges, and even if you disagree with some of the decisions, it’s simply wrong to assume that the decisions are driven by bias. I’ve worked with people doing content moderation work at tons of different internet companies. And they do everything they can to avoid allowing bias to enter into their work. That doesn’t mean it never does, because of course, everyone is human. But on the whole, it’s incredible how much effort people put into being truly agnostic about political views, even ridiculous or abhorrent ones. And Twitter, pretty much above all others, is incredibly good at taking the politics out of its trust and safety efforts.

So, again, once Musk owns Twitter, he is free to do whatever he wants. But it truly is incredible to look over his stated goals, and to look at what Twitter has actually done and what it’s trying to do, and to realize that… Twitter already is basically the company Musk insists it needs to be. Only it’s been doing so in a more thoughtful, more methodical, more careful manner than he seems interested in. And that means we seem much more likely to lose the company that actually has done the most towards enabling free speech in support of democratic values. And that would be unfortunate.


Elon Musk Demonstrates How Little He Understands About Content Moderation


Lots of talk yesterday as Elon Musk made a hostile takeover bid for all of Twitter. This was always a possibility, and one that we discussed before in looking at how little Musk seemed to understand about free speech. But soon after the bid was made public, Musk went on stage at TED to be interviewed by Chris Anderson and spoke more about his thoughts on Twitter and content moderation.

It’s worth watching, though mostly for how it shows how very, very little Musk understands about all of this. Indeed, what struck me about his views is how much they sound like what the techies who originally created social media said in the early days. And here’s the important bit: all of them eventually learned that their simplistic belief in how things should work does not work in reality and have spent the past few decades trying to iterate. And Musk ignores all of that while (somewhat hilariously) suggesting that all of those things can be figured out eventually, despite all of the hard work many, many overworked and underpaid people have been doing figuring exactly that out, only to be told by Musk he’s sure they’re doing it wrong.

Because these posts tend to attract very, very angry people who are very, very sure of themselves on this topic they have no experience with, I’d ask that before any of you scream in the comments, please read all of Prof. Kate Klonick’s seminal paper on the history of content moderation and free speech called The New Governors. It is difficult to take seriously anyone on this topic who is not aware of the history.

But, just for fun, let’s go through what Musk said. Anderson asks Musk why he wants to buy Twitter and Elon responds:

Well, I think it’s really important for there to be an inclusive arena for free speech. Twitter has become the de facto town square, so, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law. And one of the things I believe Twitter should do is open source the algorithm, and make any changes to people’s tweets — if they’re emphasized or de-emphasized — that should be made apparent so that anyone can see that action has been taken.  So there’s no sort of behind-the-scenes manipulation, either algorithmically or manually.

First, again, this is the same sort of thing that early Twitter and Facebook and other platform people said in the early days. And then they found out it doesn’t work for reasons that will be discussed shortly. Second, Twitter is not the town square, and it’s a ridiculous analogy. The internet itself is the town square. Twitter is just one private shop in that town square with its own rules.

Anderson asks Musk why he wants to take over Twitter when Musk had apparently told him just last week that taking over the company would lead to everyone blaming him for everything that went wrong, and Musk responds that things will still go wrong and you have to expect that. And he’s correct, but what’s notable here is how he’s asking for a level of understanding that he refuses to provide Twitter itself. Twitter has spent 15 years experimenting and iterating its policies to deal with a variety of incredibly complex and difficult challenges, nuances, and trade-offs, and as Musk demonstrates later in this interview, he’s not even begun to think through any of them.

My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization.

Again, this is the same sort of thing that the founders of these websites said… until they had to deal with the actual challenges of running such platforms at scale. And, I should note, anyone who’s spent any time at all working on these issues knows that “maximally trusted” requires some level of moderation, because otherwise platforms fill up with spam and scams (more on that later) and are not trusted at all. There’s a reason these efforts are put under the banner of “trust & safety.”

Finally, the “public platform” is the internet. And trust is earned, but opening up a platform broadly does not inspire trust. Being broadly inclusive and trustworthy also requires recognizing that bad actors need to be dealt with in some form or another. This is what people have spent over a decade working on. And Musk acts like it’s a brand new issue.

And so then we get to the inevitable point of any such discussion in which Musk admits that of course some moderation is important.

Chris Anderson: You’ve described yourself as a free speech absolutist. Does that mean that there’s literally nothing that people can’t say and it’s ok?

Elon Musk: Well, I think, obviously Twitter or any forum is bound by the laws of the country it operates in. So, obviously there are some limitations on free speech in the US. And of course, Twitter would have to abide by those rules.

CA: Right. So you can’t incite people to violence, like direct incitement to violence… like, you can’t do the equivalent of crying fire in a movie theater, for example.

EM: No, that would be a crime (laughs). It should be a crime.

And all the free speech experts scream out in unison at the false notion of “fire in a crowded theater.”

But just the fact that Musk (1) agrees with this sentiment and (2) thinks that it would obviously be a crime shows how little he actually understands about free speech or the laws governing free speech. As a reminder for those who don’t know, the “fire in a crowded theater” line was a non-binding rhetorical aside in a case that was used to lock up a protestor for handing out anti-war literature (not exactly free speech supportive), and the Supreme Court Justice who used the phrase basically denounced it in rulings soon after — and the case that it came from was effectively overturned a few decades later, in the new case that set up the actual standard that Anderson suggests about incitement to imminent lawless action (which, in most cases, crying fire in a theater absolutely would not reach).

Anderson then tries (but basically fails) to get into some of the nuance of content moderation. It would have been nice if he’d actually spoken to, well, anyone with any experience in the space, because his examples aren’t just laughable, they’re kind of pathetic.

CA: But here’s the challenge, because it’s such a nuanced between different things. So, there’s incitement to violence, that’s a no if it’s illegal. There’s hate speech, which some forms of hate speech are fine. I… hate… spinach.

First of all, “I hate spinach” is not hate speech. I mean, of all the examples you could pull out… that’s not an example of hate speech (and we’ll leave aside Musk’s joke response, suggesting that if you cooked spinach right it’s good). But, much more importantly, here’s where Anderson and Elon could have confronted the actual issue which is that, in the US, hate speech is entirely protected under the 1st Amendment. And, we’ve explained why this is actually important and a good thing, because in places where hate speech is against the law, those laws are frequently abused to silence government critics.

But keeping hate speech legal is very different from saying that any private website must keep that speech on the platform. Indeed, keeping hate speech on a private platform takes away from the supposed “trust” and “broadly inclusive” nature Musk claimed to want. That would be an interesting point to discuss with Musk — and instead we’re left discussing what’s the best way to cook spinach.

Anderson again sorta weakly tries to get more to the point, but still doesn’t seem to know enough about the actual challenges of content moderation to have a serious discussion of the issue:

CA: So let’s say… here’s one tweet: ‘I hate politician X.’ Next tweet is ‘I wish politician X wasn’t alive.’ As some of us have said about Putin, right now for example. So that’s legitimate speech. Another tweet is ‘I wish Politician X wasn’t alive’ with a picture of their head with a gunsight over it. Or that plus their address. I mean at some point, someone has to make a decision as to which of those is not okay. Can an algorithm do that, or surely you need human judgment at some point.

First of all, broadly speaking all of the above are protected under the 1st Amendment. Somewhat incredibly, his final hypothetical is one I can talk about directly, because I was an expert witness in a case where a guy was facing criminal charges for literally Photoshopping gunsights over government officials, and the jury found him not guilty. But, also broadly speaking, there are plenty of legitimate reasons why a private platform would not want to host that content. In part, that gets back to the “maximally trusted” and “broadly inclusive” points.

But, on top of that, none of those examples are hate speech. Hate speech is not, as Chris Anderson bizarrely seems to believe, saying “I hate X.” Hate speech is generally seen as forms of expression designed to harass, humiliate, or incite hatred against a group or class of persons based on various characteristics about them (generally including things like race, religion, sexual identity, ethnicity, disability, etc.). The examples he raises are not, in fact, hate speech.

Either way, here’s where Elon shows how little he understands any of this, and how unfamiliar he is with all that’s happened in this space in the past two decades.

In my view, Twitter should match the laws of the country. And, really, there’s an obligation to do that. But going beyond that, and having it be unclear who’s making what changes to who… to where… having tweets mysteriously be promoted and demoted without insight into what’s going on, having a black box algorithm promote some things and not other things, I think those things can be quite dangerous.

Again, in the US, the laws say that such speech is protected, but that’s not a reasonable answer. We’ve gone through this before. Parler claimed it would only moderate speech that violated the law and then flipped out when it realized that people were getting on the site to mock Parler’s supporters or to post porn (which is also protected by the 1st Amendment). Simply saying that moderation should follow the law generally shows that one has never actually tried to moderate anything. Because it’s much more complicated than that, as Musk will implicitly admit later on in this interview, without the self-awareness to see how he’s contradicting himself.

There’s then a slightly more interesting discussion of open sourcing the algorithm, which is its own can of worms that I’m not sure Musk understands. I’m all for more transparency, and the ability for competing algorithms to be available for moderation, but open sourcing it is different and not as straightforward as Musk seems to imply. First of all, it’s often not the algorithm that is the issue. Second, algorithms that are built up in a proprietary stack are not so easy to just randomly “open source” without revealing all sorts of other stuff. Third, the biggest beneficiaries of open sourcing the ranking algorithm will be spammers (which is doubly amusing because in just a few moments Musk is going to whine about spammers). Open sourcing the algorithm will be most interesting to those looking to abuse and game the system to promote their own stuff.

We know this. We’ve seen it. There’s a reason why Google’s search algorithm has become more and more opaque over the years. Not because it’s trying to suppress people, but because the people who were most interested in understanding how it all worked were search engine spammers. Open sourcing the Twitter algorithm would do the same thing.

Chris then gets back to the moderation process (again in a slightly confused way about how Twitter trust & safety actually works), pointing out that “the algorithm” is probably less of an issue than all the human moderators, leading Musk to give a very long pause before stumbling through a bit of a word-salad response:

Well, I…I… I think we would want to err on the side… if in doubt, let… let… let the speech… let it exist. It would have… if it’s.. uh… a gray area, I would say, l would say let the tweet exist. But… obviously… in a case where perhaps there’s a lot of controversy where perhaps you’d not want to necessarily promote that tweet, you know… so…so… so… I’m not saying I have all the answers here, but I do think that we want to be very reluctant to delete things and be very cautious with permanent bans. I think time outs are better than permanent bans. 

But just in general, like I said, it won’t be perfect but I think we want to really have the perception and reality that speech is as free as is reasonably possible and a good sign as to whether there is free speech, is ‘is someone you don’t like allowed to say something you don’t like.’ And if that is the case, then you have free speech. And it’s damn annoying when someone you don’t like says something you don’t like. That is a sign of a healthy, functioning free speech situation.

Again, so much to unpack here. First off, that approach of “when in doubt, let it exist” has almost always been the default position of the major social media companies from the beginning. Again, it’s important to go back to things like Klonick’s paper which describes all this. It’s just that over time anyone who’s done this quickly learns that fuzzy standards like “when in doubt” don’t work at all, especially at scale. You need specific rules that can be easily understood and rolled out to thousands of moderators around the world. Rules that can take into account local laws, local contexts, local customs. It’s not nearly as simple as Musk makes it out to be.

Indeed, to get to the spot that we’re in now, basically all of these companies started with that same premise, realized it wasn’t workable, and then iterated. And Musk is basically saying “I have a brilliant idea: let’s go back to step 1 and pretend none of the things experts in this space have learned over the past decade actually happened.”

And, again, Twitter and Facebook — just as Musk claims he wants — tend to lean towards time outs over permanent bans, but both recognize that malicious actors eventually will just keep trying, so some people you will have to ban. But Musk pretends like this is some deep wisdom when every website with any moderation at all knew this ages ago. Including Twitter.

Second, his definition of free speech is utter nonsense (and ridiculously got big applause from the audience). That’s not the definition of free speech and if it is, then Twitter already has that. Tons of people I dislike are allowed to say things I dislike. You see that all over Twitter. But that’s not a reasonable or enforceable standard at all without context. The problem is not “someone I dislike saying something I dislike”; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more. Musk not understanding any of that is just a representation of how little he understands this topic.

Anderson then asks Musk about what changes he would make to Twitter, leading Musk to basically contradict everything he just said and go straight to banning speech on Twitter:

Frankly, the top priority I would have is eliminating the spam and scam bots and the bot armies that are on Twitter. You know, I think, these influence… they make the product much worse. 

Um, nearly all of those are legal (the scam ones are a bit more hazy there, but the spam ones are legal speech). And just the fact that he acknowledges that they make the product much worse underlines how confused he is about everything else. Dealing with the things that “make the product much worse” is the underlying point of any trust & safety content moderation program — and tons and tons of work, and research, and testing have gone into how Twitter (and every other platform) tries to manage those things, and they all pretty much end up at the same place.

To deal with the spam and the scams and the things that “make the product much worse” you have to have rules, and you have to have enforcement that deals with the people who break the rules, meaning that you have to have people knowledgeable about content moderation and who are able to iterate and adjust, especially in the face of malicious actors trying to game the system.

But it’s quite incredible for him to say “pretty much leave it up if it’s legal” one moment, and the next moment say his top priority is to get rid of spam. Spam is legal.

And, again, as anyone who has lived through (or read up on) the history of content moderation knows, platforms all went through this exact process. The process that Musk thinks no one has actually done. They all started with a fundamental default towards allowing more speech and moderating less. And they all realized over time that it’s a lot more nuanced than that.

They all realized that there are massive trade-offs to every decision, but that some decisions still need to be made in order to stop “making the product worse” and to figure out ways to build “maximal trust” and to be “broadly inclusive.” In other words, for all of Musk’s complaining, Twitter has already done all the work he seems to pretend it hasn’t done. And his “solution” is to go back to square one while ignoring all the people who learned about the pitfalls, challenges, nuances, and trade-offs of the various approaches to dealing with these things… and to pretend that no one has done any work in this area.

Every time I post about this, Musk’s fans get angry and insist I couldn’t possibly understand this better than Musk. And, again, I actually really admire Musk’s ability to present visions and get the companies he’s run to achieve those visions. But dealing with human speech isn’t about building a car, a robot, a tunnel, or a rocket ship. It’s about dealing with human beings, human nature, and society.

None of this is to say that, if Musk does succeed in the bid, he doesn’t have the right to make these massive steps back to square one. Of course he has every right to make those mistakes. But it would be a disappointing move for Twitter, a company that has been more thoughtful, more careful, and more advanced than many other companies in this space. And it would likely wipe out the important institutional knowledge around all of this that has been so helpful.

I know that the narrative — which Musk has apparently bought into — is that Twitter’s content moderation efforts are targeted at stifling conservatives. There is, yet again, no actual evidence to support this. If anything, Twitter and Facebook have bent over backwards to be extra accommodating to those pushing the boundaries in order to use Twitter mainly as a platform to rile up those they dislike. But, from knowing how much effort Twitter has actually put into understanding interventions and how to build a trustworthy platform, I fear that what Musk would do with it would be a massive step backwards and a general loss for the world.

Incredibly, there’s a pretty good analogy to all of this earlier in that video. At the beginning, Anderson plays a snippet of a taped interview he did with Musk a week ago (when they weren’t sure if he’d be able to attend in person). And in that interview, Anderson points out that Musk predicted to Anderson five years ago that Tesla would have full self-driving working that year, and it obviously has not come to pass. Musk jokes about how he’s not always right, and explains that he’s only now realized that just how hard a problem driverless artificial intelligence is, and he talks about how every time it seems to be moving forward it hits an unexpected ceiling.

The simple fact is that dealing with human nature and human communication is much, much, much more complex than teaching a car how to drive by itself. And there is no perfect solution. There is no “congrats, we got there” moment in content moderation. Because humans are complex and ever-changing. And content moderation on a platform like Twitter is about recognizing that complexity and figuring out ways to deal with it. But Musk seems to be treating it as if it’s the same sort of challenge as self-driving — where if you just throw enough ideas at it you’ll magically fix it. But, even worse than that, he doesn’t realize that the people who have actually worked in this field for years have been making the kind of progress he talked about with self-driving cars — getting the curve to move in the right direction, before hitting some sort of ceiling. And Musk wants to take them all the way back to the ground floor for no reason other than that he doesn’t seem to recognize any of the work that’s already been done.


Canadian Vaccine Protesters Are Confused About the Law, Too


You will be surprised to learn that some of the people involved in the “Freedom Convoy,” a protest against Canada’s attempt to limit the freedom of the virus that causes COVID-19, are confused about more than just science.

The first example of this comes from the CBC, which reported this week on the bail hearing for Tamara Lich, one of the protest’s organizers. Lich and some other participants have been arrested and charged with what we would probably call “incitement to riot” but Canada more politely calls “counselling to commit mischief.” (The CBC said that “[b]efore her arrest, Lich told journalists she wasn’t concerned about being arrested,” which wasn’t the first and won’t be the last thing she’s wrong about.)

Prosecutors argued bail for Lich should be denied because she’s already proven that she “has no respect for the law” and that she and her husband and/or their associates have resources that would allow her to keep stirring up trouble if released. The Liches said they have virtually no assets and have not done anything wrong anyway.

I guess I should say something at this point to acknowledge that these people are called “the Liches,” something the nerds among you are surely all agog about. As you know, a “lich” is a type of undead creature that may be encountered while playing Dungeons & Dragons, although in my experience such encounters do not involve fighting but rather fleeing in panic and later searching local towns for vendors of replacement undergarments. (I of course gained this experience while researching my dissertation on the habits of the nerd population, not as a member of it.) Lich was an Old English word for “corpse,” but through fiction and especially D&D-style gaming has come to mean the animated corpse of a powerful evil wizard or sorcerer.

Here, though, it is just the last name of some Canadian goofball.

Two of them, actually, because Tamara Lich’s husband Dwayne—definitely not a name I would have associated with a lich before today—was also present for the hearing. Dwayne was there because he was proposing to act as “surety,” meaning he would have to report if his wife jumped bail. But the court had some questions about whether he would be an appropriate surety, given that he had been in Ottawa during the protest his wife had organized. The report doesn’t say whether he too had been actively protesting, but the court apparently suspected maybe he was doing more than helping his protest-organizing wife with her luggage.

Mr. Lich claimed he did not agree with the strategy of trying to blockade downtown Ottawa until the government stops persecuting the virus, but also said he didn’t see anything wrong with it. According to the CBC, he “equat[ed] the blockades to a large traffic jam or parked cars in a snow storm,” neither of which is a situation that people bring about intentionally to force other people to do something. “I don’t see no guns,” Lich told the court, placing himself mentally back at the scene of the protest. “I don’t see anything criminal as far as I can see. I just see trucks parked.” And doesn’t a person have a right to park his truck wherever he wants, and for as long as he wants, regardless of whether that might inconvenience others?

Well, no. And this is true even under the First Amendment to the U.S. Constitution—which Lich invoked at the hearing, although he is Canadian:

[Lich] questioned whether the Emergencies Act … was implemented legally, at times confusing the numbered amendments found in the U.S. Constitution with Canada’s Charter of Rights and Freedoms.

“Honestly? I thought it was a peaceful protest and based on my first amendment, I thought that was part of our rights,” he told the court.

“What do you mean, first amendment? What’s that?” Judge Julie Bourgeois asked him.

“I don’t know. I don’t know politics. I don’t know,” he said. “I wasn’t supportive of the blockade or the whatever, but I didn’t realize that it was criminal to do what they were doing. I thought it was part of our freedoms to be able to do stuff like that.”

Turns out Canada does have something similar to the First Amendment, and a bunch of other laws too. Those are the ones that apply in Canada. Of course Lich had the right general idea here (though the wrong answer), but citing the wrong country’s laws tends to make people think that maybe you haven’t really done your homework.

Whether this error contributed to the result or not, the court denied bail.

The second example was reported by Althia Raj on Twitter and also by The Globe and Mail. Both were reporting comments made on Facebook Live by Pat King, another protest organizer who was arrested this week. In fact, he was arrested during one of his video streams. “They’ve cornered me,” he told viewers, as if he had been engaged in a dramatic escape attempt instead of sitting in a truck goofing around on Facebook. But cornered him they had.

Thankfully, this was only after he had taken the opportunity to provide some legal advice to his fellow protesters. And what glorious legal advice it was. King told viewers that it was time to regroup (not retreat, he made clear), and said that if they were confronted by police while regrouping they should wave a white shirt or white underpants at the officers. “They cannot touch you if you’re holding a white flag,” he declared. “It’s international law.”

This, of course, is not true. Under international law, waving your underpants at a police officer only confers immunity if the country in question has signed the Treaty of Guadalupe-Hidalgo (which Canada has not) or if the underpants have gold fringe down the sides. Or maybe that’s maritime law, I don’t know. And I might be wrong. I certainly don’t want to discourage any anti-vaccine protesters from trying this. Also, anyone who sees them trying it should definitely get it on video and post that online immediately. Whether it works or not, it will definitely be educational.
