
Elon Musk’s Twitter Business Model Idea: Ignore Free Speech Rights And Try To Charge The Media To Quote Tweets


As everyone’s trying to read the tea leaves of what an Elon Musk-owned Twitter will actually look like, it’s been reported that in his presentation to Wall St. banks to get the financing he needs to complete the deal, he suggested the deal would be profitable because of some of his new business model ideas. Now, obviously, these are entirely speculative, and my guess is that he hasn’t thought through any of this that deeply (just like he hasn’t thought through content moderation’s challenges, even though he’s sure he can fix it). But, at least some of the banks are buying into the deal based on Musk promising a stronger Twitter business, so we need to pay attention to his ideas. Like this one, that, um, would be effectively impossible under the 1st Amendment.

Musk told the banks he also plans to develop features to grow business revenue, including new ways to make money out of tweets that contain important information or go viral, the sources said.

Ideas he brought up included charging a fee when a third-party website wants to quote or embed a tweet from verified individuals or organizations.

So, like, I don’t want to throw any cold water on the business model ideas of the guy people keep telling me is the most brilliant innovative business mind of our generation, but… it… um… seems at least a little ironic that he’s spent the past month screaming about “free speech” and enabling whatever the law allows… and now he wants to charge companies for quoting a tweet.

Yeah, so, thanks to the 1st Amendment (that he claims to support so much) he’s unlikely to be able to do that successfully. Quoting a tweet (we’ll deal with embedding shortly) in almost every damn case is going to be fair use under copyright law. And, a key reason we have fair use in copyright law… is that the 1st Amendment requires it, or else copyright law would stifle the very free speech that Musk claims to love so much.

In Eldred v. Ashcroft, the important (if wrongly decided) case on the constitutionality of copyright term extension, Justice Ruth Bader Ginsburg repeatedly talked about how fair use was a “safeguard” in copyright law, one that allows copyright to coexist with the 1st Amendment even though it can be used to suppress speech. The crux of the argument is that, because fair use allows people to do things like quote a 280-character outburst, there’s no serious concern about copyright silencing speech. Fair use, in other words, is the necessary safety valve that makes copyright compatible with the 1st Amendment.

Given that Musk has claimed (incorrectly, but really, whatever) that free speech laws represent “the will of the people,” and his apparent big business model innovation is to demand that media organizations pay to quote tweets, which violates our fair use rights, which are necessary under the 1st Amendment… well, it appears that his biggest business model idea so far is to try to ignore the 1st Amendment rights of people wishing to quote tweets.

Good luck with that.

Also, under Twitter’s current terms of service, users retain the copyright in their own tweets. Twitter holds a license to them, but that license wouldn’t allow Twitter as an entity to file copyright claims against media organizations quoting tweets in the first place. The only way it could do that is if it changed the terms entirely and required all its users to actually assign their copyrights to Twitter and, well, good luck with that as well.

Now, of course, the report claimed that the fee could be charged if someone “wants to quote or embed a tweet from verified individuals,” and the company certainly could set up some convoluted system to try to make people pay to embed, but that would (a) be fucking annoying for most everyone else and (b) would just lead to everyone screenshotting, instead of embedding, which is a lot less useful in the long run for Twitter, since it would drive fewer people to interact with Twitter. And, again, fair use and (I feel I must remind you) the 1st Amendment would protect all that screenshotting and quoting. Free speech, ftw!

And that’s not even getting into the idea that Twitter might now be effectively selling its popular tweets to websites. I mean, if this plan were to go forward (and somehow got over all the other hurdles), I’d imagine the company would literally need to cut its users in on the deal and set up some sort of “every time the NY Times embeds your tweet, they pay us $5 and we pass $3 of it on to you” arrangement, or some sort of nonsense like that. And, sure, maybe it’ll excite some Twitter users that they could get paid for their tweets (again, assuming any third-party website out there ignores its fair use/1st Amendment rights to simply quote or screenshot, and chooses to pay instead).

But, this would also likely create a whole world of complications. First, Twitter would need to set up an entirely new kind of operation to manage all of this. Musk also promised in these documents that he’s planning on reducing headcount at Twitter, but he’d need to staff up at least on managing the payments and payouts to tweeters. But, again, this is Elon Musk, so I’m guessing the system will work on the blockchain in Dogecoin and payments will flow automagically. And sure, maybe you could see how that could actually kinda work, if you’re into that sort of thing?

But, now, we get into the next issue: when you add money (even cute dog-meme based money) to a platform where people normally did shit for free, the incentives change. Oh, boy do they ever change. Suddenly you’re going to get scammers galore, looking to abuse the system, and get filthy stinkin’ Doge rich. I guess maybe this needs to be expressed in meme form?

And Elon should understand this better than anyone, given how frequently crypto scammers follow him around and try to scam his fans. Introducing actual money, even of the meme variety, into the mix is going to lead to a lot of scam behavior. And it would probably be helpful if the company had a… what’s it called… oh yeah, trust & safety staff to help think these issues through.

I’m never going to knock anyone for experimenting with creative business model ideas. And I’m all for Twitter trying out non-advertising based business models, as Elon has suggested is part of his focus. That actually seems like a good idea. But, it’s kinda weird when this whole deal is premised on the idea of bringing more “free speech” to the site… and his first business model suggestion when trying to convince banks to back him is to ignore the free speech rights of others and try to force them to pay up.


Reality Check: Twitter Actually Was Already Doing Most Of The Things Musk Claims He Wants The Company To Do (But Better)


So there has been lots of talk about Elon Musk and his takeover of Twitter. I’ve written multiple things about how little he understands about free speech and how little he understands content moderation. I’ve also written (with giant caveats) about ways in which his takeover of Twitter might improve some things. Throughout this discussion, in the comments here, and on Twitter, a lot of people have accused me of interpreting Musk’s statements in bad faith. In particular, people get annoyed when I point out that the two biggest points he’s made — that (1) Twitter should allow all “legal” speech, and (2) getting rid of spambots is his number one priority — contradict each other, because spambots are protected speech. People like to argue that’s not true, but they’re wrong, and anyone arguing that expression by bots is not protected doesn’t understand the 1st Amendment at all.

Either way, I am always open to rethinking my position, and if people are claiming that I’m interpreting Musk in bad faith, I can try to revisit his statements in a more forgiving manner. Let’s, as the saying goes, take him figuratively, rather than literally.

But… here’s the thing. If you interpret Musk’s statements in the best possible light, it’s difficult to see how Twitter is not already doing pretty much everything he wants it to do. Now, I can already hear the angry keyboard mashing of people who are very, very sure that’s not true, and are very, very sure that Twitter is an evil company “censoring political views” and “manipulating elections” and whatever else the conspiracy theory of the day is. But it’s funny that the same people who insist I’m not being fair to Musk refuse to offer the same courtesy, or any willingness to understand why and how Twitter actually operates.

So, let’s look at Musk’s actual suggestions, phrased in the best possible light, and compare them with what Twitter has actually done and is doing. Again, you’ll realize that Twitter is (by far!) the social media service that has gone the farthest toward making what he wants real. And in the few areas where he seems to think the company has fallen short, the reality is that it has had to balance difficult competing interests, and concluded that its approach is the most likely to achieve the larger goal of providing a platform for global conversation.

Musk has repeatedly said that he sees free speech on Twitter as an important part of democracy. So do many people at Twitter. They were the ones who framed themselves as the “free speech wing of the free speech party.” But as any actual expert in free speech will tell you, free speech does not mean that private websites should allow all free speech. And I know people — including Musk — will argue against this point, but it’s just fundamentally wrong. We’ve gone over this over and over again. The internet itself (which is not owned by any entity) is the modern public square, and anyone is free to set up shop on it. But that does not mean that they get to commandeer private property for their own screaming fits.

If it did, you would not have free speech, because you would (1) just get inundated with spam and garbage, and (2) only the loudest, most obnoxious voices would ever be heard. The team at Twitter actually understands the tradeoffs here, and while they don’t always get it “right” (in part because there is no “right”), Twitter’s team is so far above and beyond any other social media website, it’s just bizarre that the public narrative insists the opposite.

Twitter has long viewed its mission as enabling more free speech and more conversation in the world, and has taken steps to actually make that possible. Opening up the platform to people who violate the rules, abuse and harass others, and generally make a mess of things does not aid free speech or “democracy.” You can disagree with where Twitter draws the lines (and clearly, Musk does), but Musk has shown little to no understanding of why and how the line drawing is done in the first place. If he moves in the direction he claims, he will quickly discover that Twitter’s lines are drawn far more permissively than nearly any other website’s (including, for what it’s worth, Trump’s Truth Social), and that there are clear reasons why it drew the lines it did: those lines often exist to enable more communication and conversation on the platform.

Twitter has long allowed all sorts of dissenting viewpoints and arguments on its platform. Indeed, there are many activists who insist that the problem is that Twitter doesn’t do enough moderation. Instead, Twitter has put in place some pretty clear rules, and it tries to only take down accounts that really break those rules. It doesn’t always get that right. It misses some accounts, and takes down others it shouldn’t. But on the whole, it’s way more permissive than most other sites, which are much quicker to ban users.

Second, even as it contradicts his first point, Musk has claimed that he wants to get rid of spambots and scambots. This is a good goal. And, again, it’s also one that Twitter has been working on for ages. And it has really good, really smart people working on the issue (some of the best out there). And, in part because the company is so open and so permissive (again much more so than other platforms), this is an extraordinarily difficult problem to solve, especially at the scale of Twitter. People assume, falsely, that Twitter doesn’t care about spammers, but part of the issue is that if you want to have an “open” platform for “free speech,” that means that people will take advantage of that. Musk is going to find that Twitter already has some of the best people working on this issue — that is if they don’t rush out the door (or get pushed out by him).

Third, Musk has talked about redoing the verification system. He’s said that Twitter should “authenticate all real humans.” This appears to be at least part of his method for dealing with the bots and spam he’d like to eradicate. For years we’ve discussed the dangers of a “real names” policy, which requires people to post under their own names, including that studies have shown trolling is often worse under real names. It’s especially dangerous for marginalized people, those who have stalkers, and those otherwise at risk.

But, some people respond, it’s unfair to assume he means a real names policy. Perhaps he just means that Twitter will keep a secret database of your verified details, and you can still be pseudonymous on the site. Except, as experts will tell you, that is still massively problematic, especially for marginalized groups, at-risk individuals, and those in countries with authoritarian regimes. Because now that database becomes a massive target. You get extremely questionable subpoenas seeking to unmask users all the time. Or you get the government demanding you cough up info on your users. Or you get hackers trying to get into the database. Or you get authoritarian countries planting employees inside these companies to seek out info on critics of the regime.

All of these things have happened with Twitter. And Twitter was in a position to push back. But it sure helped that in many of those cases Twitter didn’t actually have their “verification,” but much less information, like an IP address and an email.

Or, to take it another level, perhaps Musk really just means that Twitter should offer verification to those who want it. That’s not at all what he said, but it’s how some of his vocal supporters have interpreted this. Well, once again, Twitter has tried that. And it didn’t work. Back in 2016, Twitter opened up verification for everyone, and the company quickly realized it had a huge mess on its hands. First, people gamed the system. Second, even though the program was only meant to verify that the account belonged to the real person it was labeled as, people took verification to be an “endorsement” by Twitter, which created a bunch of other headaches. Given that, Twitter paused the program.

It then spent years trying to figure out a way to open up verification to anyone without running into more problems. Indeed, Jack Dorsey made it clear that the plan has always been to “open verification to everyone.” But it turns out that, like dealing with spam and like dealing with content moderation, this is a much harder problem to solve at scale than most people think. It took Twitter almost four years to finally relaunch its verification program in a much more limited fashion, which they hoped would allow the company to test out the new process in a way that would avoid abuse.

But even in that limited fashion the program ran into all sorts of problems. Twitter had to shut the program down a week after launching it, to sort out some of the issues. Then it had to do so again three months later, after finding more problems, specifically that fake accounts were able to game the verification process.

But, again, Twitter has been trying to do exactly what Musk’s fans insist he wants to do. And they’ve been doing so thoughtfully, and recognizing the challenges of actually doing it right, and realizing that it involves a lot of careful thought and tradeoffs.

Next, Musk said that Twitter DMs should have end-to-end encryption, and on this I totally agree. It should. And lots of others have been asking for this as well, including people within Twitter who have been working on it. But there are a lot of issues in making that actually work. It’s not something you can just flip a switch on. There are some technical challenges, but also some social issues as well. All you have to do is look at how long it’s taken Facebook to do the same thing, in part because as soon as the company planned to do this, it was accused of not caring about child safety. Maybe a privately owned Twitter controlled by Musk just ignores all that, but there are real challenges here, and it’s not quite as easy as he seems to think. But, once again, it’s not an issue that’s never occurred to Twitter either.

Another recent Musk “idea” was that content moderation should be “politically neutral,” which he (incorrectly) claims “means upsetting the far right and far left equally.” For a guy who’s apparently so brilliant, you’d think he’d understand (1) that there is no fundamental law saying political viewpoints are distributed evenly across a bell curve, and (2) the difference between neutrality of inputs and neutrality of outputs. That is, every single study has shown that, if anything, Twitter’s content moderation practices greatly favor the right. It’s just that (right now) the right is much, much, much more prone to sharing misinformation. But if you have an unequal distribution of troublemakers, then a “neutral” policy will lead to unequal outcomes. Musk seems to want equal outcomes, which literally would require a non-neutral policy that gives much, much, much more leeway to troublemakers on the right. You can’t have equal outcomes with a neutral policy if the distribution is unequal.
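The inputs-vs-outputs point is easy to demonstrate with a toy simulation. This is purely illustrative: the group sizes and violation rates below are made-up numbers, not real moderation data, and the "policy" is just a coin flip per account against each group's underlying rate of rule-breaking.

```python
import random

random.seed(0)

# Hypothetical example: two groups of 1,000 accounts each, moderated by
# the SAME neutral rule ("remove any account that posts violating content"),
# but with different underlying rates of rule-breaking. Both rates are
# invented purely for illustration.
N = 1000
rate_a = 0.05   # group A: 5% of accounts post violating content
rate_b = 0.20   # group B: 20% of accounts do

removed_a = sum(random.random() < rate_a for _ in range(N))
removed_b = sum(random.random() < rate_b for _ in range(N))

# Identical (neutral) policy, unequal outcomes: group B sees roughly
# four times as many removals, simply because it breaks the rules more.
print(f"Group A removals: {removed_a}")
print(f"Group B removals: {removed_b}")
```

Forcing the two removal counts to come out equal would require applying a stricter rule to group A than to group B, i.e. a non-neutral policy, which is exactly the point.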

Finally, the only other idea that Musk has publicly talked about is “open sourcing” the algorithm. At a first pass, this doesn’t make much sense, because it’s not like you can just put the code on Github and let everyone figure it out. It’s a lot more complicated than that. In order to release such code, you first have to make sure that it doesn’t reveal anything sensitive, or reveal any kind of vulnerabilities. The process for securing production code that was built in a closed source environment to make it open source… is not easy. Having dealt with multiple projects attempting to do that, it almost always fails.

In addition, if they were open sourcing the algorithm, the people it would benefit the most are the spammers and scammers — the very accounts Musk claims are his very first priority to stomp out. So once again, his stated plans contradict his other stated plans.

But… Twitter has actually again been making moves in this general direction all along anyway. Jack Dorsey, for years, has talked about why there should be “algorithmic choice” on Twitter, where others can build up their own algorithms, and users can pick and choose whose algorithm to use. That’s not the same as open sourcing it, but actually seems like it would be a hell of a lot closer to what Musk actually wants — a more open platform where people aren’t limited to just Twitter’s content moderation choices. And, as Dorsey has pointed out, Twitter is also the only platform that allows you to turn off the algorithm if you don’t want it.

So, as we walk down the list of each of the “ideas” that Musk has publicly talked about, taking them in the most generous light, it’s difficult to argue that Twitter isn’t (1) already doing most of it, but in a more thoughtful and useful manner, (2) much further along in trying to meet those goals than any other social media platform, and (3) already explored, tested, and rejected some of his ideas as unworkable.

Indeed, about the only actual practical point that Musk seems to disagree with Twitter about is a few specific content moderation decisions that he believes should have gone in a different direction. And this is, as always, the fundamental disconnect in any conversation about content moderation. Every individual — especially those with no experience doing any actual moderation — insists that they have the perfect way to do content moderation: just get rid of the content they don’t want and keep the content they do want.

But the reality is that it’s ridiculously more complicated than that, especially at scale. And no company has internalized that more than Twitter (though, I expect many of the people who understand this the best will not be around very long).

Now, I’m sure that Musk fans (and Techdirt haters, some of whom overlap), will quickly rush out the same tired talking points that have already been debunked. Studies have shown, repeatedly, that, no, Twitter does not engage in politically biased moderation. Indeed, the company had to put in place special safe space rules to protect prominent Republican accounts that violated its rules. Lots of people will point to individual examples of specific moderation choices that they personally don’t like, but refuse to engage on why or how they happened. We’ve already explained the whole “Biden Laptop” thing so it doesn’t help your case to bring it up again — not unless you’re able to explain why you’re not screaming about Twitter’s apparently anti-BLM bias for shutting down an account for leaking internal police files.

The simple fact is that content moderation at scale is impossible to do well, but Twitter actually does it better than most. That doesn’t mean you’ll agree with every decision. You won’t. People within the company don’t either. I don’t. I regularly call the company out for bad content moderation decisions. But I actually recognize that it’s not because of bias or a desire to be censorial. It’s because it’s impossible for everyone to agree on all of these decisions, and one thing the company absolutely needs to do is to try to craft policies that can be understood by a large content moderation team, around the globe, who can make relatively quick decisions at an astounding speed. And that leads to (1) a lot of scenarios that don’t neatly fit inside or outside of a policy, and (2) a lot of edge case judgment calls.

Indeed, so much of what people on the outside wrongly assume is “inconsistent” enforcement of policy is actually the exact opposite. A company like Twitter can’t keep changing policy on every decision. It needs to craft policy and stick with it for a while. So, something like the Biden laptop story comes along and someone points out that it seems pretty similar to the Blueleaks case, so if the company is being consistent, shouldn’t it block the NY Post’s account as well? And you can make an argument as to how it’s different, but there’s also a strong argument as to how it’s the same. And, so then you begin to realize that not blocking the NY Post in that scenario would actually be the “inconsistent” approach, since the “hacked materials” policy existed, and had been enforced against others before.

Now, some people like to claim that the Biden laptop didn’t involve “hacked” materials, but that’s great to be able to say in retrospect. At the time, it was extremely unclear. And, again, as described above, Twitter has to make these decisions without the benefit of hindsight. Indeed, they need to be made without the benefit of very much time to investigate at all.

These are all massive challenges, and even if you disagree with some of the decisions, it’s simply wrong to assume that the decisions are driven by bias. I’ve worked with people doing content moderation work at tons of different internet companies. And they do everything they can to avoid allowing bias to enter into their work. That doesn’t mean it never does, because of course, everyone is human. But on the whole, it’s incredible how much effort people put into being truly agnostic about political views, even ridiculous or abhorrent ones. And Twitter, pretty much above all others, is incredibly good at taking the politics out of its trust and safety efforts.

So, again, once Musk owns Twitter, he is free to do whatever he wants. But it truly is incredible to look over his stated goals, and to look at what Twitter has actually done and what it’s trying to do, and to realize that… Twitter already is basically the company Musk insists it needs to be. Only it’s been doing so in a more thoughtful, more methodical, more careful manner than he seems interested in. And that means we seem much more likely to lose the company that actually has done the most towards enabling free speech in support of democratic values. And that would be unfortunate.


Elon Musk Demonstrates How Little He Understands About Content Moderation


Lots of talk yesterday as Elon Musk made a hostile takeover bid for all of Twitter. This was always a possibility, and one that we discussed before in looking at how little Musk seemed to understand about free speech. But soon after the bid was made public, Musk went on stage at TED to be interviewed by Chris Anderson and spoke more about his thoughts on Twitter and content moderation.

It’s worth watching, though mostly for how it shows how very, very little Musk understands about all of this. Indeed, what struck me about his views is how much they sound like what the techies who originally created social media said in the early days. And here’s the important bit: all of them eventually learned that their simplistic beliefs about how things should work don’t hold up in reality, and they have spent the past couple of decades iterating. Musk ignores all of that while (somewhat hilariously) suggesting that these problems can be figured out eventually, despite all of the hard work many, many overworked and underpaid people have put into figuring exactly that out, only to be told by Musk that he’s sure they’re doing it wrong.

Because these posts tend to attract very, very angry people who are very, very sure of themselves on this topic they have no experience with, I’d ask that before any of you scream in the comments, please read all of Prof. Kate Klonick’s seminal paper on the history of content moderation and free speech called The New Governors. It is difficult to take seriously anyone on this topic who is not aware of the history.

But, just for fun, let’s go through what Musk said. Anderson asks Musk why he wants to buy Twitter and Elon responds:

Well, I think it’s really important for there to be an inclusive arena for free speech. Twitter has become the de facto town square, so, it’s really important that people have both the reality and the perception that they’re able to speak freely within the bounds of the law. And one of the things I believe Twitter should do is open source the algorithm, and make any changes to people’s tweets — if they’re emphasized or de-emphasized — that should be made apparent so that anyone can see that action has been taken.  So there’s no sort of behind-the-scenes manipulation, either algorithmically or manually.

First, again, this is the same sort of thing that early Twitter and Facebook and other platform people said in the early days. And then they found out it doesn’t work for reasons that will be discussed shortly. Second, Twitter is not the town square, and it’s a ridiculous analogy. The internet itself is the town square. Twitter is just one private shop in that town square with its own rules.

Anderson asks Musk why he wants to take over Twitter when Musk had apparently told him just last week that taking over the company would lead to everyone blaming him for everything that went wrong, and Musk responds that things will still go wrong and you have to expect that. And he’s correct, but what’s notable here is how he’s asking for a level of understanding that he refuses to provide Twitter itself. Twitter has spent 15 years experimenting and iterating its policies to deal with a variety of incredibly complex and difficult challenges, nuances, and trade-offs, and as Musk demonstrates later in this interview, he’s not even begun to think through any of them.

My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization.

Again, this is the same sort of thing that the founders of these websites said… until they had to deal with the actual challenges of running such platforms at scale. And, I should note, anyone who’s spent any time at all working on these issues knows that “maximally trusted” requires some level of moderation, because otherwise platforms fill up with spam and scams (more on that later) and are not trusted at all. There’s a reason these efforts are put under the banner of “trust & safety.”

Finally, the “public platform” is the internet. And trust is earned, but opening up a platform broadly does not inspire trust. Being broadly inclusive and trustworthy also requires recognizing that bad actors need to be dealt with in some form or another. This is what people have spent over a decade working on. And Musk acts like it’s a brand new issue.

And so then we get to the inevitable point of any such discussion in which Musk admits that of course some moderation is important.

Chris Anderson: You’ve described yourself as a free speech absolutist. Does that mean that there’s literally nothing that people can’t say and it’s ok?

Elon Musk: Well, I think, obviously Twitter or any forum is bound by the laws of the country it operates in. So, obviously there are some limitations on free speech in the US. And of course, Twitter would have to abide by those rules.

CA: Right. So you can’t incite people to violence, like direct incitement to violence… like, you can’t do the equivalent of crying fire in a movie theater, for example.

EM: No, that would be a crime (laughs). It should be a crime.

And all the free speech experts scream out in unison at the false notion of “fire in a crowded theater.”

But just the fact that Musk (1) agrees with this sentiment and (2) thinks that it would obviously be a crime shows how little he actually understands about free speech or the laws governing it. As a reminder for those who don’t know: the “fire in a crowded theater” line was a non-binding rhetorical aside in a case used to lock up a protestor for handing out anti-war literature (not exactly free speech supportive). The Supreme Court Justice who used the phrase basically disavowed it in rulings soon after, and the case it came from was effectively overturned a few decades later by the case that established the actual standard Anderson gestures at: incitement to imminent lawless action (which, in most cases, crying fire in a theater absolutely would not meet).

Anderson then tries (but basically fails) to get into some of the nuance of content moderation. It would have been nice if he’d actually spoken to, well, anyone with any experience in the space, because his examples aren’t just laughable, they’re kind of pathetic.

CA: But here’s the challenge, because it’s such a nuanced [line] between different things. So, there’s incitement to violence, that’s a no if it’s illegal. There’s hate speech, which some forms of hate speech are fine. I… hate… spinach.

First of all, “I hate spinach” is not hate speech. I mean, of all the examples you could pull out… that’s not an example of hate speech (and we’ll leave aside Musk’s joke response, suggesting that if you cooked spinach right it’s good). But, much more importantly, here’s where Anderson and Elon could have confronted the actual issue which is that, in the US, hate speech is entirely protected under the 1st Amendment. And, we’ve explained why this is actually important and a good thing, because in places where hate speech is against the law, those laws are frequently abused to silence government critics.

But keeping hate speech legal is very different from saying that any private website must keep that speech on the platform. Indeed, keeping hate speech on a private platform takes away from the supposed “trust” and “broadly inclusive” nature Musk claimed to want. That would be an interesting point to discuss with Musk — and instead we’re left discussing what’s the best way to cook spinach.

Anderson again sorta weakly tries to get more to the point, but still doesn’t seem to know enough about the actual challenges of content moderation to have a serious discussion of the issue:

CA: So let’s say… here’s one tweet: ‘I hate politician X.’ Next tweet is ‘I wish politician X wasn’t alive.’ As some of us have said about Putin, right now for example. So that’s legitimate speech. Another tweet is ‘I wish Politician X wasn’t alive’ with a picture of their head with a gunsight over it. Or that plus their address. I mean at some point, someone has to make a decision as to which of those is not okay. Can an algorithm do that, or surely you need human judgment at some point.

First of all, broadly speaking all of the above are protected under the 1st Amendment. Somewhat incredibly, his final hypothetical is one I can talk about directly, because I was an expert witness in a case where a guy was facing criminal charges for literally Photoshopping gunsights over government officials, and the jury found him not guilty. But, also broadly speaking, there are plenty of legitimate reasons why a private platform would not want to host that content. In part, that gets back to the “maximally trusted” and “broadly inclusive” points.

But, on top of that, none of those examples are hate speech. Hate speech is not, as Chris Anderson bizarrely seems to believe, saying “I hate X.” Hate speech is generally seen as forms of expression designed to harass, humiliate, or incite hatred against a group or class of persons based on various characteristics about them (generally including things like race, religion, sexual identity, ethnicity, disability, etc.). The examples he raises are not, in fact, hate speech.

Either way, here’s where Elon shows how little he understands any of this, and how unfamiliar he is with all that’s happened in this space in the past two decades.

In my view, Twitter should match the laws of the country. And, really, there’s an obligation to do that. But going beyond that, and having it be unclear who’s making what changes to who… to where… having tweets mysteriously be promoted and demoted without insight into what’s going on, having a black box algorithm promote some things and not other things, I think those things can be quite dangerous.

Again, in the US, the laws say that such speech is protected, but that’s not a reasonable answer. We’ve gone through this before. Parler claimed it would only moderate speech that violated the law and then flipped out when it realized that people were getting on the site to mock Parler’s supporters or to post porn (which is also protected by the 1st Amendment). Simply saying that moderation should follow the law generally shows that one has never actually tried to moderate anything. Because it’s much more complicated than that, as Musk will implicitly admit later on in this interview, without the self-awareness to see how he’s contradicting himself.

There’s then a slightly more interesting discussion of open sourcing the algorithm, which is its own can of worms that I’m not sure Musk understands. I’m all for more transparency, and the ability for competing algorithms to be available for moderation, but open sourcing it is different and not as straightforward as Musk seems to imply. First of all, it’s often not the algorithm that is the issue. Second, algorithms that are built up in a proprietary stack are not so easy to just randomly “open source” without revealing all sorts of other stuff. Third, the biggest beneficiaries of open sourcing the ranking algorithm will be spammers (which is doubly amusing because in just a few moments Musk is going to whine about spammers). Open sourcing the algorithm will be most interesting to those looking to abuse and game the system to promote their own stuff.

We know this. We’ve seen it. There’s a reason why Google’s search algorithm has become more and more opaque over the years. Not because it’s trying to suppress people, but because the people who were most interested in understanding how it all worked were search engine spammers. Open sourcing the Twitter algorithm would do the same thing.

Chris then gets back to the moderation process (again in a slightly confused way about how Twitter trust & safety actually works), pointing out that “the algorithm” is probably less of an issue than all the human moderators, leading Musk to give a very long pause before stumbling through a bit of a word-salad response:

Well, I…I… I think we would want to err on the side… if in doubt, let… let… let the speech… let it exist. It would have… if it’s.. uh… a gray area, I would say, l would say let the tweet exist. But… obviously… in a case where perhaps there’s a lot of controversy where perhaps you’d not want to necessarily promote that tweet, you know… so…so… so… I’m not saying I have all the answers here, but I do think that we want to be very reluctant to delete things and be very cautious with permanent bans. I think time outs are better than permanent bans. 

But just in general, like I said, it won’t be perfect but I think we want to really have the perception and reality that speech is as free as is reasonably possible and a good sign as to whether there is free speech, is ‘is someone you don’t like allowed to say something you don’t like.’ And if that is the case, then you have free speech. And it’s damn annoying when someone you don’t like says something you don’t like. That is a sign of a healthy, functioning free speech situation.

Again, so much to unpack here. First off, that approach of “when in doubt, let it exist” has almost always been the default position of the major social media companies from the beginning. Again, it’s important to go back to things like Klonick’s paper which describes all this. It’s just that over time anyone who’s done this quickly learns that fuzzy standards like “when in doubt” don’t work at all, especially at scale. You need specific rules that can be easily understood and rolled out to thousands of moderators around the world. Rules that can take into account local laws, local contexts, local customs. It’s not nearly as simple as Musk makes it out to be.

Indeed, to get to the spot that we’re in now, basically all of these companies started with that same premise, realized it wasn’t workable, and then iterated. And Musk is basically saying “I have a brilliant idea: let’s go back to step 1 and pretend none of the things experts in this space have learned over the past decade actually happened.”

And, again, Twitter and Facebook — just as Musk claims he wants — tend to lean towards time outs over permanent bans, but both recognize that malicious actors eventually will just keep trying, so some people you will have to ban. But Musk pretends like this is some deep wisdom when every website with any moderation at all knew this ages ago. Including Twitter.

Second, his definition of free speech is utter nonsense (and, ridiculously, it got big applause from the audience). That’s not the definition of free speech, and if it were, then Twitter already has it. Tons of people I dislike are allowed to say things I dislike. You see that all over Twitter. But that’s not a reasonable or enforceable standard at all without context. The problem is not “someone I dislike saying something I dislike”; the problem is spam, abuse, harassment, threats of violence, dangerously misleading false information, and more. That Musk doesn’t understand any of this is just one more sign of how little he understands the topic.

Anderson then asks Musk about what changes he would make to Twitter, leading Musk to basically contradict everything he just said and go straight to banning speech on Twitter:

Frankly, the top priority I would have is eliminating the spam and scam bots and the bot armies that are on Twitter. You know, I think, these influence… they make the product much worse. 

Um, nearly all of those are legal (the scam ones are a bit more hazy there, but the spam ones are legal speech). And just the fact that he acknowledges that they make the product much worse underlines how confused he is about everything else. Dealing with the things that “make the product much worse” is the underlying point of any trust & safety content moderation program — and tons and tons of work, and research, and testing have gone into how Twitter (and every other platform) tries to manage those things, and they all pretty much end up at the same place.

To deal with the spam and the scams and the things that “make the product much worse” you have to have rules, and you have to have enforcement that deals with the people who break the rules, meaning that you have to have people knowledgeable about content moderation and who are able to iterate and adjust, especially in the face of malicious actors trying to game the system.

But it’s quite incredible for him to say “pretty much leave it up if it’s legal” one moment, and the next moment say his top priority is to get rid of spam. Spam is legal.

And, again, as anyone who has lived through (or read up on) the history of content moderation knows, platforms all went through this exact process. The process that Musk thinks no one has actually done. They all started with a fundamental default towards allowing more speech and moderating less. And they all realized over time that it’s a lot more nuanced than that.

They all realized that there are massive trade-offs to every decision, but that some decisions still need to be made in order to stop “making the product worse” and to figure out ways to build “maximal trust” and to be “broadly inclusive.” In other words, for all of Musk’s complaining, Twitter has already done all the work he seems to pretend it hasn’t done. And his “solution” is to go back to square one while ignoring all the people who learned about the pitfalls, challenges, nuances, and trade-offs of the various approaches to dealing with these things… and to pretend that no one has done any work in this area.

Every time I post about this, Musk’s fans get angry and insist I couldn’t possibly understand this better than Musk. And, again, I actually really admire Musk’s ability to present visions and get the companies he’s run to achieve those visions. But dealing with human speech isn’t about building a car, a robot, a tunnel, or a rocket ship. It’s about dealing with human beings, human nature, and society.

None of this is to say that, if Musk does succeed in the bid, he doesn’t have the right to make these massive steps back to square one. Of course he has every right to make those mistakes. But it would be a disappointing move for Twitter, a company that has been more thoughtful, more careful, and more advanced than many other companies in this space. And it would likely wipe out the important institutional knowledge around all of this that has been so helpful.

I know that the narrative — which Musk has apparently bought into — is that Twitter’s content moderation efforts are targeted at stifling conservatives. There is, yet again, no actual evidence to support this. If anything, Twitter and Facebook have bent over backwards to be extra accommodating to those pushing the boundaries in order to use Twitter mainly as a platform to rile up those they dislike. But, from knowing how much effort Twitter has actually put into understanding interventions and how to build a trustworthy platform, I fear that what Musk would do with it would be a massive step backwards and a general loss for the world.

Incredibly, there’s a pretty good analogy to all of this earlier in that video. At the beginning, Anderson plays a snippet of a taped interview he did with Musk a week ago (when they weren’t sure if he’d be able to attend in person). In that interview, Anderson points out that Musk predicted to him five years ago that Tesla would have full self-driving working that year, which obviously has not come to pass. Musk jokes about how he’s not always right, and explains that he’s only now realized just how hard a problem driverless artificial intelligence is, and he talks about how every time it seems to be moving forward it hits an unexpected ceiling.

The simple fact is that dealing with human nature and human communication is much, much, much more complex than teaching a car how to drive by itself. And there is no perfect solution. There is no “congrats, we got there” moment in content moderation, because humans are complex and ever-changing. Content moderation on a platform like Twitter is about recognizing that complexity and figuring out ways to deal with it. But Musk seems to be treating it as if it’s the same sort of challenge as self-driving, where if you just throw enough ideas at it you’ll magically fix it. Even worse, he doesn’t realize that the people who have actually worked in this field for years have been making the kind of progress he described with self-driving cars (getting the curve to move in the right direction before hitting some sort of ceiling), and he wants to take them all the way back to the ground floor for no reason other than that he doesn’t seem to recognize any of the work that’s already been done.


Canadian Vaccine Protesters Are Confused About the Law, Too


You will be surprised to learn that some of the people involved in the “Freedom Convoy,” a protest against Canada’s attempt to limit the freedom of the virus that causes COVID-19, are confused about more than just science.

The first example of this comes from the CBC, which reported this week on the bail hearing for Tamara Lich, one of the protest’s organizers. Lich and some other participants have been arrested and charged with what we would probably call “incitement to riot” but Canada more politely calls “counselling to commit mischief.” (The CBC said that “[b]efore her arrest, Lich told journalists she wasn’t concerned about being arrested,” which wasn’t the first and won’t be the last thing she’s wrong about.)

Prosecutors argued bail for Lich should be denied because she’s already proven that she “has no respect for the law” and that she and her husband and/or their associates have resources that would allow her to keep stirring up trouble if released. The Liches said they have virtually no assets and have not done anything wrong anyway.

I guess I should say something at this point to acknowledge that these people are called “the Liches,” something the nerds among you are surely all agog about. As you know, a “lich” is a type of undead creature that may be encountered while playing Dungeons & Dragons, although in my experience such encounters do not involve fighting but rather fleeing in panic and later searching local towns for vendors of replacement undergarments. (I of course gained this experience while researching my dissertation on the habits of the nerd population, not as a member of it.) Lich was an Old English word for “corpse,” but through fiction and especially D&D-style gaming has come to mean the animated corpse of a powerful evil wizard or sorcerer.

Here, though, it is just the last name of some Canadian goofball.

Two of them, actually, because Tamara Lich’s husband Dwayne—definitely not a name I would have associated with a lich before today—was also present for the hearing. Dwayne was there because he was proposing to act as “surety,” meaning he would have to report if his wife jumped bail. But the court had some questions about whether he would be an appropriate surety, given that he had been in Ottawa during the protest his wife had organized. The report doesn’t say whether he too had been actively protesting, but the court apparently suspected maybe he was doing more than helping his protest-organizing wife with her luggage.

Mr. Lich claimed he did not agree with the strategy of trying to blockade downtown Ottawa until the government stops persecuting the virus, but also said he didn’t see anything wrong with it. According to the CBC, he “equat[ed] the blockades to a large traffic jam or parked cars in a snow storm,” neither of which is a situation that people bring about intentionally to force other people to do something. “I don’t see no guns,” Lich told the court, placing himself mentally back at the scene of the protest. “I don’t see anything criminal as far as I can see. I just see trucks parked.” And doesn’t a person have a right to park his truck wherever he wants, and for as long as he wants, regardless of whether that might inconvenience others?

Well, no. And this is true even under the First Amendment to the U.S. Constitution—which Lich invoked at the hearing, although he is Canadian:

[Lich] questioned whether the Emergencies Act … was implemented legally, at times confusing the numbered amendments found in the U.S. Constitution with Canada’s Charter of Rights and Freedoms.

“Honestly? I thought it was a peaceful protest and based on my first amendment, I thought that was part of our rights,” he told the court.

“What do you mean, first amendment? What’s that?” Judge Julie Bourgeois asked him.

“I don’t know. I don’t know politics. I don’t know,” he said. “I wasn’t supportive of the blockade or the whatever, but I didn’t realize that it was criminal to do what they were doing. I thought it was part of our freedoms to be able to do stuff like that.”

Turns out Canada does have something similar to the First Amendment, and a bunch of other laws too. Those are the ones that apply in Canada. Of course Lich had the right general idea here (though the wrong answer), but citing the wrong country’s laws tends to make people think that maybe you haven’t really done your homework.

Whether this error contributed to the result or not, the court denied bail.

The second example was reported by Althia Raj on Twitter and also by The Globe and Mail. Both were reporting comments made on Facebook Live by Pat King, another protest organizer who was arrested this week. In fact, he was arrested during one of his video streams. “They’ve cornered me,” he told viewers, as if he had been engaged in a dramatic escape attempt instead of sitting in a truck goofing around on Facebook. But cornered him they had.

Thankfully, this was only after he had taken the opportunity to provide some legal advice to his fellow protesters. And what glorious legal advice it was. King told viewers that it was time to regroup (not retreat, he made clear), and said that if they were confronted by police while regrouping they should wave a white shirt or white underpants at the officers. “They cannot touch you if you’re holding a white flag,” he declared. “It’s international law.”

This, of course, is not true. Under international law, waving your underpants at a police officer only confers immunity if the country in question has signed the Treaty of Guadalupe-Hidalgo (which Canada has not) or if the underpants have gold fringe down the sides. Or maybe that’s maritime law, I don’t know. And I might be wrong. I certainly don’t want to discourage any anti-vaccine protesters from trying this. Also, anyone who sees them trying it should definitely get it on video and post that online immediately. Whether it works or not, it will definitely be educational.


The Myth of Artificial Intelligence

‘The Age of AI’ advances a larger political and corporate agenda.


The Night The United States Supreme Court Cancelled Law


Last week's news about Justice Barrett fretting about the Supreme Court being seen as partisan calls to mind the old joke about a defendant on trial for murdering his parents and begging the court for mercy because he's an orphan. If you've created the mess you find yourself in, you have no one to blame but yourself.

Nevertheless, there is credence to her protest (which other justices have since echoed) that the way the Court has acted recently is not actually "partisan." After all, Republican-appointed Chief Justice Roberts has frequently been joining the Democrat-appointed justices of late, which we wouldn't expect if political loyalty were all that was at the root of the Court's actions. As Justice Barrett herself suggests, to understand what the Court has been doing of late, we need to look deeper:

“To say the court’s reasoning is flawed is different from saying the court is acting in a partisan manner,” said Barrett[.] “I think we need to evaluate what the court is doing on its own terms.”

So let's do what she suggests and evaluate the Court's actions on its own terms. Because what we'll find is even worse than partisanship.

Justice Barrett argues that what the public is seeing is merely a difference in "judicial philosophies," as if the prevalent splits among the justices are but two sides of the same coin. But what we are seeing from this Court is hardly a case of the justices simply calling balls and strikes differently according to their respective vantage points. Instead we are seeing the majority deploy a "judicial philosophy" willing, if not eager, to erode the previously stalwart foundations upon which American law has historically depended. It is a philosophy of little more than legal nihilism. And it represents a profound change in the nature of the Court, of enormous if not cataclysmic consequence.

Trouble has been brewing for some time now, with the majority's increasing use of its "shadow docket" to wield a heavy hand on legal questions without any meaningful opportunity for briefing or substantive argument by anyone affected. Instead of carefully weighing the pros and cons of the particular issue raised by the case before them in an open and transparent way, as the Court traditionally has on matters of such significance, the justices are making ad hoc and inconsistent procedural decisions behind the scenes, despite the fact that these sorts of decisions have huge practical effect and impact people's rights just as much as they would in any case brought before them for full and reasoned review.

This problematic practice culminated a few weeks ago with the Court's rushed, unsigned, barely two-page, late-night order in Whole Woman's Health v. Jackson, when the majority declined to exercise its procedural powers to stop Texas's SB8, a facially unconstitutional law that offended the Constitution in almost every way a law possibly could, from coming into force. As a result, rather than upholding the Constitution, or protecting the public from a wayward state actor, or even acting consistently with its own principles of jurisprudence, that slim majority, with only a few ill-supported sentences, casually abdicated the Court's role as a protector of liberty and ruled instead as arbitrary, unaccountable autocrats.

There are at least two key reasons why the majority's behavior here is so deserving of such excoriation. The first relates to the specious way the majority misapplied procedural rules as convenient cover for producing substantively consequential outcomes, apparently deliberately, although even if it had been unintentional it would still be a problem. Procedural rules exist to help ensure that justice can be meted out timely and fairly. While it's true that in this case the Supreme Court found itself in the position of having to clean up the mess caused by the Fifth Circuit's own procedural hijinks (which had abruptly, and dubiously, snatched the Texas statute away from the district court's established review process, and thus made it practically impossible for that court to act before the law was supposed to go into effect), it was the Supreme Court's astonishing refusal to take corrective action that ultimately made review impossible. And it did so by turning the very procedural rules designed to help administer justice into outright obstacles obstructing it, opting instead to hide behind them with nothing more than a brief prevarication explaining why these rules had somehow, and suddenly, made it, the most powerful court in the land, unusually powerless to prevent a clearly unconstitutional law from going into effect.

In failing to act the Court also unilaterally overruled the long-standing judicial preference in American courts for preserving the status quo when there is a reasonable chance of a law potentially causing an improper injury before the matter has been able to receive appropriate review. And not only did the Court ignore that concern, but it all but invited those injuries to occur. The statute in question had basically walked up to several areas of settled precedent protecting constitutional rights and proverbially punched them all in the nose, openly daring the Supreme Court to come after it. Yet, shockingly, the majority declined to.

This refusal to defend the Court's own precedents was yet another way the majority's behavior was aberrant and destructive. Precedent is what gives the law stability, because once the Court has spoken we can all know where we stand. Sure, new cases will come up and be litigated, but the questions then will be about if and how precedent applies to the new situation. Sometimes this inquiry may result in narrowing or limiting a precedent's reach, but precedent has historically been outright nullified only on the rarest of occasions, and only when there has been a material change in the circumstances upon which the Court's reasoning had rested, like a new statute, a new constitutional amendment (rare), or some other fundamental shift in society prompting a second look by the Court.

And even then the Court's practice has not been to simply ignore or overturn its previous rulings; rather, it would generally issue decisions to explain what holdings were being revisited, and why, so that the new decisions could take on the same weight of recognized authority the previous precedent once had. But that standard went out the window on that Thursday night when it issued the Whole Women's Health order. With this order it signaled that it is happy to cavalierly trash the Court's previous rulings, and, worse, with no explanation. While reasonable minds may disagree about the wisdom of a particular Court decision, everyone should be able to read its analysis to understand how the Court arrived at its conclusion. But there is nothing here in this order to legitimize the Court's sudden and drastic rejection of all the past precedent the statute implicated. Worse, in so rejecting it, it has told the world that we can never know what the law is, because it can change instantly, depending entirely on the majority's mood of that moment.

Such a reality is untenable. No matter what you think of the Texas statute, even if you believe in or support its policy goals, what the Supreme Court did on this Thursday night should still strike fear in your heart. Because the impact of what it did transcends any particular law or policy. Not only did it undermine its own esteem as an institution, but it made America unsustainable, a hollowed-out Potemkin Village of abandoned constitutional principle, and Americans no better off than the wretched citizens of the ancient feudal empire that inspired the story.

What happened on that Thursday night was the catastrophic undermining of not only the Court's own legitimacy but the legitimacy of the entire American legal system. It left all our laws and freedoms, and even the very adjudication of these questions, subject only to the capricious whim of the handful of people with enough power to unilaterally decree, with no argument, consideration, or any need to justify themselves, how we must live our lives. We might as well replace their black robes with crimson ermine and sit them on thrones, so at least we can all see and acknowledge the sheer unchecked power they now rule us with.

This is not how our constitutional order has worked. It is not how our constitutional order can work. Yes, courts have always had lots of power. And the Supreme Court in particular has always had an enormous amount of power to shape our legal world. But there were always apparent rules tempering this power. Which meant that such things as reason, persuasion, equitable procedure, predictable precedent, transparency, and notions of fair play could function as guiding pillars within which advocacy took place so that, win or lose, we all could believe in the justice of the result. But not anymore. With this order all those basic tenets have now been bulldozed. Even any sort of reasonable standard for injunctive relief is out the window. As Justice Kagan noted in her dissent, the Court's unconstrained behavior has become increasingly "unreasoned, inconsistent, and impossible to defend." In other words: our law has itself become lawless.

Supreme Court justices are of course human beings and therefore fallible, and the Supreme Court itself is a human institution that necessarily has to evolve along with the society it serves. But the concern is not that the Supreme Court may be evolving, because evolution is one thing; radically altering the operation of the Court practically overnight is another. And what the majority did can hardly be explained away as mere mistake, as in, "Oops, five justices' pens slipped and they accidentally repudiated decades if not centuries of past practice and precedent." But when even the most generous view of what happened is incompetence, it severely undermines the esteem of the institution and those who inhabit it.

Nor can we say it's simply a matter of one bad decision. Bad decisions have happened before, and while it's never good when they do, as long as the system still works they can eventually be overcome. But what happened here represented a fundamental shift in the way the Court exercises its power, from one of predictable certainty to one of subjective judicial impulse, and there's no overcoming that change.

How could we? For those of us connected to the legal profession, what power would we still possess as practitioners to influence the cause of justice in this new system? What skills could we still exercise? How could we continue to play our own constitutional role in furthering justice in the courts when everything we were taught in law school about the American legal system has just suddenly been rendered moot?

Yes, life will go on for most tomorrow, and the day after, and the day after that. But for how long can we deceive ourselves that everything remains normal when the new normal is anything but? When the Supreme Court can so dramatically change our understanding of the law and the scope and dimension of our rights with little more than a snap of its fingers, how are we to live in a society predicated on the rule of law and guaranteed rights? How can we even tell ourselves that we are? We're like the coyote that has run off the cliff, and sooner or later we're going to notice that there is nothing supporting us anymore. And then where will we be?
