Steven L. Taylor
Friday, May 19, 2023
About Steven L. Taylor
Steven L. Taylor is a Professor of Political Science and a College of Arts and Sciences Dean. His main areas of expertise include parties, elections, and the institutional design of democracies. His most recent book is the co-authored A Different Democracy: American Government in a 31-Country Perspective.
He earned his Ph.D. from the University of Texas and his BA from the University of California, Irvine. He has been blogging since 2003 (originally at the now defunct Poliblog).
DeSantis has done wonders for Florida: Disney cancels plans for $1bn campus in Florida amid battle with DeSantis
One wonders whatever he will do for them next.
More than half of the world’s lakes have shrunk in past 30 years, study finds (I should point out that human consumption has also had an impact on lakes, though the majority of the effect is still attributable to climate change)
Global heating has likely made El Niños and La Niñas more ‘frequent and extreme’, new study shows
But Inhofe threw a snowball in the Senate once, proving it’s all a hoax.
More fun with Puerto Ricans: US airline ‘sincerely apologizes’ to family over Puerto Rico passport error
Not just US police:
Suffering from dementia, 95 yo, 5’2, 95#s, walking with a walker and a steak knife. How terrifying.
From Photography for the Ocean – in pictures
Wow. Just wow.
Two thoughts. Obviously the cops feared for their lives. Hey, if you have a Taser, you should use it.
I can’t believe how naive the Supreme Court’s opinion on algorithms is. “If your product does harm but you don’t deliberately program it to do harm and instead use machine learning, you can’t be held responsible.” What could go wrong?
What court case was this?
@Kurtz: This one (no subscription needed). “As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content,” Thomas wrote. “The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”
From today’s newspaper, a follow-up to yesterday’s conversation on land ownership by Chinese in Alabama.
Today In History:
@MarkedMan: My guess is that they feared a slippery slope from there to gun manufacturers being held liable for the use of their products under the same arguments.
Scaring little kids with a high-powered semi-automatic rifle is a viable form of protest?
Hey! Let my comment out of moderation, please.
No. It’s upholding Section 230–which is “the 26 words that created the internet.”
@Tony W: They feared restrictions on harmful and false information, which would disproportionately hurt Republicans. From NRO-Corner this morning,
FOX/GOP has put a lot of effort into building an alternate reality with a body of alternate “facts”. The Federalist Society would hate to see it threatened.
@CSK: My post about the gun nut at a school bus stop went into moderation too. I don’t think that has happened to me in a long, long time*. I wonder if something’s gone buggy.
* and then it was a recurring thing that started up suddenly and eventually ended just as suddenly
@Tony W: Think about how many technologies use, or would like to use, machine learning. To name just two: automated driving and medical diagnosis. Are they going to be granted similar blanket immunity?
And my post was a response to yours. I wonder what tripped the wire?
@OzarkHillbilly: I think the larger lesson of Inhofe’s stunt is why it’s wiser to refer to “climate change” than “global warming,” despite the assertion that the increasing temperature in the oceans is the proximate cause.
Guys, I gotta say it is my opinion that Section 230 ought to have bipartisan support. Yeah, it makes some odious right-wing stuff possible. AND, it makes discussion of abortion, contraception, homosexuality, and trans issues possible. And lots more.
Section 230 has a similar impact to “common carrier” laws as regards telephone service. If terrorists used the telephone to organize themselves, does it make sense to sue the telephone company? I don’t think it does.
Good news for Ukraine:
Biden agrees to joint fighter jet training for Ukrainian pilots
This will soon be followed by an announcement that the US will permit other nations to provide F-16s to Ukraine.
@Jay L Gischer:
IMO, the crucial difference is that the phone company does not suggest calling specific people who might also be interested in terrorism.
@Scott: Your post triggered a memory from my produce days about the Tanimura and Antle agribusiness. taproduce.com/our-heritage/
[CRT Trigger Warning!!!]
The policies started in the 1910s in California continued into the WWII years: while George Tanimura and his family went to an internment camp, and his younger brothers Charlie and Johnny fought in the war (avoiding internment and demonstrating their family’s [wasted] loyalty), Lester and Bud Antle became government contractors.
@Jay L Gischer: For your analogy to hold, the telephone company would have to be selling advertisements on your calls and information about where and when you make them, so they have a financial incentive to get you as riled up as possible so you are calling everyone you know.
Then yes, if they start giving Nazis your kid’s number, they should be liable even if it is their algorithm.
GOP negotiators hit ‘pause’ on debt talks
What the media is not reporting is that this walking away from the table is exactly what the “Freedom” Caucus is demanding.
Freedom Caucus says ‘no further discussion’ on debt ceiling until Senate passes House GOP bill
So GOP House leadership is not leading; it is following the extremists in their party and involved in bad faith negotiation.
I’m getting glasses, again.
I think I first noticed a problem in my left eye around age 16 or 17. At the time, I was diagnosed with myopia in that eye, and no problems at all in the right one. I only wore glasses when I needed to see things farther out. While driving, for instance, or at the movies. Eventually I’d lose my glasses and notice little difference.
This went on with intermittent use of glasses until in 2015 I got diagnosed with mild astigmatism in both eyes. But I still saw perfectly well at close and medium range. While working with a computer or watching TV. That time the glasses lasted maybe six months before I lost them.
Why I keep losing them is unclear. I think I know what must have happened to the last pair. After driving to the airport, I took them off and placed them in their case, fully intending to put them in my laptop bag. The phone rang while I was getting out of the car and grabbing my things. I must have placed the glasses on top of the car in order to grab the phone, and then left them there. I just don’t remember doing so.
I didn’t notice until mid flight, when I realized I couldn’t quite make out the moving map on the small overhead screens. I looked in my bag and the glasses weren’t there. I figured I’d left them in the car. Next day upon returning, the glasses weren’t in the car. I looked under the seats, between the seats, any place they might have wound up in, and nothing.
I still don’t think I need them much. I’ve been driving without major incidents for literally decades without them. My propensity to sit close to the screen at the movies (I like the big screen to seem BIG) means I see just fine. But lately I can’t make out some titles here and there when browsing the streaming menu on the TV. While driving, I notice I sometimes need to get closer to a road sign to see it.
I think what I’ll do this time is see how useful they are now, and then get an extra pair just in case. And try to be more careful.
I love the title of this piece:
“Trump Might Not Be Able to Use His Idiocy as a Defense Anymore”
Maybe some among his die-hard base have a greater idiocy he can borrow.
Ok, let’s try a different analogy. Let’s suppose that 50 years ago, the John Birch Society had taken out classified ads in newspapers across the country that said, “Jews Will Not Replace Us! Call “.
Naturally, I think quite a few newspapers would refuse to run those ads, but some wouldn’t. Could they be sued? I actually don’t think so, but if some of y’all know more about this than I do, please inform me.
There is speech that is illegal. But the above doesn’t meet those criteria, as far as I know.
Jim Brown has died at 87.
Perhaps the original GOAT.
Leading rusher 8 of his 9 years in the league.
Only player in history to average over 100 rushing yards per game over his career.
And an equally great lacrosse player.
Someone here posts as Jim Brown…I hope he will change his signature.
The Senate is NEVER going to pass the House GOP bill.
Use the 14th Amendment to put an end to this craziness.
@Mister Bluster: RIP to a Great American and a man committed to values over wealth and status
@Jim Brown 32:
He was one of the great civil rights pioneers.
@Jay L Gischer:
Traditionally, newspapers and magazines have been held accountable for the content they publish. I don’t know about classified ads. Otherwise, though, it’s rare for a reporter to be sued for libel without the paper or magazine that published the report being sued as well. I vaguely recall reading somewhere that this applied also to letters to the editor that get published.
My problem with social media companies is that they are not mere carriers. They do exercise something like an editorial function in what they promote to whom, and what they allow to be posted on their platforms.
Section 230 dates from 1996, when the commercial internet was getting started. I don’t know what the intent was, but the internet of today, especially social media, is a completely different beast than that of 1996. Regulation should adapt to developments.
@Kathy: The problem is you’re only talking about social media, but Section 230 covers WAAY more than that, including your ISP. To take the earlier telephone example: repealing Section 230 would make your phone company liable if you say something defamatory or illegal. That’s what Section 230 is to your ISP, and your ISP’s ISP, and so on. Without Section 230 they would be held liable for anything posted anywhere on the internet if any bit of the data traveled over their network at any point.
What’s kind of funny is that when phone carriers went fully digital they basically all became VOIP, which puts them on par with any other internet traffic and thus within the scope of Section 230.
As you can see without section 230 the internet would be impossible….
So of course the righties want to get rid of section 230 so they can go after any company/service they deem insufficiently right wing.
Seeing people railing against section 230 here is incredibly disappointing.
I was on the internet in the early 90s and I can assure you the only thing that’s changed is the bandwidth is better, the videos are higher resolution and there’s more people/devices connected. People were every bit the same back then as they are now. Flame wars, trolling, etc are terms that all date back to the early to mid 80s…
Also regulations HAVE adapted to various developments and if you could spend just a little time looking into the related laws you would see that the last update was in 2018….
That’s completely irrelevant as internet backbone providers and your ISP aren’t publishing anything when you decide to visit a site offering illegal services etc. When you post something illegal it’s not the ISP publishing it nor when you’re browsing the web. They are just relaying the data you requested.
So in your world where section 230 is repealed all someone would need to do to close down all the major internet backbones is look at a single illegal image….
I’ll leave you all with this, as I sign off for the weekend:
Florida Governor Signs 5 Bills Labeled As “Protecting Innocence of Florida’s Children”
They are not “protecting children”. They are codifying hate, oppression, and forced speech into state law.
I hate it when people cry “Nazi” at the slightest discomfort. This? This is how the Fourth Reich starts. This is the state telling people who they can and cannot be. And using “disclaimers” as a loophole to (try to) protect themselves against liability while pushing a state-backed, extremist, religious agenda.
I seriously hope that Bob Iger takes Disney on the offensive and uses their ~$200B market value and massive media presence to destroy DeSantis and the Florida legislature. I hope that Google, Apple, Microsoft, Facebook, and Amazon step up and add the force of their influence to that of companies like Wells Fargo, Levi Strauss, IBM, PayPal, and Getty Images, who are already supporting LGBTQA+ issues with hundreds of millions of dollars in donations and active PR campaigns.
@Jay L Gischer: I still think your analogy is flawed and would be happy to argue it out separately, but in this case it doesn’t matter. The Supremes didn’t rule on 230. They ruled that no harm was shown. It never reached 230. Why? Because it wasn’t a person’s decision that fed the hateful stuff into the newsfeeds, but an algorithm.
I may disagree with you about 230, but a straightforward decision on that would be no worse (and no better) than upholding tort limits on gun manufacturers. This decision is much worse and, as I said, demonstrates an incredible naiveté. All kinds of things rely on algorithms developed by machine learning (autonomous driving, medical diagnostics). To simply say that a product that relies on machine learning is exempt from tort law is… stupefying. Just read what Thomas said in the majority opinion:
So – if a person had done it with evil intent, it is a harm, but if an algorithm owned by a person did it for unknowable reasons, the harm disappears?
@Matt: There’s a whole lotta yardage between “getting rid of every protection encompassed in 230” and “update to address the changing nature of the internet”.
How can you even say that? Do you think Google is AltaVista, blindly matching search words and giving you whatever matches the closest? Do you think Facebook or Twitter is the equivalent of a dial-up bulletin board (which is why the 230 law came about in the first place), letting users post to a shared space and treating every post as equal? Not in the slightest. They promote specific content in order to get you to engage more so they can make more money. If the content they promote is harmful, then they should be liable to the extent they promoted it to you.
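To make that distinction concrete, here is a toy sketch contrasting blind keyword matching with engagement-weighted ranking. The posts, field names, and scores are all invented for illustration; real platforms use far more complex ranking models.

```python
# Toy contrast: "blind" keyword matching vs. engagement-weighted ranking.
# All posts and scores below are invented for illustration.

def blind_match(query, posts):
    """Return posts containing the query term, in stored order (AltaVista-style)."""
    return [p for p in posts if query in p["text"].lower()]

def engagement_rank(query, posts):
    """Return matching posts, reordered by predicted engagement (modern-feed-style)."""
    matches = [p for p in posts if query in p["text"].lower()]
    return sorted(matches, key=lambda p: p["predicted_engagement"], reverse=True)

posts = [
    {"text": "Measured policy analysis", "predicted_engagement": 0.2},
    {"text": "Outrageous policy hot take!!!", "predicted_engagement": 0.9},
]

# Blind matching preserves stored order; engagement ranking surfaces
# the hot take first, because it is predicted to keep you clicking.
print([p["text"] for p in blind_match("policy", posts)])
print([p["text"] for p in engagement_rank("policy", posts)])
```

The liability argument above turns on exactly this second step: the platform, not the user, chooses the ordering.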
With very few exceptions, criminal law requires “mens rea”–a “guilty mind”. If I dig a deep hole in my backyard and push my neighbor into it in the hopes of killing him, I’m a murderer. If I dig that same deep hole and my neighbor trips and falls into it, I might be liable for negligence.
Nobody is forcing people to watch ISIS videos–just like nobody is forcing me to watch science videos. The algorithm that sees me watching videos on astronomy and physics and lectures from the Royal Academy also offers up “here’s why flat-Earthers are wrong” videos. You know what I do? I click the three dots and select “Don’t recommend this stuff”. And you know what happens? YouTube has stopped recommending that stuff.
YouTube is not forcing anyone to watch anything. In fact, they offer a very simple way to modify the algorithm so that it doesn’t show you things.
Individuals are responsible for what they do. “Google made me do it!” is, in my opinion, right up there with “The Devil made me do it!”.
An individual has to click on a suggested video–because they want to watch it. They have to choose to believe what it says. And they have to go through an amazing amount of active choices in order to become a terrorist. That’s not on Google. That’s on them.
If I linked you to an ISIS recruitment video, would you suddenly join the jihad against infidels?
@MarkedMan: I’m responding to specific points that Kathy made so I don’t get your point? They already have updated section 230 multiple times including the latest update in 2018.
Frankly I don’t see what you call the “changing nature of the internet”. If you could assist me in seeing what you mean I’d appreciate it.
Man the edit function is an absolute mess for me today. I’m not getting the option to edit then magically after 5 refreshes I get the option. When I click the option to edit it inserts the text from a prior unrelated comment which forces me to copy paste the actual comment I clicked to begin the editing.
Somehow I think “the algorithm didn’t intend to do any harm” isn’t going to work as a defence when people start getting squished by autonomous vehicles running around and not stopping in time.
Not that some eager-beaver lawyer won’t use this SCOTUS decision to argue such, however.
Little indicators sometime convey significant messages.
President Zelensky is on his way to the G7 in Japan aboard a French air force plane.
Also, indication now that F-16s from Europe are on the agenda: US indicates “no objections to transfer”
Yes. Murder requires intent. Negligence does not, by definition. A tort (harm) due to your negligence makes you liable. But… the judges said if you interpose an algorithm between yourself and the negligence then you cannot be held liable for the harm. And of course the algorithm cannot be held liable, as it is not a person.
If a human being noticed someone acting twitchy and realized they could make them twitch more by showing them horrendous photos and videos, and the person eventually acted out per those videos and killed someone, a trial could determine the instigator’s criminal or civil liability. Unless, according to the Supremes, they interpose an algorithm in the middle.
Which is why I called the decision naive rather than malicious. It will be overturned because of cases like you just mentioned. Or, given the current court, they will allow two mutually exclusive rulings to exist simultaneously.
So was I.
People change more slowly, but they do change. For instance, back then I’d have posted a savage personal attack for your unwarranted assumption, or outright misunderstanding, that I said section 230 needs to be repealed.
I no longer do that.
The claim was not that Youtube made anyone do anything, but that Twitter promoted ISIS’s ideology and recruitment. The latter is true. The notion that an algorithm shields anyone from liability is inane, given that someone, or various someones, decided to implement said algorithm for a very intentional purpose. Not just that, but they maintain it and do poorly at moderating content.
NPR and CNN are reporting that the Representatives are back to negotiating tonight.
Suggestion algorithms basically didn’t exist then, let alone the discovery that pushing outrage kept people stickier.
I don’t think anyone here wants to remove protections on hosting content. The question is whether the choice of what content to suggest (there’s lots to choose from!) is an active editorial choice that wouldn’t be automatically protected.
If you linked 1.5M troubled teens to incel communities where they glorify school shootings, are you responsible for the increase in school shootings that will result?
What if it was demonstrated that you knew that you were promoting this content and put no effort into dialing it back? What if it were shown that you were recommending content that was incrementally more extreme?
And now the kicker: what if you can algorithmically determine whether content is likely to be about something or inciting something — not perfectly, but 80% accuracy?
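As a toy illustration of that last point (and only a toy: real moderation systems use trained ML classifiers, and every cue word and threshold below is invented), a crude "discussing X vs. inciting X" flagger might look like:

```python
import re

# Crude sketch of flagging "inciting" vs. "discussing" content by keyword
# scoring. The cue lists and threshold are invented for illustration only;
# real systems use trained classifiers, and even those make errors.

INCITEMENT_CUES = {"join", "attack", "kill", "glorious"}
DISCUSSION_CUES = {"why", "wrong", "history", "explained", "analysis"}

def incitement_score(text):
    """Count incitement cue words minus discussion cue words."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return len(words & INCITEMENT_CUES) - len(words & DISCUSSION_CUES)

def flag_for_review(text, threshold=1):
    """Flag text when incitement cues outweigh discussion cues."""
    return incitement_score(text) >= threshold

print(flag_for_review("join the attack, it will be glorious"))          # True
print(flag_for_review("why extremist recruiting is wrong: a history"))  # False
```

A system like this is imprecise by design; the point of the 80%-accuracy hypothetical is that even imperfect detection undercuts the claim that a platform could not have known what it was recommending.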
Suggestions existed back then and I didn’t have to click on them. Sure there is a different form of automation involved in the generation of suggestions today but I am still not forced to click them. As clearly stated above by Mu Yixiao you have the power to not only ignore those suggestions but to also shape future suggestions. If you were getting more extreme content it’s because you were clicking more extreme content.
Except this very post is talking about removing protections for hosting or transmitting content. Who decides what content flowing over your network makes you liable? What content is no longer protected? How do you deal with people who intentionally access said content over your network to bring liability into the equation and thus forcefully shut you and/or your business down? Do you have any idea what mesh networking is? With the way providers are using it to offload from their backbone, you’re looking at individuals becoming legally liable without a clue that they are. Hell, if you provide wifi for a building you’re just opening yourself up to legal trouble in your hypothetical.
People don’t seem to understand the hardware aspect of the internet’s setup these days…
@Kathy: Then what exactly are you trying to say? You start off claiming that providers should be held liable for “content that they publish” via examples that you have generated. You conflated Section 230 with “social media companies” and then stated that Section 230 was passed in 1996 and should be updated. I pointed out both that your examples are irrelevant when it comes to the internet and that Section 230 has been updated several times, including recently. If you’re mad about that then you should probably do some research before making such comments.
The definition of “social media” is websites and applications that enable users to create and share content or to participate in social networking. Just like them, internet providers merely provide the framework for you to create content or to network.
@MarkedMan: Define harmful content and how to regulate it. How do you detect and stop the content flowing over your network from somewhere else?
I can assure you that your definition of harmful content will NOT align with the MAGAs etc. Anything you setup to make content you don’t like illegal will be used by others to do the same.
So, according to your logic, you have zero problem with Google (or any internet company) being held responsible–and punished–for suggesting resources for people seeking information on abortions, birth control, being LGBTQ+, being transgender, or wanting medical assistance with transitioning from one sex to another. All of those things are illegal (or pending) somewhere in the US.
You want “things I don’t like to be illegal, and things I like to be legal”–completely ignoring the fact that there’s no way to differentiate the two (except “Because I say so.”)
If you believe that YouTube should be held liable for their algorithms suggesting ISIS videos*, then you must believe that they should be held liable for suggesting videos relating to LGBTQ+.
Or… do you want one set of laws for the people you like, and another set for the people you don’t?
This is the price of freedom: The people you hate get exactly the same rights and protections that you do. I can’t find the exact quote, but it goes something like this: A democracy is judged on how it treats its most hated members.
The Satanists get to put their monuments next to the Baptists, the Nazis get to march in Skokie, and the people you don’t like get exactly the same rights and protections as the people you do.
* YouTube actually takes down extremist videos when they’re alerted to them.