Emotional_Series7814

  • 1 Post
  • 7 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • I don’t consciously make these calculations either, but what you just described sounds exactly like how I choose what to click on. Also came here for suggestions!

    I’ll say that I’ve looked up hobbies I enjoy but don’t think about much so I can boost my engagement on the Fediverse. Normally I wouldn’t bother, but I want to help this place grow, so I’ve let in things I have a milder interest in alongside my usual interests. This is also how I get variety in the posts I see, since I usually stick to /sub. When I wander out, it’s on purpose and to a specific known community, because /all usually has some depressing political news or ragebait that would get me to outrage-click. I’m here to have a good time, not to doomscroll or get angry. Kbin has no algorithm designed to keep us scrolling, but those things do generate the most engagement, so it’s only natural they end up on /all often enough (though not as often as they’d appear on Reddit’s popular page) that I feel a desire to avoid /all.






  • “We believe that users should have a say in how their attention is directed, and developers should be free to experiment with new ways of presenting information,” Bluesky’s chief executive, Jay Graber, told me in an email message.

    Of course, there are also challenges to algorithmic choice. When the Stanford political science professor Francis Fukuyama led a working group that in 2020 proposed outside entities offer algorithmic choice, critics chimed in with many concerns.

    Robert Faris and Joan Donovan, then of Harvard’s Shorenstein Center, wrote that they were worried that Fukuyama’s proposal could let platforms off the hook for their failures to remove harmful content. Nathalie Maréchal, Ramesh Srinivasan and Dipayan Ghosh argued that his approach would do nothing to change some tech platforms’ underlying business model that incentivizes the creation of toxic and manipulative content.

    Mr. Fukuyama agreed that his solution might not help reduce toxic content and polarization. “I deplore the toxicity of political discourse in the United States and other democracies today, but I am not willing to try solving the problem by discarding the right to free expression,” he wrote in response to the critics.

    When she ran the ethics team at Twitter, Rumman Chowdhury developed prototypes for offering users algorithmic choice. But her research revealed that many users found it difficult to envision having control of their feed. “The paradigm of social media that we have is not one in which people understand having agency,” said Ms. Chowdhury, whose Twitter team was let go when Mr. Musk took over. She went on to found the nonprofit Humane Intelligence.

    But just because people don’t know they want it doesn’t mean that algorithmic choice is not important. I didn’t know I wanted an iPhone until I saw one.

    And with another national election looming and disinformation circulating wildly, I believe that asking people to choose disinformation — rather than to accept it passively — would make a difference. If users had to pick an antivaccine news feed, and to see that there are other feeds to choose from, the existence of that choice would itself be educational.

    Algorithms make our choices invisible. Making those choices visible is an important step in building a healthy information ecosystem.


  • Here’s the text!

    Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

    But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, it may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.

    There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our pre-existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

    Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

    Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

    Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define, but that largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

    The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand.

    In my ideal world, I would like to be able to choose my feed from a list of providers. I would love to have a feed put together by librarians, who are already expert at curating information, or from my favorite news outlet. And I’d like to be able to compare what a feed curated by the American Civil Liberties Union looks like compared with one curated by the Heritage Foundation. Or maybe I just want to use my friend Susie’s curation, because she has great taste.

    There is a growing worldwide movement to provide us with some algorithmic choice — from a Belgrade group demanding that recommender algorithms should be a “public good” to European regulators who are demanding that platforms give users at least one algorithm option that is not based on tracking user behavior.

    One of the first places to start making this vision a reality is a social network called Bluesky, which recently opened up its data to allow developers to build custom algorithms. The company, which is financially supported by the Twitter founder Jack Dorsey, said that 20 percent of its 265,000 users are using custom feeds.

    On my Bluesky feed, I often toggle between feeds called Tech News, Cute Animal Pics, PositiviFeed and my favorite, Home+, which includes “interesting content from your extended social circles.” Some of them were built by Bluesky developers, and others were created by outside developers. All I have to do is go to My Feeds and select from a wide menu of choices, including MLB+, a feed about baseball; #Disability, which picks up keywords related to disability; and UA fundraising, a feed of Ukrainian fund-raising posts.

    Choosing from this wide selection of feeds frees me from having to decide whom to follow. Switching social networks is less exhausting — I don’t have to rebuild my Twitter network. Instead, I can just dip my toes into already curated feeds that introduce me to new people and topics.


  • I don’t moderate anything.

    Quotes taken from https://maya.land/monologues/2023/07/01/spez-feudalism-reddit.html

    Imagine starting [a subreddit], hyping it up, patiently providing four-fifths of the content until people show up, moderating spam, moderating jerks, growing it gradually over time. Setting rules, establishing tone, running the weekly topical threads. Would you feel like that /r/whateverItWas existed because of Reddit the company? Would you feel like it fundamentally belonged to his Royal Highness Steve, and Steve was just delegating it to you to run? No! You started it! You shaped it! You collaborated with the people it attracted to make it what it is! Even those users – they could switch tomorrow to /r/whateverItWasTwo and you couldn’t do a thing about it – if they decided they didn’t like your vision for /r/whateverItWas, they would, so the fact that they’re still here is a kind of voting with your feet, it validates what you’re doing… To the extent that /r/whateverItWas exists as a thing within Reddit as a whole, to be run or misrun, managed or mismanaged? It feels like yours.

    But at the same time, to an external observer – you can see how they would feel that this is pretty silly, right? The thing that’s “yours” is nothing but rows and columns in Reddit’s databases, a series of flags giving you the power to moderate. The only thing you have is set in Reddit’s systems, a permission to edit stuff under a certain scope a bit differently than other users, wowee aren’t you important. It’s not you who has a license to the user posts, it’s not you who controls anything but a tiny little square of grass Reddit let you mow. You’re gonna protest over that? The world at large already doesn’t understand why you might volunteer for this work, why you might care enough to do it unpaid – you seem like a schmuck to them, a victim.

    or a power tripper.

    I’ll admit that some mods probably are on a power trip. But a clear example of “probably not, they have an actual reason to want to stay in power” is r/askhistorians: on a subreddit specifically about history, one that only allows informative replies complete with a works cited, you probably don’t want random people replacing mods who have lots of historical knowledge. They care about the online space they’ve built, not about having a ban hammer they can wield with prejudice. I’d imagine a lot of other mods are pretty similar: they have knowledge about their niche community (though probably not as much as the people on r/askhistorians) and a certain subreddit culture they don’t want to see collapse and fall apart. They’d rather preserve the online space they and many other people enjoy, even if it just looks like free labor and power tripping to outsiders whenever they don’t want to just up and abandon Reddit.