It appears API rate limiting has effectively killed these alternatives. You essentially get nothing but HTTP 429 "Too Many Requests" errors.

Lemmy sadly does not have the active niche news and discussions I want, but now nothing can be read without going to Reddit. I hate Spez.

    • Sunny@lemmy.world · 1 year ago

      There was the Pushshift project, which archived all of Reddit’s posts and comments in text (JSON) form. You can download the data here: posts, comments.

      If you’re on Linux, once you have downloaded and extracted the respective file, you could run something like `grep -m 1 '"id":"11eoagd"' RS_2023-03 | jq`, where `11eoagd` is the post ID. It’s not pretty, but it works.
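The same line-by-line lookup can be sketched in Python for anyone not on Linux. This is a minimal sketch assuming the extracted dump is newline-delimited JSON, one object per line; `find_post` is a hypothetical helper name, and the file name and post ID are just the examples above:

```python
import json

# Minimal sketch: stream a Pushshift-style dump (one JSON object per
# line) and return the first object whose "id" matches, like `grep -m 1`.
# Real dumps are tens of gigabytes, so we read line by line instead of
# loading the whole file into memory.
def find_post(path, post_id):
    with open(path, encoding="utf-8") as dump:
        for line in dump:
            post = json.loads(line)
            if post.get("id") == post_id:
                return post
    return None
```

For example, `find_post("RS_2023-03", "11eoagd")` would return that post’s JSON object, or `None` if the ID isn’t in the file.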

  • ramplay@lemmy.ca · 1 year ago

    “But now nothing can be read without going to Reddit”

    I’m here right now! Reading to my heart’s content.

  • P03 Locke@lemmy.dbzer0.com · 1 year ago

    Well, Reddit is practically dead, too. Just give it another six months or so of bad decision-making, plus whenever that IPO finally drops.

  • Opalium@lemm.ee · 1 year ago

    There were some discussions among the teddit hosts about attempting to use scraping instead, but it’s not easy and requires a lot of changes to the code. Not to mention it’s going to quickly become a cat-and-mouse game if Reddit makes changes to their site. It’s just not worth it at this point. Reddit doesn’t want us.

    • Cyyy@lemmy.world · 1 year ago (edited)

      Difficult to almost impossible. I recently started coding my own client for Reddit (I wanted a way to still get NSFW content when third-party clients go dead), and Reddit is fucking annoying as hell. Everything you do, they throw issues at you. Every time, shitty 429 errors (rate limiting), even if you are logged in. Just using the user agent of a normal browser gets you rate-limited, so spoofing a normal browser doesn’t work. Sending a few too many requests (like the Stealth app, which is basically a parser for Reddit’s website) gets your device IP banned. If you then open Reddit in a browser, they throw an error in your face that basically says “fuck you, we think you are a bot. gtfo.”

      Bypassing this rate limiting is almost impossible even if you try to spoof a browser. I tried for the last few days and just gave up because it’s too annoying and buggy. Reddit’s whole system is so annoying to work with as a developer.

      It annoyed me so much that I’m now thinking about making the app not for Reddit but for Lemmy. Because Reddit sucks. Hard. Fuck them.
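For what it’s worth, the usual client-side answer to 429s is exponential backoff. Here is a minimal sketch of that pattern; `RateLimited` and `with_backoff` are hypothetical names, not part of any Reddit API, and as the comment says, backoff alone doesn’t beat server-side bot detection:

```python
import time

# Sketch only: generic retry-with-backoff around any request function
# that raises RateLimited when it sees an HTTP 429 response.
class RateLimited(Exception):
    """Raised by the caller's request function on a 429 response."""

def with_backoff(request_fn, retries=4, base_delay=1.0, sleep=time.sleep):
    delay = base_delay
    for attempt in range(retries):
        try:
            return request_fn()
        except RateLimited:
            if attempt == retries - 1:
                raise          # out of retries, give up
            sleep(delay)       # wait before retrying
            delay *= 2         # exponential backoff: 1s, 2s, 4s, ...
```

The `sleep` parameter is injectable so the logic can be tested without actually waiting; real use would just call `with_backoff(my_request)`.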

      • manwithabox@lemmy.ml · 1 year ago

        I was trying to be positive, but after reading their announcement on GitHub, not so much anymore. Thank you for explaining in depth why it’s not possible to find a workaround.

        • Cyyy@lemmy.world · 1 year ago

          The biggest issue is that they detect third-party clients coded as website parsers on their server and just block you, and bypassing this doesn’t really work well because of the rate limiting.

          Example: I just sent three requests, first logging in and then asking for the recent posts of a sub… and already after those two requests I got rate-limited with a 429 error and couldn’t send any more requests.

          So even just requesting the recent posts in a sub is an issue (with a spoofed browser user agent). If you use a “legit” user agent it works better, but then Reddit knows exactly that you’re using a third-party client and can block or ban you whenever they feel like it. So it’s not really a good solution, because at any minute Reddit could hit the kill switch. It’s just not worth the time to develop an app if it gets killed off anyway.
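One way to see how close you are to the wall before a 429 lands is to read the rate-limit headers. Reddit’s OAuth API documents `X-Ratelimit-Used`, `X-Ratelimit-Remaining`, and `X-Ratelimit-Reset`; whether the plain website JSON endpoints send them too is an assumption. A minimal sketch of reading them from a response’s header mapping (assumed lowercase keys, as e.g. the `requests` library normalizes them):

```python
# Sketch only: pull Reddit-style rate-limit headers out of a header
# mapping. If an endpoint doesn't send these headers at all, every
# field comes back as None rather than raising.
def parse_rate_limit(headers):
    def num(name):
        value = headers.get(name)
        return float(value) if value is not None else None
    return {
        "used": num("x-ratelimit-used"),
        "remaining": num("x-ratelimit-remaining"),
        "reset_seconds": num("x-ratelimit-reset"),
    }
```

A client could check `remaining` after each response and pause for `reset_seconds` once it drops near zero, instead of blindly retrying.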

            • Cyyy@lemmy.world · 1 year ago (edited)

              Based on what I know, I would say nah. But maybe somewhere on the internet there’s a genius who can somehow get it to work stably enough… who knows.

              I just checked the announcement from libreddit, and it seems they used the same JSON endpoints I did for my project, so they probably ran into the same issues I did. And if they haven’t found a good solution yet (even after working with the API and endpoints way more than me)… dunno.

  • DigitalTraveler42@lemmy.world · 1 year ago (edited)

    See, I thought Beehaw.org was the Lemmy instance for news, as it’s supposed to be a well-moderated instance. Am I incorrect in that assumption?

    Also it would be nice if Beehaw’s mods approved my account so that I could use that instance for those purposes. I’ve been trying to get an account created with them for almost a month now.

    I’m like you, OP: my main focus on Reddit was staying up to date on current events and technology/science posts. The sort I generally used was “Top this Hour”, because that seemed to be the most reliable way to catch hourly news as it rolled in.

    Another thing that helped greatly was Reddit is Fun’s content filtering capabilities. Because who tf wants to read some bullshit from Fox News or other severely corrupted and biased news sources? The third-party app for Lemmy that lets me eliminate garbage sources from my feed is the one that wins me as a user, and I used RiF for as long as it’s been around, so they would be winning a loyal user.