• 3 Posts
  • 128 Comments
Cake day: August 8th, 2023





  • Where I live, I would still need to pay for a VPN to use torrents. I’ve been banned from an ISP before for torrenting (thankfully, I had multiple ISPs available for me).

    At the moment, I just “pay” legally because I get a few “free” streaming plans from my mobile provider and ISP. Occasionally, I use a free streaming site if I really want to watch something that’s not available to me. Every once in a while, I try anonymous P2P such as Tribler or torrenting over I2P, but it’s still extremely slow, unfortunately. I’ve never used Usenet, but I think it’s about the same price as a VPN or seedbox would be?





  • It’s also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. For example, it could be possible to give an LLM a prompt like, “give me a list of climate activists, their addresses, and their employers” if it were trained on that data or were good at “browsing” on its own. That’s currently not possible due to the guardrails on most models, and I’m guessing they try to avoid training on personal data that’s public, but a government agency could make an LLM without those guardrails. The data might be public, but it would take a person quite a bit of work to track down, compared to the ease and efficiency of just asking an LLM.





  • AfD is far right. They are ethno-nationalists who believe only ethnic Germans belong in Germany. One of their leaders has defended the Nazi SS. They have discussed “remigrating” German citizens out of Germany. How do you compromise with people who would like to carry out an ethnic cleansing? Only forcibly relocate Muslims for now, and wait until next year to expel the Jews?

    Most far-right politicians do not debate or operate politically in good faith. IDK about the people who vote for them. I think it usually takes years of slow progress for people to move away from extremist positions, and it takes a change in their environment to start the process (a new social circle, life experiences, media consumption habits, etc.).


  • A lot of the “elites” (OpenAI board, Thiel, Andreessen, etc) are on the effective-accelerationism grift now. The idea is to disregard all negative effects of pursuing technological “progress,” because techno-capitalism will solve all problems. They support burning fossil fuels as fast as possible because that will enable “progress,” which will solve climate change (through geoengineering, presumably). I’ve seen some accelerationists write that it would be ok if AI destroys humanity, because it would be the next evolution of “intelligence.” I dunno if they’ve fallen for their own grift or not, but it’s obviously a very convenient belief for them.

    Effective accelerationism builds on accelerationism, a set of ideas popularized by Nick Land, who appears to be some kind of fascist.




  • We’re close to a peak using current NN architectures and methods. All this started with the introduction of the transformer architecture in 2017. Advances in architecture and methods have been fairly small and incremental since then. The advancements in performance have mostly come from throwing more data and compute at the models, and diminishing returns have been observed. GPT-3 cost something like $15 million to train. GPT-4 is a little better and cost something like $100 million to train. If the next model costs $1 billion to train, it will likely be only a little better.
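    A rough way to see those diminishing returns is to model loss as a power law in training spend, in the spirit of published scaling-law work. The constant and exponent below are invented for illustration (dollar cost is a crude proxy for compute), not fitted to any real model:

    ```python
    # Hypothetical power law relating training spend to model loss.
    # k and alpha are made-up illustrative values, not real measurements.
    def loss(spend_dollars: float, k: float = 10.0, alpha: float = 0.05) -> float:
        return k * spend_dollars ** -alpha

    # GPT-3-ish, GPT-4-ish, and a hypothetical $1B next model
    for spend in (15e6, 100e6, 1e9):
        print(f"${spend:,.0f} -> modeled loss {loss(spend):.2f}")
    ```

    Under any power law like this, each successive jump in spend buys a similar absolute drop in loss, so the improvement per dollar keeps shrinking: the “a lot more money for a little better” pattern.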


  • LLMs do sometimes hallucinate even when giving summaries, i.e., they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g., when “reading” large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates “facts” from its model instead of “browsing” and adding articles to its context.