• 0 Posts
  • 110 Comments
Joined 1 year ago
Cake day: August 2nd, 2023


  • Your description is how pre-llm chatbots work

    Not really. We just parallelized the computation and used other models to filter the training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections are tokenized, but it doesn’t change the underlying principles.

    Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.

    Yes, they need proper interaction, or you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it’s creating a new system to follow the old one, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen, it would need to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.

    Emergent properties are literally the only reason llms work at all.

    No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means they look more intelligible. That’s it. Garbage in, garbage out still applies, and making the model larger does not mean that garbage is gonna magically create new control loops in your code; it might increase precision, since you have more options to compare and weigh against, but it does not change the underlying system.


  • No, the queue will now add popular playlists to what you were listening to when you restart the app, if your previous queue was a generated one. I’m not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist and close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for your next songs in the queue. It doesn’t stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.



  • If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”

    It’s not that more is going on; it’s that it had such a large training set that these true vs. false statements are likely covered somewhere in its set, and the probabilities say it should assign true or false to the statement.

    And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labeling true or false. It’s saying the model is encoding or setting weights for the true and false values when that’s the majority of its data set. That’s basically it; you are reading too much into the paper.
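    Roughly, those experiments amount to something like the sketch below: fit a simple classifier on per-statement activation vectors and report the accuracy. This is not the paper’s code; the activations here are synthetic stand-ins with the signal baked in, purely so the snippet runs, whereas in the real setup they would come from the LLM.

    # Rough sketch of a truth-probing setup (made-up data, not the paper's code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-ins for hidden activations of 200 labeled statements. In a real
    # experiment these vectors come from the LLM; here a "truth direction" is
    # baked in synthetically just so the script runs end to end.
    labels = rng.integers(0, 2, size=200)    # 1 = true, 0 = false
    activations = rng.normal(size=(200, 64))
    activations[:, 0] += 2.0 * labels        # the signal the probe can find

    X_train, X_test, y_train, y_test = train_test_split(
        activations, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))
    # A high score only shows the labels are linearly readable from the vectors;
    # it says nothing by itself about how the model arrived at them.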


  • AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates on predefined choices or operations is still AI, even if it’s not a neural network and looks like classical programming. The computer enemies in games are AI: they mimic an intelligent player artificially. The computer opponent in Pong is also AI, something as simple as the follow-the-ball rule sketched at the end of this comment.

    Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it’s not an LLM, and there are different categories of AI, of which an LLM is its own.
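    Just to underline how low the bar for “AI” is in that sense, here’s a hypothetical rule-based Pong opponent in Python; the names and numbers are made up, but a few predefined rules like this are the whole opponent.

    # Hypothetical rule-based Pong opponent: no learning, no neural net,
    # just a predefined rule that mimics an intelligent player.
    PADDLE_SPEED = 4  # made-up speed constant

    def update_ai_paddle(paddle_y: float, ball_y: float) -> float:
        """Move the paddle one step toward the ball; call once per frame."""
        if ball_y > paddle_y:
            return paddle_y + PADDLE_SPEED
        if ball_y < paddle_y:
            return paddle_y - PADDLE_SPEED
        return paddle_y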


  • Do you understand how they work or not? First, I take all human text online. Next, I rank how likely those words come after one another. Last, I write a loop that takes the next most probable word until the end-of-line character is thought to be most probable (roughly the loop sketched at the end of this comment). There you go; that’s essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word, but at the core this is all they do.

    It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.

    I.e. the only duck it walks and quacks like is autocomplete; it does not have agency or any other “emergent” features. For something to even have an emergent property, the system needs to have feedback from itself, which an LLM does not.
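    Here’s that loop as a toy Python sketch. Everything in it is made up for illustration: a word-level count table stands in for the trained model, and real LLMs work on subword tokens with a neural scorer, but the shape of the loop is the point.

    # Toy sketch of the LLM generation loop described above (illustrative only).
    from collections import Counter, defaultdict

    corpus = "the cat sat . the cat sat . the dog ran .".split()
    END = "."

    # "Rank how likely those words come after another": count continuations.
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def most_probable_next(word):
        # Stand-in for the model: pick the highest-scoring continuation.
        return next_counts[word].most_common(1)[0][0]

    def generate(prompt_word, max_len=20):
        out = [prompt_word]
        # Keep taking the most probable next word until the end marker wins out.
        while out[-1] != END and len(out) < max_len:
            out.append(most_probable_next(out[-1]))
        return " ".join(out)

    print(generate("the"))  # -> "the cat sat ."
    # Note the count table is only ever read here, never updated: the loop has
    # no feedback into its own weights.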



  • A progressive society does not need to retroactively change history, it can accept the imperfections of the past in the knowledge that we’ve already changed.

    How is pointing out the heinous shit changing history? If anything, it’s accepting the imperfections of the past and acknowledging that we have changed, by calling out the callousness of its prior implementation and pointing out what to avoid… you are literally contradicting yourself.







  • It would work the way the internet worked before google and facebook monetised monitoring everyone to sell ads

    You mean the ads on the side of the screen that told you to play some interactive game in them so they could install malware? Ads of some form were always a thing on the internet: first in forum posts, then website ads, then Google started essentially buying ad space on other websites and paying you for it. I hate Google, but when that first came out, at least most ads weren’t filled with malware.


  • The problem isn’t the funding, it’s people’s reactions. Why slave away for someone else’s company, even if it provides utility for your society, if you can survive and even thrive creatively on UBI? What happens then? Do we get worse class warfare than we have now?

    What happens when people realize that most of what can be automated away at current levels are executive and CEO positions? When they leave with golden parachutes, are you gonna ask for UBI for them? No? Then we have set a legal precedent for those automated-away jobs not to receive UBI, or you have just facilitated more capitalistic greed for those executives.

    Is UBI set up on a global scale? No? Then how do you keep dual-citizenship individuals from collecting UBI while working another job remotely from the second nation they are registered with, creating inefficiencies in the program which could make it a target for regressive policies? Think of Republicans constantly saying illegals are stealing our benefits so we should block them and cut funding to the programs; how do we defend against those attacks?

    I mean, I can keep going, but the problem is how do we implement this, without everything being automated, and create a fair and equitable system for all involved? While it would be nice to just throw money at everyone, you need to take into account individuals’ reactions to this. We aren’t in a vacuum, and yet we isolate ourselves in echo chambers as if our perspectives are the only ones out there. We lose nuance by doing this, and then get aggravated that something isn’t done, because the cause of that nuance isn’t even on our radar from lacking communication with people who have differing views and opinions.


  • I would say it is sustainable IF it’s rolled out properly. If you are only just barely given enough to survive, you’re not going to take risks for creativity’s sake, and you’ll end up going back to a grind of some sort to get slightly more sustainable odds.

    The real big problem is how we deal with the jobs that can’t be automated. How do people who spent decades training to specialize in something so they can survive cope with others who can now thrive without it? Do we see massive unemployment at critical organizations/companies as workers decide to indulge in their passions on UBI instead of slaving away for a sustainable living? Do we need to wait until all jobs can be automated before this is even possible, or does the society we have today collapse? These are some of the actual difficulties with rolling out UBI, and a proper solution has to address them to be sustainable.

    As it sits, I don’t know if we’re even at a level to do much. Most AI would be good for, say, being a CEO or high-level executive, looking at trends and essentially fitting a curve to the data points those trends are creating. But how would people react to CEOs getting obsoleted and collecting UBI with their golden parachutes still? Probably pretty damn fucking badly, calling for UBI to be abolished or some shit, and you wouldn’t see much resistance, since the shareholders can eventually reap the profits once we’ve created precedent for no UBI related to jobs that AI/automation took over. So you need protections there first, but our governments are reactive, not proactive.

    Sure, maybe an authoritarian regime could enforce it, but now you have to hope you have a benevolent dictator, which is pretty much an oxymoron, and they would need the foresight to leave democracy behind in their absence. Not to mention that force would need to be a global government, or other economies still based on capitalist ideals without UBI are going to take advantage of their position, leading to unsustainability and eventual collapse.

    We have a lot of fucking work ahead of us, but if you were to compare hunter-gatherers to today’s societies and advancements, it would have seemed almost impossible too. I don’t expect UBI or full-on automation to make it into our societies without some sort of societal collapse first that allows us to rebuild with the failures of our current systems clearly documented. I think we are many generations off from that rebuilt society, even if we bear witness to our society’s collapse in the upcoming generations. But I agree it would signify a huge advancement in humanity, and probably give us the foundations to truly become a Type 1 civilization and set the stage for possible advancement to a Type 2 civilization. But we are not there yet, unfortunately.


  • I actually don’t agree that it is unsustainable, I was just pointing out the logical fallacy. It’s a weird thing to say that “paying a person to do a newly unnecessary job is unsustainable”, especially in the context of AI. It doesn’t make sense to complain about something when the only proposed solution is doing the exact same thing in a more roundabout way.

    As the other person was getting at, it’s not a logical fallacy. One is wasted potential (workers doing jobs that should be automated away); the other is capitalizing on that newfound potential by giving them the means to survive, maybe even thrive, if we actually get UBI right. One is unsustainable because you are paying to keep up appearances for no positive benefit; the other frees up a labor market to do creative and inventive tasks that can further humanity and provide even more benefit.