Formerly /u/Zalack on Reddit.
Also Zalack@kbin.social
I think it depends on the project. Some projects are the author’s personal tools that they’ve put online in the off-chance it will be useful to others, not projects they are really trying to promote.
I don’t think we should expect authors of repos to go too far out of their way in those cases, as the alternative would just be not to publish them at all.
Yeah, actually moderating an online space with even modest activity is fucking hard and takes a shitton of time.
I think a lot of people underestimate the effort involved and quickly lose interest once it becomes apparent.
That’s a really interesting perspective I don’t think I’ve seen before. Thanks for posting.
We’ll always DRR DRR !
Self-driving cars could actually be kind of a good stepping stone to better public transit while making more efficient use of existing roadways. You hit a button to request a car, it drives you wherever you need to go, and then gets tasked to pick up the next person. Where you used to need 10 cars for 10 people, you now need one.
Why would you assume consciousness is a fundamental force rather than an emergent property of complex systems built on the forces?
More good options are always a good thing.
Atlas Nodded
Thatsthejoke.jpeg.zip
In many cases it should be fine to point them all at the same server. You’ll just need to make sure there aren’t any collisions between schema/table names.
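As a sketch of what I mean (using SQLite in-memory for illustration, with made-up app names; on something like Postgres you’d more likely give each app its own schema or database instead of a prefix):

```python
import sqlite3

# Hypothetical example: two apps sharing one database server.
# Prefixing table names (or, on Postgres/MySQL, using separate
# schemas/databases) keeps the apps from colliding.
conn = sqlite3.connect(":memory:")

# App A has a "users" table, prefixed with "appa_".
conn.execute("CREATE TABLE appa_users (id INTEGER PRIMARY KEY, name TEXT)")
# App B also wants a "users" table; the prefix avoids a collision.
conn.execute("CREATE TABLE appb_users (id INTEGER PRIMARY KEY, email TEXT)")

conn.execute("INSERT INTO appa_users (name) VALUES ('alice')")
conn.execute("INSERT INTO appb_users (email) VALUES ('bob@example.com')")

# Both apps' tables coexist on the same server without clobbering each other.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```

If the apps support configuring a schema or table prefix, that’s usually all it takes.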
Man, I really think you should either pay up, stop blocking ads, or use a free, non-ad-supported alternative.
Sync is made by a single dev who uses it as his main source of income. It’s not made by a corporation. Taking the fruits of someone’s labor, that they have priced to make it worth their time, feels kinda shitty to me.
If you really feel it’s so much better than the alternatives that you won’t even use them, then pay what the person making it feels they need to keep making it.
Take me HOOOAAAAAAMMMMME
I don’t know. This would dovetail well with a bunch of studies that have found verbal and physical abuse of retail workers at an all-time high since the pandemic. Similar studies have found the same thing for road rage.
There has always been some fraction of poorly behaved people, but that fraction seems to have become larger since the pandemic, whatever the actual mechanism that caused it is.
Federation isn’t opt-in though. It would be VERY easy to spin up a bunch of instances with millions or billions of fake communities and use them to DDoS a server’s search function.
Searching current active subscriptions helps mitigate that vector a little.
While that’s true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.
I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and – maybe more importantly – start layering specialized models on top of each other that handle specific tasks, then hand the result back to another model, creating feedback loops. I’m imagining a neural network that is trained on something extremely abstract, like figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.
Could something like that become conscious without realizing it’s “communicating” with us? The program executing the LLM might reflexively process data without any concept that it’s text, but still be emergently complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn’t realize the data represents a link to other conscious beings.
As a metaphor, you could teach a very smart dog how to respond to certain basic arithmetic problems. They would get stuff wrong the moment you prompted them to do something outside their training, and they wouldn’t understand they were doing math even when they got it “right”, but they would still be sentient, if not sapient, despite that.
It’s the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.
But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it’s executing a program, the same way we aren’t consciously aware of the chemical reactions our brain is executing to make us think.
I don’t believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven’t started to be heavily layered and interconnected the way I think they’ll end up.
At the very least it makes for a fun Sci-fi premise.
Lol, Texas and Florida are doing a good enough job of knocking themselves down without help from me.
Except in a true free market, zoning laws wouldn’t keep affordable, high-density housing from being constructed to artificially boost housing prices.
Other than that I agree with you.
I agree with the other poster that you need to define what you even mean when you say free will. IMO, strict determinism is not incompatible with free will. It only provides the mechanism. I posted this in another thread where this came up:
The implications of quantum mechanics just reframes what it means to not have free will.
In classical physics, given the exact same setup you make the exact same choice every time.
In Quantum mechanics, given the same exact setup, you make the same choice some percentage of the time.
One is you being an automaton while the other is you being a flipped coin. Neither of those really feel like free will.
Except.
We are looking at this through an implied assumption that the brain is some mechanism, separate from “us”, which we are forced to think “through”. That the mechanisms of the brain are somehow distorting or restricting what the underlying self can do.
But there is no deeper “self”. We are the brain. We are the chemical cascade bouncing around through the neurons. We are the kinetic billiard balls of classical physics and the probability curves of quantum mechanics. It doesn’t matter if the universe is deterministic and we would always have the same response to the same input or if it’s statistical and we just have a baked “likelihood” of that response.
The way we respond, or the biases that inform that likelihood, is still us making a choice, because we are that underlying mechanism. Whether it’s deterministic or not is just an implementation detail of free will, not a counterargument.
Also: https://youtu.be/PmLH_M2B-bM?si=jvJ5UCFD9dlpZfc7