he/him

Alts (mostly for modding)

@sga013@lemmy.world

(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)

  • 10 Posts
  • 210 Comments
Joined 11 months ago
Cake day: March 14th, 2025


  • In my general socialist dream, all boring things should be non-profit - education, healthcare, food (the agriculture side, not the end product), water, electricity, land, communications, transport (railways, buses), etc… Basically, things where two players doing the same thing will likely not breed creativity; trying to beat the other would only mean advertising, and maintaining profits would require cost cutting. these things also happen to be necessities that everyone has, and so must be provided by the govt and covered by taxes.

    as for examples, at least in the us (and a lot of the world), most governments do not operate telecoms (the cellular service), even though it is a boring thing. in india (where i live), there are bsnl and mtnl, which provide a basic service that keeps the for-profit companies on their toes (technically they are not non-profits, at least not registered ones, but they basically do not aim to make profits). another example is agriculture - it is often subsidised, but it is still for profit. over here, we have large cooperatives for the dairy and agri industry (they are also not non-profits, and they actually do make profits - they just invest them back into the cooperative), which not only helps farmers have stable incomes, but also lets them fall back on their saved profits. railways are often partly (or completely) govt owned (might not be the case in the land of the free, but i know of amtrak), so that is already a well-known example - they try to use all profits to maintain rail cars and tracks.

    beyond these, things can be for profit (art or media, for example), where two separate offerings can coexist, do not (necessarily) eat into each other's business, and often serve leisure rather than necessity.






  • the reason for them not appearing is that xmpp is a largely relaxed platform, that is, implementations are not all equally strict. some may implement certain extensions, others may not. encryption (omemo) is a common one that most implement, but the client (the user app, like gajim) may or may not implement it correctly, or it may have a fallback (the first communication between 2 clients may not be encrypted), plus other problems with the encryption being flaky (it is not perfect forward secrecy, it is a bit prone to failure (messages unable to decrypt), etc.), hence it is not recommended much.
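a toy sketch of the fallback behaviour described above (this is not a real xmpp library - the function and the key-bundle representation are invented for illustration): a client can only encrypt with omemo once it has fetched the peer's published device keys, and a lenient client sends plaintext until then.

```python
# Illustrative only: why a first message between two clients may go out
# unencrypted. "peer_device_keys" is a made-up stand-in for the OMEMO key
# bundles a client fetches from the peer's server.

def pick_encryption(peer_device_keys):
    """Return the scheme a lenient client might pick for the next message."""
    if peer_device_keys:
        return "omemo"
    # Lenient fallback; a strict client would refuse to send instead.
    return "plaintext"

print(pick_encryption([]))            # no bundles fetched yet
print(pick_encryption(["bundle-1"]))  # keys known, can encrypt
```

strict clients avoid exactly this fallback, which is one reason different implementations interoperate so unevenly.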



  • as someone who is doing some kind of science - titles are a lot fancier and designed for absurdity. Often, the decision to perform something is a lot more logical than picking random animals to test on. for example, some of the people in their group may already have been studying that specific frog line for some reason (maybe not for its gut at all); say they observed that these frogs live a long life, then decided to find out why, and may have come to the conclusion that it is this gut bacterium. or maybe they knew of this bacterium already, and found out where they could source more of it.

    but sometimes, it is totally random luck - like you accidentally messed up an experiment and spilled some unrelated gut juice from a frog from a separate experiment, and it just so happened to work, so now you study it closely.

    I have absolutely no idea what happened in this one, and i am not a biologist, so i do not know the usual way, but it is usually one of these.



  • sga@piefed.social to Linux Phones@lemmy.ml · Any Kiwix client?
    7 days ago

    You can install a browser extension (i think it is just called kiwix) which can load offline zim files. the problem is that the ux is very bad (you need to load a zim file each time, or manually change it). on desktop, my answer was to manually unpack all zim files (using zimutils), arrange them in a controlled dir structure, recompress them into a mountable file format, and separately maintain a list of all the files inside. while using it, i have something hand-rolled to mount the archive, select a suitable file, and open it in a browser - yes, it is a lot of work, but i do kinda have an offline search engine now.
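the "maintain a list of all files inside, then select a suitable one" part of the workflow above can be sketched like this (a minimal sketch - the titles, paths, and functions are all invented; a real setup would walk the directory the zim files were unpacked into):

```python
# Toy offline-search index: map article titles to file paths, then do a
# naive substring search over the titles. Sample data is made up.

def build_index(entries):
    """entries: iterable of (title, path) pairs -> lowercase-title -> path."""
    return {title.lower(): path for title, path in entries}

def search(index, query):
    """Return paths whose title contains the query (case-insensitive)."""
    q = query.lower()
    return [path for title, path in index.items() if q in title]

index = build_index([
    ("Python (programming language)", "wikipedia/A/Python.html"),
    ("Monty Python", "wikipedia/A/Monty_Python.html"),
    ("Linux kernel", "wikipedia/A/Linux_kernel.html"),
])

print(search(index, "python"))
```

the selected path could then be handed to a browser (e.g. via `xdg-open` on linux) to get the "open in browser" step.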






  • i rarely use it, mostly to do sentiment/grammar analysis for some formal stuff/legalese. I rarely use llms in general (1 or 2 times a month) (i just do not have a use case). As for how good they are - tiny models are not good in general, because they do not have enough capacity to store knowledge, so my use case is often purely language processing. though i have previously used one in a work demo to generate structured data from unstructured data. basically, if you provide the info, they can perform well (so you could potentially build something to fetch web search results, feed them into the context, and use that - many such projects are available, basically something like perplexity but open).
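the "feed search results into the context" idea boils down to prompt assembly. a minimal sketch (the snippets and question are invented; the message structure is the openai-style chat format, which llama.cpp's server also accepts):

```python
# Build a chat request where retrieved snippets are prepended as context,
# so even a small model only has to do language processing, not recall.

def build_messages(question, snippets):
    """Assemble openai-style chat messages with numbered context snippets."""
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return [
        {"role": "system",
         "content": "Answer using only the provided context.\n\n" + context},
        {"role": "user", "content": question},
    ]

msgs = build_messages(
    "when was the zim format introduced?",
    ["ZIM is an open file format for offline web content.",
     "The openZIM project maintains the format."],
)
print(msgs[0]["content"])
```

the resulting list would be posted to a chat-completions endpoint; the numbered snippets let the model cite which result it used.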




  • further clarification - ollama is a distribution of llama cpp (and it is a bit commercial in some sense). basically, in ye olde days of 2023-24 (decades ago in llm space, as they say), llama cpp was a server/cli-only thing. it would provide output in the terminal (that is how i used to use it back then), or via an api (an openai-compatible one, so if you used openai stuff before, you could easily swap over). many people wanted a gui (a web-based chat interface), so ollama back then was a wrapper around llama cpp (there were several others, but ollama was relatively mainstream).

    then, as time progressed, ollama "allegedly enshittified", while llama cpp kept getting features (a web ui, the ability to swap models at runtime (back then that required a separate llama-swap), etc.). the llama cpp stack is also a bit "lighter" (not really - they are both web tech, so as light as js can get) and first party(ish - the interface was done by the community, but it is in the same git repo), so more and more local llama folk kept switching to a llama-cpp-only setup (you could use llama cpp with ollama, but at that point ollama was just a web ui, and not a great one; some people preferred comfyui, etc.). some old timers (like me) never even tried ollama, as plain llama.cpp was sufficient for us.

    as the above commenter said, you can do very fancy things with llama cpp. the best thing about it is that it works with both cpu and gpu - you can use both simultaneously, as opposed to vllm or transformers, where you almost always need a gpu. this simultaneous use is called offloading: some of the layers are kept in system memory instead of vram, hence the vram-poor population used ram (this also kinda led to ram price inflation, but do not blame llama cpp for it, blame people). you can do some of this with ollama (as it is a wrapper around llama cpp), but that requires the ollama folks to keep their fork up to date with the parent, as well as expose the said features in the ui.
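the offloading split can be sketched with back-of-the-envelope arithmetic (all numbers here are invented for illustration; the result is the kind of value you would pass to llama.cpp's `-ngl`/`--n-gpu-layers` flag):

```python
# How many transformer layers fit in VRAM, with the rest spilling to
# system RAM? Layer sizes and VRAM amounts below are made-up examples.

def gpu_layers(n_layers, layer_size_gib, vram_gib, reserve_gib=1.0):
    """Layers that fit on the GPU, keeping some VRAM in reserve for the
    KV cache and other buffers."""
    usable = max(vram_gib - reserve_gib, 0.0)
    return min(n_layers, int(usable // layer_size_gib))

# e.g. a 32-layer model at ~0.4 GiB per layer on an 8 GiB card:
n = gpu_layers(n_layers=32, layer_size_gib=0.4, vram_gib=8.0)
print(n)  # layers for the GPU; the remaining 32 - n stay in system RAM
```

with a big enough card the function just returns all layers, i.e. a fully gpu-resident model.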