• 0 Posts
  • 36 Comments
Joined 3 years ago
Cake day: October 15th, 2023

  • There’s a lot to cover here, but I’ll try to touch on each point:

    The key requirement is fast memory that can be addressed by your GPU, and ideally a lot of it - hence the insane cost of this hardware right now.

    Remember that you need space for the model’s weights (think of this as its ‘knowledge base’) and the context window, which is basically the data the LLM needs to keep track of your current conversation with it (effectively its short-term memory).

    With smaller pools of VRAM (8-16GB) you will have to compromise: either a more capable model that loses context quickly and starts hallucinating, or a less capable model that can maintain a session for a bit longer but is less ‘smart’ overall.
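    To make that concrete, here’s a rough back-of-envelope sketch in Python of where the memory goes. The sizing formula is the standard weights-plus-KV-cache estimate, but the model dimensions below are assumptions for a hypothetical 8B model, so treat the numbers as illustrative rather than exact:

    ```python
    # Rough VRAM budget for a local LLM (illustrative, not exact).

    def weights_gb(params_billions: float, bits_per_weight: float) -> float:
        """Weights: parameter count times bits per weight (quantisation level)."""
        return params_billions * bits_per_weight / 8  # billions of params -> GB

    def kv_cache_gb(n_layers: int, ctx_len: int, n_kv_heads: int,
                    head_dim: int, bytes_per_elem: int = 2) -> float:
        """Context window (KV cache): 2 (K and V) x layers x tokens x KV heads x head dim."""
        return 2 * n_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem / 1e9

    # Hypothetical 8B model at 4-bit quantisation with an 8k context window
    # (layer/head counts assumed - check your model card for the real values).
    print(f"weights:  {weights_gb(8, 4):.1f} GB")
    print(f"KV cache: {kv_cache_gb(32, 8192, 8, 128):.1f} GB")
    ```

    Rerun the numbers with a 32k context or a bigger model and you can see how quickly an 8-16GB card runs out of room.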

    For software - there are a couple of options for running the LLM itself; llama.cpp is one of the more popular tools and is the one that I use. It has a web UI with the usual chat interface, and also exposes an API that you can plug other tools (e.g. opencode) into, depending on your use case.
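    As a minimal sketch of what plugging into that API looks like - this assumes you have already started llama-server locally on its default port (8080) with a model of your choice, and uses the OpenAI-compatible endpoint the server exposes:

    ```python
    # Minimal chat request against a locally running llama.cpp server.
    # Assumes something like `llama-server -m your-model.gguf --ctx-size 8192`
    # is already running; the model path and settings are up to you.
    import json
    import urllib.request

    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Why does VRAM matter for local LLMs?"},
        ],
        "temperature": 0.7,
    }

    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # llama-server default port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)

    print(reply["choices"][0]["message"]["content"])
    ```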

    In terms of hardware recommendations, at 20GB+ of VRAM you do have a bit more headroom compared to consumer-grade GPUs, but to be honest the most cost-effective way to get a shitload of VRAM is likely not a dedicated GPU at all, but a system based around a recent APU.

    I got a Minisforum MS-S1 last year for exactly this purpose. It is based on AMD’s Strix Halo platform which it has in common with the Framework Desktop and a couple of other similar devices.

    It has 128GB of unified RAM which can be divided between the GPU and CPU however you like, so there’s plenty of capacity for even fairly chunky models. It also uses a fraction of the power of a more traditional system with a dedicated GPU, while still giving really reasonable performance for most AI workloads - more than enough for use in a homelab.

    For cloud rental - doable, but pricing is a factor, and of course this will not actually be running locally.

    Usability - manage your expectations, but depending on the model you are running and the resources you throw at it, for a lot of use cases it can be comparable with older iterations of ChatGPT, Gemini, etc.

    But remember, you are not a Google or an Anthropic and do not have an infinite pool of compute to throw at your model, nor do you have access to the specific models they are using.




  • I was born in ’89, so I remember a good portion of the 90s. It was a much simpler time, but obviously we tend to romanticise the fun memories and quietly ignore how vastly more inconvenient daily life was.

    Mobile phones were not really a thing yet so getting in touch with your friends required a combination of patience and sheer luck.

    The internet was a different place entirely and was experienced in 30-minute chunks, just long enough to download a song or two before being kicked off for tying up the landline.

    Daily entertainment was 4, maybe 5 analogue TV channels, plus a collection of VHS tapes that were slowly degrading from constant rewatching.

    Every piece of life admin that you would normally do online today was instead done with pen and paper.

    Honestly, I’m amazed we ever got anything done.




  • I have witnessed companies make this exact mistake before - they have a legacy system written in $LanguageA that they cannot find developers to maintain, that they believe is badly written, or that does not support some new feature they want to implement (or some combination of the three) - and they decide to solve this by taking the existing codebase and porting/transpiling it to $LanguageB (which is more modern, more performant, easier to hire developers for, etc.) - without actually rewriting or rearchitecting anything.

    What they are actually doing is substituting one kind of tech debt for another. The existing code that was poorly written and/or not well understood is now just bad code written in a different language. Fixing bugs or implementing new features takes just as long, if not longer, once you account for the idiosyncrasies of how the code was ported.

    And now this is being done by AI with even less oversight than usual? Recipe for a maintenance disaster.


  • As others have said, 100% a leak.

    I would advise standing on a chair or stepladder underneath the stain and checking whether the ceiling is still level. If you see an obvious deformation around the stain, it is being caused by water pooling on top of the ceiling plasterboard. In which case, once the leak is sorted, you will likely need to drain the pooled water, cut out the damaged section, replace it, then replaster and repaint.

    We had exactly the same issue in our last house. It was in a difficult-to-see spot hidden behind our kitchen cabinets. We only realised the severity of the issue when the ceiling boards gave way and fell on my head.



  • I’ve switched both my laptop and desktop over to Linux (Bazzite and Fedora respectively) in the last 6 months.

    The last time I tried to daily-drive Linux (over a decade ago) I ended up switching back eventually, but this time I really don’t think I’ll need to. All of the games I play most often work perfectly, the dev tooling is even better than it is on Windows, and hardware compatibility has been completely flawless.

    Gone are the days of having to hunt down obscure Linux drivers for your touchpad or webcam. Everything just works out of the box.




  • I have a Model 3 at the moment. I’ve had it for almost 5 years and it’s generally been great - cheap to run, quiet and comfortable on longer trips but still fun to drive on back roads.

    Recently it had its first major breakdown, and although Tesla service did manage to take care of it, it’s got me browsing for new EVs - but now, buying a Tesla is not the foregone conclusion it once might have been.

    First, they have been making some truly stupid design choices in their latest facelifts (deleting the indicator stalks and gear selector).

    Second, their CEO has now gone completely mask-off fascist.

    Third - after a few years of the competition catching up, there are now genuine alternatives from other marques that are just as good as, if not better than, Tesla’s offerings.

    I think my next car will likely be a Polestar 2.