

Same, I set it up a few years ago and both my partner and I have been using it since then with no issues at all; it’s completely replaced Google Photos for us.
We’ve also set up immich-frame and repurposed an old Google Nest Hub as a digital photo frame.








There’s a lot to cover here but I’ll try to touch on each point:
The key requirement is fast memory that can be addressed by your GPU, and ideally a lot of it - hence the insane cost of this hardware right now.
Remember that you need space for the model’s weights (think of these as its ‘knowledge base’) and the context window, which is the data the LLM needs to keep track of your current conversation with it (effectively its short-term memory).
With smaller pools of VRAM (8-16GB) you will have to compromise: either a more capable model that loses context quickly and starts hallucinating, or a less capable model that can maintain a session for a bit longer but is overall less ‘smart’.
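If you want to sanity-check whether a given model will fit, here’s a rough back-of-envelope sketch (assuming a standard transformer with grouped-query attention; the example numbers are only illustrative, roughly an 8B-class model at ~4-bit quantisation):

```python
# Rough back-of-envelope VRAM estimate: model weights + KV cache (context).
# All figures below are illustrative; swap in the numbers for your own model.

def weights_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Weights: parameter count x bytes per (quantised) parameter."""
    return n_params_billion * 1e9 * (bits_per_weight / 8) / 1024**3

def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 (K and V) x layers x context length x KV heads x head dim x bytes."""
    return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem / 1024**3

# Example: ~8B params at ~4.5 bits/weight, 32 layers, 8 KV heads,
# head_dim 128, 16k tokens of context, fp16 KV cache.
w = weights_gb(8, 4.5)
kv = kv_cache_gb(32, 8, 128, 16_384)
print(f"weights ~{w:.1f} GB + KV cache ~{kv:.1f} GB = ~{w + kv:.1f} GB")
```

That ignores runtime overhead (activation buffers, the server itself), so leave yourself a few GB of headroom on top.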
For software - there are a couple of options for running the LLM itself; llama.cpp is one of the more popular tools and is the one that I use. It has a web UI with the usual chat interface, and also exposes an API that you can plug other tools (e.g. opencode) into, depending on your use case.
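As a very minimal sketch of the API side (assuming llama-server is already running on its default port 8080 with a model loaded - the prompt and settings here are just placeholders), something like this works against its OpenAI-compatible endpoint:

```python
# Minimal example of talking to a running llama.cpp server (llama-server)
# over its OpenAI-compatible chat completions API.
# Assumes it is already serving a model on localhost:8080 (its default port).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise what a context window is."},
        ],
        "temperature": 0.7,  # placeholder setting, tune to taste
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Anything that can speak the OpenAI-style API can generally be pointed at that same endpoint, which is how those other tools hook in.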
In terms of hardware recommendations, at 20GB+ of VRAM you do have a bit more headroom compared to more consumer-grade GPUs, but to be honest the most cost-effective way to get a shitload of VRAM is likely not a dedicated GPU at all but a system built around a recent APU.
I got a Minisforum MS-S1 last year for exactly this purpose. It is based on AMD’s Strix Halo platform, which it shares with the Framework Desktop and a couple of other similar devices.
It has 128GB of unified RAM which can be divided between the GPU and CPU however you like, so there’s plenty of capacity for even fairly chunky models. It also uses a tiny amount of power compared to a more traditional system with a dedicated GPU, while giving really reasonable performance for most AI workloads - more than enough for use in a homelab.
For cloud rental - doable, but pricing is a factor, and of course this will not actually be running locally.
Usability - manage your expectations, but for a lot of use cases, and depending on the model you are running and the resources you throw at it, it can be comparable to ChatGPT, Gemini and the like - especially their older iterations.
But remember, you are not a Google or an Anthropic and do not have an infinite pool of compute to throw at your model, nor do you have access to the specific models they are using.