• 0 Posts
  • 205 Comments
Joined 4 months ago
Cake day: March 8th, 2024

  • I guess that depends on the use case and how frequently both machines are running simultaneously. Like I said, that reasoning makes a lot of sense if you have a bunch of users coming and going, but the OP is saying it’s two instances at most, so… I don’t know if the math makes virtualization more efficient. It’d probably be more efficient per dollar if the server is constantly rendering something in the background and you’re only sapping whatever performance you need to run games when you’re playing.

    But the physical space thing is debatable, I think. This sounds like a chonker of a setup either way, and nothing stops you from stacking or rack-mounting two PCs. Plus, if that’s the concern, you can go with very space-efficient alternatives, including gaming laptops. I’ve done that before for exactly that reason.

    I suppose that’s why PC building as a hobbyist is fun: there are a lot of balance points, and you can tweak a lot of knobs to trade off price/performance/power consumption/whatever else.


  • OK, yeah, that makes sense. And it IS pretty unique to have a multi-GPU system available at home but just idling when not at work. I think I’d still try to build a standalone second machine for that second user, though. You can then focus on making the big boy accessible from wherever you want to use it for gaming, which seems like a much more manageable, much less finicky challenge. That second computer would probably end up being relatively inexpensive, since it only has to match the average use case of half the big server thing, and it’d definitely be much less of a hassle. I’ve even had a gaming laptop serve that kind of purpose: I needed a portable workstation with a GPU anyway, so it could double as a desktop replacement for gaming with someone else at home. Of course, that depends on your needs.

    And in that scenario you could also just run all that LLM/SD stuff in the background and make it accessible across your network. I think that’s pretty trivial whether it’s inside a VM or running directly in the same environment as everything else as a background process. Trivial compared to a fully virtualized gaming computer sharing a pool of GPUs, anyway.
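
    Making that network-accessible really is the easy part. As a minimal sketch, assuming something like an Ollama server running on the GPU box and exposed to the LAN (the address and model name here are made up):

    ```python
    # Minimal sketch: query an LLM server running on the home server
    # from any other machine on the LAN. Assumes an Ollama-style API
    # listening on the network; the host and model are hypothetical.
    import json
    import urllib.request

    SERVER = "http://192.168.1.50:11434"  # the GPU box's LAN address

    payload = json.dumps({
        "model": "llama3",
        "prompt": "Why is GPU passthrough finicky?",
        "stream": False,  # one JSON response instead of a stream
    }).encode()

    req = urllib.request.Request(
        f"{SERVER}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```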

    Feel free to tell us where you land; it certainly seems like a fun, quirky setup either way.


  • Yeah, but if you’re this deep into the self-hosting rabbit hole, what circumstances lead to having an extra GPU lying around without an extra everything else, even if it’s relatively underpowered? You’ll probably be able to upgrade it later by recycling whatever is in your nice PC next time you upgrade something.

    At this point most of my household is running some Frankenstein of phased-out parts just to justify my main build. It’s a bit of a problem, actually.



  • Alright, alright, just because I got myself excited. Top three gaming laptops, rated for sheer cool factor with no regard for practicality or value for money, in no particular order:

    1- MSI GS65. It could have been the Razer Blade, which is the OG, but the GS65 was legitimately the best of that first batch of thin-and-light gaming laptops that looked classy without looking tacky. It had a 1070 in it, could run every contemporary game just fine, and made you look downright stylish working at a Starbucks. So cool.

    2- ASUS ROG Flow Z series. ASUS put a dedicated GPU. In a tablet. Like, you can get up to a 4070 in one of these. It’s fat, it’s clunky, it’s underpowered for the hardware, it’s heavy, it sounds like the speaker in your first smartphone… but guys, a 4070 in a tablet, are you kidding me? How cool is that?

    3- Framework Laptop 16. It’s a modular laptop with a dedicated GPU module and a bunch of random configuration options. Gaming laptop lego. Again, how cool is that?


  • I love both. And handhelds. And consoles.

    I just like videogames and things that can run videogames. Videogame tech is cool.

    I genuinely don’t get why people have such a grudge against gaming laptops. It’s like they got stuck regurgitating talking points from the mid-2000s. There have been so many super cool gaming laptops in the past couple of decades. Big, chonky powerhouses, sleek stealth workhorses, quirky nonsense builds… It’s awesome.



  • Before I had to try twice for Fedi reasons, I was mostly pushing it for the joke.

    But honestly, this is so on brand for MS. They came up with a superficially marketable idea, botched the execution, then botched the marketing even harder. Then Apple came up with the same feature and everybody liked it.

    The idea that this is them playing the long game is hilarious. Not only is that not how big software companies work, it is definitely not how MS works. People just want to sound worldly and cynical and instead come across as paranoid and delusional. The idea that everybody working on this knew it sucked and they shipped it anyway is extremely plausible.

    Can they execute? Sure! But can they also get stuck failing to push back on a bad idea until they end up shipping something nobody likes? Often, objectively. And almost always subjectively, because they also consistently suck at branding their stuff, both the good and the bad.



  • OK, but why?

    Well, for fun and as a cool hobby project, I get that. That is enough to justify it, like any other crazy hobbyist project. Don’t let me stop you.

    But in the spirit of practicality and speaking hypothetically: Why set it up that way?

    For self-hosting, why not build a few standalone machines and run off those instead? The whole reason to do this at large scale is optimizing resources, so you can assign a smaller pool of hardware to users as they need it, right? For a home set of two or three users, you’d probably notice the fluctuations in performance caused by sharing resources on the gaming VMs, and it would cost you the same or more than building a couple of reasonable gaming systems and a home server/NAS for the rest. Way less, I bet, if you’re smart about upgrades and hand-me-downs.
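
    Back-of-the-envelope (every number here is made up, just to show the shape of the argument), pooling only wins when users rarely overlap, which isn’t how a household works:

    ```python
    # Back-of-the-envelope sketch; all figures are hypothetical.
    # Pooling saves hardware only when users rarely overlap. In a
    # house, everyone games in the same evening hours, so you end
    # up provisioning a GPU per player anyway.
    import math

    def pooled_gpus(users: int, peak_overlap: float) -> int:
        """GPUs to provision so simultaneous players each get one."""
        return max(1, math.ceil(users * peak_overlap))

    # (users, fraction of them active at the same time: a guess)
    for users, overlap in ((2, 1.0), (3, 1.0), (100, 0.3)):
        print(f"{users:>3} users at {overlap:.0%} peak overlap: "
              f"{pooled_gpus(users, overlap)} pooled vs {users} dedicated GPUs")
    ```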



  • I don’t think that’s correct. Recall will not draw any data from any app you don’t actively display onscreen; in fact, it will not draw any data you don’t specifically display on screen. Apple’s equivalent will know about data that is stored in applications whether you open them or not, as it’s been explained, but it will work with specific applications drawing from specific data (and it does also look at your screen, although it’s not clear whether it does that constantly or on demand).

    Just to quote the current Apple Intelligence landing page, posted by Apple itself as promo material:

    Apple Intelligence empowers Siri with onscreen awareness, so it can understand and take action with things on your screen. If a friend texts you their new address, you can say “Add this address to their contact card,” and Siri will take care of it.

    Awareness of your personal context enables Siri to help you in ways that are unique to you. Can’t remember if a friend shared that recipe with you in a note, a text, or an email? Need your passport number while booking a flight? Siri can use its knowledge of the information on your device to help find what you’re looking for, without compromising your privacy.

    Seamlessly take action in and across apps with Siri. You can make a request like “Send the email I drafted to April and Lilly” and Siri knows which email you’re referencing and which app it’s in. And Siri can take actions across apps, so after you ask Siri to enhance a photo for you by saying “Make this photo pop,” you can ask Siri to drop it in a specific note in the Notes app — without lifting a finger.

    That sure sounds to me like Siri now looks at your screen, logs your past activity (or at least searches through pre-existing system logs of your activity), and has access to and processes all your information.

    Again, Recall and “AppleI” will draw different sets of data, but they are both drawing new data at the system level, and they’re both making context inferences on your data. Sure, the process is different, and they each have issues the other doesn’t (MS’s 1.0 version had glaring security holes and is too human-readable; Apple’s version sends your data to a server for processing instead of staying all on-device), but they’re fundamentally doing the same thing with the same startling access to your data. Both companies insist they’re not logging your data anywhere outside your device. To me, that’s not enough in either case.


  • The Giant Bomb site player specifically was way better than the contemporary YouTube player for a good long while. They were also better at prioritizing bitrate over resolution, since they weren’t obsessed with pretending they had a pixel-count advantage over competitors while compressing content down to mush. If anything, it’s ironic that YouTube will now try to sell you bitrate as part of their subscription without cranking up the resolution, presumably because their creators no longer even try to upload 4K.
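
    To put rough numbers on it (the bitrate figure is an illustrative guess), the same budget spread over four times the pixels leaves a quarter of the bits per pixel:

    ```python
    # Bits-per-pixel arithmetic: why a high-resolution stream at a
    # starved bitrate turns to mush. The 8 Mbps budget is illustrative.
    def bits_per_pixel(bitrate_mbps: float, width: int, height: int,
                       fps: int = 30) -> float:
        return bitrate_mbps * 1_000_000 / (width * height * fps)

    # The same ~8 Mbps budget spent two ways:
    print(f"1080p: {bits_per_pixel(8, 1920, 1080):.3f} bpp")  # denser, cleaner
    print(f"4K:    {bits_per_pixel(8, 3840, 2160):.3f} bpp")  # 4x as starved
    ```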

    Sorry, now I’m bringing up legacy gripes from a different decade. Carry on.