

This isn’t actually using a vision LLM, it’s using a CLIP model. This image comes from an OpenAI blog from 2019 I think
I’m the developer of the Photon client. Try it out




As much as sideloading is a strange term, it is one that most people understand. If someone were to search for information about this specific topic, they would search for “sideloading”, not whatever alternative we come up with, regardless of how “accurate” it is

If you remove it, the community will remain in your database, and the removal will federate to other instances. It will still take up storage space.
If you purge, all data about that community gets wiped.
I believe your best choice here is to purge, as that only happens on your home instance and has no impact on other instances.

Removing is an admin “action”: the community will still have a history of previously existing, but is now in a “removed” state. This will federate.
“Purging” wipes it from the database.


4/6 bait


I’d argue Svelte’s main differentiator is its simplicity compared to many frameworks. The syntax is essentially just an HTML page, with <script> tags for reactive code and normal HTML markup with additions like {#if} or {#each}.
As for debugging, a typical Svelte project run with Vite will have source maps that make the browser’s debug view look very close to your actual code.
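For illustration, here’s a minimal sketch of what that looks like (variable names made up; classic Svelte 3/4 style):

```svelte
<script>
  // Reassigning these variables triggers a re-render
  let show = true;
  let items = ['a', 'b', 'c'];
</script>

{#if show}
  <ul>
    {#each items as item}
      <li>{item}</li>
    {/each}
  </ul>
{/if}
```

The whole component is one file that reads like plain HTML, which is the simplicity being described.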
gemini logo in the bottom right corner???
i checked the twitter account and it’s in russian. i guess they used nano banana to translate the text in an image of text. beautiful
Call me cringe but this all works in Fish which is what I primarily use


sapnu puas


won’t care? yeah probably. aren’t intelligent enough? that’s an insane generalization, knowledgeable about technology ≠ smart


As a memory-poor user (hence the 8gb vram card), I consider Q8+ to be higher precision, Q4–Q5 to be mid-low precision (what i typically use), and anything below that to be low precision


It’s a webp animation. Maybe your client doesn’t display it right, i’ll replace it with a gif
Regarding your other question, I tend to see better results with higher params + lower precision, versus low params + higher precision. That’s just based on “vibes” though, I haven’t done any real testing. Based on what I’ve seen, Q4 is the lowest safe quantization, and beyond that, the performance really starts to drop off. unfortunately even at 1 bit quantization I can’t run GLM 4.6 on my system
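As a rough sanity check on that tradeoff: weight memory scales with params × bits per weight, which is why a bigger model at Q4 can fit in the same space as a smaller one at Q8. A quick sketch (illustrative numbers only, ignoring KV cache and runtime overhead):

```python
def approx_weight_gb(params: float, bits: int) -> float:
    """Approximate weight memory in GB: params * bits / 8 bytes per weight."""
    return params * bits / 8 / 1e9

# A 14B model at Q4 and a 7B model at Q8 both land around 7 GB of weights,
# so "higher params + lower precision" costs the same memory as the reverse.
print(approx_weight_gb(14e9, 4))  # 7.0
print(approx_weight_gb(7e9, 8))   # 7.0
```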




this made me mad so i made a single, ultra minimal html page in 5 minutes that you can just paste in your url box
data:text/html;base64,PCFkb2N0eXBlaHRtbD48Ym9keSBzdHlsZT10ZXh0LWFsaWduOmNlbnRlcjtmb250LWZhbWlseTpzYW5zLXNlcmlmO2JhY2tncm91bmQ6IzAwMDtjb2xvcjojMmYyPjxoMT5JcyBpdCBETlM/PC9oMT48cCBzdHlsZT1mb250LXNpemU6MTJyZW0+WWVz
source code:
<!doctypehtml><body style=text-align:center;font-family:sans-serif;background:#000;color:#2f2><h1>Is it DNS?</h1><p style=font-size:12rem>Yes
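If you want to build a data: URL like that yourself, a quick sketch using only the standard library:

```python
import base64

# The page source from above, as a single string
html = ("<!doctypehtml><body style=text-align:center;"
        "font-family:sans-serif;background:#000;color:#2f2>"
        "<h1>Is it DNS?</h1><p style=font-size:12rem>Yes")

# base64-encode the page and prepend the data: scheme + media type
url = "data:text/html;base64," + base64.b64encode(html.encode()).decode()
print(url)
```

Paste the printed string into the address bar and the browser renders it directly, no server involved.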


there is a lemmy user out in the wild right now that is secretly an ai bot


Not sure if it makes any difference; you can already just use the API



Very out of character with the way this guy writes (check post history), and it’s a super generic reddit tier comment. If you ask chatgpt to write like a redditor, you can get responses similar to this.
Or, very good at sounding like one.


absolute insanity that this ai generated/copypasta answer is getting upvotes. you are a god at baiting lemmy users


downvote wave incoming because ai (watch out)
The link is a proxied image link for some reason.