If they’re not great, it’s your fault /thread 😅
- 1 Post
- 15 Comments
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 6 months ago
I believe right now it's also valid to ditch NVIDIA, given a certain budget. Let's see what can be done with large unified memory; maybe things will be different by the end of the year.
rkd@sh.itjust.works to Economy@lemmy.world • The Trouble With Trump's Deal With Nvidia And AMD: It's An Export Tax
11 · 6 months ago
chat is this socialism
rkd@sh.itjust.works to Technology@piefed.social • Judge says FTC investigation into Media Matters 'should alarm all Americans'
3 · 6 months ago
somebody do something any day now
rkd@sh.itjust.works to Ukraine@sopuli.xyz • European leaders including Starmer to join Zelenskyy in Washington for meeting with Trump
5 · 6 months ago
no more fokin ambushes
rkd@sh.itjust.works to Television@piefed.social • Conan O'Brien Says Late Night TV is Dying, but Stephen Colbert Is 'Too Talented and Too Essential to Go Away'
526 · 6 months ago
Let it die. A show this frequent can only end up being boring.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • HP Z2 Mini G1a Review: Running GPT-OSS 120B Without a Discrete GPU (English)
1 · 6 months ago
For some weird reason, in my country it's easier to order a Beelink or a Framework than an HP. They will sell everything else, except what you want to buy.
rkd@sh.itjust.works to LocalLLaMA@sh.itjust.works • GPT-OSS 20B and 120B Models on AMD Ryzen AI Processors (English)
1 · 6 months ago
Remind me: what are the downsides of possibly getting a Framework desktop for Christmas?
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 6 months ago
That's a good point, but it seems there are several ways to make models fit in smaller-memory hardware. There aren't many options, though, to compensate for not having the ML data types that allow NVIDIA to be something like 8x faster sometimes.
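The "make models fit in smaller memory" point above mostly comes down to weight precision. A minimal back-of-envelope sketch, using illustrative numbers (not measured sizes for any specific model):

```python
# Rough sketch: how parameter precision changes a model's weight footprint.
# Ignores activations, KV cache, and runtime overhead; numbers are illustrative.

def model_size_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight storage in GB for a given precision."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# A hypothetical 120B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_size_gb(120, bits):.0f} GB")
# 16-bit: ~240 GB, 8-bit: ~120 GB, 4-bit: ~60 GB
```

This is why 4-bit quantization is what makes 100B+-class models plausible on unified-memory machines, while it does nothing to recover the throughput advantage of hardware-native low-precision data types.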
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
1 · 6 months ago
For image generation, you don't need that much memory. That's the trade-off, I believe. Get an NVIDIA card with 16GB of VRAM to run Flux, and have something like 96GB of RAM for GPT-OSS 120B. Or you give up on fast image generation and go with an AMD Max+ 395 like you said, or Apple Silicon.
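The split described above can be sanity-checked with simple arithmetic. The model sizes below are assumptions for illustration (a quantized Flux checkpoint and a ~4-bit GPT-OSS 120B), not vendor specs:

```python
# Hypothetical check for the proposed split: 16 GB GPU for image generation,
# large system RAM for a quantized 120B-class LLM. Sizes are assumptions.

def fits(model_gb: float, budget_gb: float, overhead: float = 1.2) -> bool:
    """True if the model plus ~20% runtime overhead fits in the memory budget."""
    return model_gb * overhead <= budget_gb

flux_quantized_gb = 12    # assumed: quantized Flux checkpoint around 12 GB
gpt_oss_120b_gb = 65      # assumed: GPT-OSS 120B at ~4-bit, around 65 GB

print(fits(flux_quantized_gb, 16))   # GPU side: 14.4 GB <= 16 GB
print(fits(gpt_oss_120b_gb, 96))     # RAM side: 78 GB <= 96 GB
```

Under these assumptions both halves of the split fit with headroom, which is the whole appeal of the two-tier setup over a single unified-memory box.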
rkd@sh.itjust.works (OP) to LocalLLaMA@sh.itjust.works • So image generation is where it's at? (English)
3 · 6 months ago
I'm aware of it, and it seems cool. But I don't think AMD fully supports the ML data types used in diffusion, and that's why it's slower than NVIDIA.

it's most likely math
rkd@sh.itjust.works to Games@sh.itjust.works • Nintendo-owned titles excluded from Japan's biggest speedrunning event after organizers were told they had to apply for permission for each game (English)
13 · 6 months ago
Congratulations Nintendo, you played yourself.


I can read minds and they’re thinking “we better get some money around here, otherwise we’re still blaming the immigrants”.