

You can probably do that right now, actually.
https://www.gsmarena.com/fairphone_5-12540.php + https://postmarketos.org/
or


Fair. I should have fed it a better article. OTOH, I’m confident that this quality of synthesis isn’t native to anything under 70B. So, if the tooling can uplift the reasoning ability of a 4B to that level, that’s pretty good in my book.


Hmm?
“…the EPA has long maintained that such pollution sources require permits under the Clean Air Act” and reiterated that policy on January 15th.
Buckheit is a former official commenting on enforcement failure, not the source of the permitting position. The nuance the model could have flagged better is the gap between the EPA's stated policy and its current enforcement posture under Trump. Those are different things.
Fair critique on the depth, but the attribution isn’t wrong, is it?


Point 1 - no. LLM outputs are not always hallucinations (generally speaking - some are worse than others) but where they might veer off into fantasy, I’ve reinforced with programming. Think of it like giving your 8yr old a calculator instead of expecting them to work out 7532x565 in their head. And a dictionary. And encyclopedia. And Cliff’s notes. And watch. And compass. And a … you get the idea.
The role of the footer is to show you which tool it used (its own internal priors, what you taught it, calculator etc) and what ratio the answer is based on those. Those are router assigned. That’s just one part of it though.
Point 2 is a mis-read. These aren’t instructions or system prompts telling the model “don’t make things up” - that works about as well as telling a fat kid not to eat cake.
Instead, what happens is the deterministic elements fire first. The model gets the answer, which the model then builds context on. It funnels it in the right direction and the llm tends to stay in that lane. That’s not guardrails on AI, that’s just not using AI where AI is the wrong tool. Whether that’s “real AI” is a philosophy question - what I do know and can prove is that it leads to far fewer wrong answers.
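To make the "deterministic elements fire first" idea concrete, here's a minimal sketch. All names (route_query, llm_answer) are illustrative assumptions, not llama-conductor's actual API:

```python
import re

def calculator(expr: str):
    # Deterministic tool: only handles plain multiplication like "7532x565".
    m = re.fullmatch(r"\s*(\d+)\s*[x*]\s*(\d+)\s*", expr)
    if m:
        return int(m.group(1)) * int(m.group(2))
    return None

def llm_answer(query: str) -> str:
    # Stand-in for the probabilistic fallback; a real system would call
    # the model here.
    return f"[LLM prose answer for: {query}]"

def route_query(query: str) -> str:
    # Deterministic tool fires first; its result seeds the context the
    # model builds on, instead of letting the model guess at arithmetic.
    result = calculator(query)
    if result is not None:
        return f"{query.strip()} = {result}  (tool: calculator)"
    return llm_answer(query)
```

So `route_query("7532x565")` comes back from the calculator path, and only queries no deterministic tool claims ever reach the model.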
EDIT: I got my threads mixed. Still same point but for context, see - https://lemmy.world/post/44805995


As was foretold in legend.
Twas but the working of a moment. I just added -
{"term": "meat popsicle", "category": "fifth_element", "definition": "The Fifth Element (1997), 01:04:11. A police officer asks Korben Dallas (Bruce Willis): 'Sir, are you classified as human?' Reply: 'Negative. I am a meat popsicle.' Not an insult or irony - the straightest possible answer to a stupid question. Adopted as Gen X shorthand for acknowledging one's own biological inconsequence with maximum economy of feeling.", "source": "static", "confidence": "high", "tags": ["fifth_element", "pop_culture", "gen_x", "snark", "bruce_willis"]}
– to one file and it was up to speed.
EDIT: dropped in a better definition.
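For anyone curious how a static entry like that might get matched at runtime, here's a minimal sketch. The JSONL-and-substring-match approach is my assumption, not llama-conductor's actual code:

```python
import json

def load_glossary(jsonl_text: str) -> dict:
    # One JSON object per line (JSONL), keyed by lower-cased term.
    entries = {}
    for line in jsonl_text.splitlines():
        line = line.strip()
        if line:
            obj = json.loads(line)
            entries[obj["term"].lower()] = obj
    return entries

def lookup(entries: dict, text: str):
    # Naive substring match: return the first entry whose term appears
    # in the text, else None so the caller can fall back to the model.
    text = text.lower()
    for term, obj in entries.items():
        if term in text:
            return obj
    return None
```

One append to the file, reload, and the term resolves deterministically from then on.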


Because sometimes, people deserve to have their faith rewarded when they go looking :)
Now go look at the about section or the “Some problems This Solves” on the repo, and enjoy the absurdity of sentient yeast :)
PS: Yes, please do try it
PPS: HAHA! You can run it on your phone RIGHT NOW. Well, you can run it on your PC and then access it on your phone via http://&lt;your-PC's-LAN-IP&gt;:8088/ when you're on the same LAN / WiFi (127.0.0.1 only resolves to the phone itself, so use the PC's address). Given that Tailscale exists, you could probably make that happen outside of your home too, firewall troubleshooting notwithstanding. (One of my personal use-cases for llama-conductor is exactly that.)
Personally, I really like the below app (it's what I use to access llama-conductor via my phone) and am considering forking it and making it more streamlined.
https://github.com/Taewan-P/gpt_mobile
There's an issue with it: on older (pre-Android 12) versions, requests time out after 30 seconds. A ##mentats triple pass can take longer than that on my shit-tier GPU, so I may need some jiggery-pokery. I tried forcing keep-alive via llama-conductor but gpt_mobile just sort of ignored me.
Be aware this is not a multi-tenancy rig - it assumes 1 user at a time. You CAN have more than one person access it, of course, but stuff you add via !! they may be able to recall via ?? on their end, so don't plan any extravagant murders in plain sight (!! DIE BART). That was an intentional design decision due to how gpt_mobile works. I'll harden it once I fork that app; the piping is already in place.


o7
We green? We super green? Korben Dallas my man?
PS: I know for sure >>fun mode pulls in a bunch of Firefly, Buffy and 5th Element snark but I dunno if it will pick up on "meat popsicle". Maybe? Let me procrastinate uh, perform some urgent QC right now.
For sure it will once claude-in-a-can is done. I mean, what is the point of an LLM if it can't shit talk you while helping you solve a problem?
https://bobbyllm.github.io/llama-conductor/blog/claude-in-a-can-1/


Thank you!
Done.
Also, go Team Codeberg.


Corporations are people too! /s


Heh. Mass.gravel as the github repo always makes me do a double take.


I don’t understand the M$ endgame with Win 11.
Like, it would be very easy to paint recent gaffes as intentional… but as Hanlon's razor says, "never attribute to malice that which is adequately explained by stupidity".
Putting aside low hanging fruit (end stage capitalism, ai bad etc)…y u do this, Microsoft? You have good people there, right? Top. Men. Right?
I'd love to read something on this topic from an M$ insider / ex-pat. I'm trying to understand why M$ is doing the equivalent of Sideshow Bob stepping on garden rakes.
What’s up over there?


AI already trains on Wikipedia.


Done
I’ll give you the noob safe walk thru, assuming starting from 0
git clone https://codeberg.org/BobbyLLM/llama-conductor.git
cp docker.env.example .env (Windows: copy manually)
docker compose up -d
docker compose --profile webui up -d
Included files:
docker-compose.yml
docker.env.example
docker/router_config.docker.yaml
Noob-safe note for older hardware:
Docs:


Nice :) I have it on right now. Might need a touch more reverb, though that could just be the track (“Silence Between Thunder and Lightning”). Definitely in the ballpark. Cheers for that.
I had an idea for you driving home, though it may introduce scope creep.
Have you considered a hybrid station mode where the user can supply their own music library and Synapse FM intermingles it with the generated tracks? For example, maybe the user uploads a playlist manifest plus files or points Synapse at a Google Drive / Dropbox folder containing MP3s and an .m3u playlist. Then the system could:
So instead of pure AI radio, it becomes something closer to:
“your own music taste, extended infinitely”
That feels like a pretty compelling hook to me… and might actually protect you from the haters.
Set it up so tracks are either played directly, or used as “station DNA” for selection / matching / transitions. Or both.
Or (and this is my preference) you could have it so that the scheduler inserts user tracks every N songs.
You could even allow users to tip the balance, user side:
I’m handwaving away a lot here but even as a local/private beta feature for you alone, it seems like a genuinely interesting direction.
Again - scope creep / you might see it differently than I do. Still, even if you just play with it at home, try it and see if the idea works.
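The "insert a user track every N songs" version could be sketched in a few lines. All names here are made up for illustration, not anything in Synapse FM:

```python
from itertools import cycle

def build_schedule(generated, user_tracks, every_n=3):
    """Yield generated tracks, splicing in one user track after every
    `every_n` generated ones, looping the user library if it runs out."""
    if not user_tracks:
        yield from generated
        return
    users = cycle(user_tracks)
    for i, track in enumerate(generated, start=1):
        yield track
        if i % every_n == 0:
            yield next(users)
```

The user-side "tip the balance" knob then just becomes exposing `every_n` in the station settings.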
Just wanted to share, one journeyman to another.
It’s a good project and you SHOULD post the URL here (I won’t / am respecting your privacy).
Be proud of it, it’s good work.
EDIT: Just caught the jingle between songs - well done! Exactly right.


Nice! Sing out (ha ha) when it’s done so we can try it.


Do it :) It would add a lot, I think. Though it introduces some complexity on your end if you have to geo-tag canonical feeds per user, per location, to extract from; a few set ones (technology, science, world news, etc.) per station might be easier… but then have the DJ announce in the voice of wherever that IP address is from?
Dunno. You’re clearly more than capable of working it out, so I look forward to seeing what you do.


Yes, it would be useful, I think. You could, for example, source something from an RSS feed to turn into a newscast - just 2-3 items - as part of the station support jingles. You'd maybe have to ask Claude etc. for some ideas (perhaps pulling different RSS feeds to match the station? The synthwave one might pull in Ars Technica or something).
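Grabbing 2-3 headlines from a feed is doable with just the standard library. A rough sketch, assuming standard RSS 2.0 structure (which feed goes with which station is left to you):

```python
import xml.etree.ElementTree as ET

def top_headlines(rss_xml: str, limit: int = 3):
    # RSS 2.0 nests items under channel/item, each with a <title> child;
    # iter() walks the tree recursively so we find them either way.
    root = ET.fromstring(rss_xml)
    titles = [item.findtext("title", default="").strip()
              for item in root.iter("item")]
    return [t for t in titles if t][:limit]
```

Those titles could then be handed to the DJ voice as the newscast script between tracks.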
I'll keep an eye out for Cloud 9 Chill.
Is there a blog or some such you use to discuss the architecture of SynapseFM? Would be curious to know more.


Yeah, I just caught the tail end of a DJ announcement on the Island Vibes station. This is a great idea…but you will get murdered by the Lemmy / Reddit “this is AI slop” hivemind. I suspect those people haven’t turned on the radio in their car any time recently; I’d rather listen to this tbh
PS: submitted a request for something like ambient LoFi girl.
PPS: if you’ve got AI DJs…can we expect AI podcasts or short segments at some stage? Lean into the whole Three Dogs (Fallout 3) vibe.


Cheers! I will take a look. Weird how hostile Lemmy is to ai - especially LocalLLaMa. Think you got brigaded.
EDIT: Holy shit dude - that’s amazing. Well done!
“…specifically crafted to demonstrate tasks that humans complete easily”
Motherfucker, I can’t work out Minesweeper. I got zero fucking chance with your mystery box bloop game.