  • Point 1 - no. LLM outputs are not always hallucinations (generally speaking - some are worse than others), but where they might veer off into fantasy, I’ve reinforced them with programming. Think of it like giving your 8yr old a calculator instead of expecting them to work out 7532x565 in their head. And a dictionary. And an encyclopedia. And Cliff’s notes. And a watch. And a compass. And a … you get the idea.

    The role of the footer is to show you which tool the answer used (its own internal priors, what you taught it, the calculator, etc.) and in what ratio the answer draws on each. Those ratios are router-assigned. That’s just one part of it, though.

    Point 2 is a misread. These aren’t instructions or system prompts telling the model “don’t make things up” - that works about as well as telling a fat kid not to eat cake.

    Instead, the deterministic elements fire first. The model gets the answer, then builds its context on top of it. That funnels things in the right direction, and the LLM tends to stay in that lane. That’s not guardrails on AI; that’s just not using AI where AI is the wrong tool. Whether that’s “real AI” is a philosophy question - what I do know and can prove is that it leads to far fewer wrong answers.
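
    If you want the shape of it, here’s a minimal sketch of the “deterministic first” routing - the names and logic are illustrative, not llama-conductor’s actual code:

        # Hedged sketch: a cheap exact tool fires before the LLM, and its output
        # is planted in the context so the model builds on a known-correct answer
        # instead of inventing one.
        import re

        def calculator(query: str):
            """Deterministically handle plain arithmetic like '7532x565'."""
            m = re.fullmatch(r"\s*(\d+)\s*[x*]\s*(\d+)\s*", query)
            return int(m.group(1)) * int(m.group(2)) if m else None

        def route(query: str, llm) -> str:
            result = calculator(query)          # deterministic pass fires first
            if result is not None:
                # the model receives the answer up front and merely explains it
                return llm(f"Known result: {query} = {result}. Answer using this.")
            return llm(query)                   # no tool matched: plain LLM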

    EDIT: I got my threads mixed. Still same point but for context, see - https://lemmy.world/post/44805995


  • As was foretold in legend.

    Negative

    Twas but the working of a moment. I just added -

    {
      "term": "meat popsicle",
      "category": "fifth_element",
      "definition": "The Fifth Element (1997), 01:04:11. A police officer asks Korben Dallas (Bruce Willis): 'Sir, are you classified as human?' Reply: 'Negative. I am a meat popsicle.' Not an insult or irony - the straightest possible answer to a stupid question. Adopted as Gen X shorthand for acknowledging one's own biological inconsequence with maximum economy of feeling.",
      "source": "static",
      "confidence": "high",
      "tags": ["fifth_element", "pop_culture", "gen_x", "snark", "bruce_willis"]
    }

    – to one file and it was up to speed.
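
    For reference, the mechanics are just an append - one JSON object per line. The filename and schema handling below are illustrative; the real layout may differ:

        # Hedged sketch of the "added it to one file" step (JSONL store).
        import json

        entry = {
            "term": "meat popsicle",
            "category": "fifth_element",
            "source": "static",
            "confidence": "high",
            "tags": ["fifth_element", "pop_culture", "gen_x", "snark", "bruce_willis"],
        }

        with open("static_definitions.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")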

    EDIT: dropped in a better definition.


  • Because sometimes, people deserve to have their faith rewarded when they go looking :)

    Now go look at the about section or the “Some problems This Solves” on the repo, and enjoy the absurdity of sentient yeast :)

    PS: Yes, please do try it

    PPS: HAHA! You can run it on your phone RIGHT NOW. Well, you can run it on your PC and then access it from your phone via your PC’s LAN address (e.g. http://192.168.1.x:8088/ - 127.0.0.1 is loopback, so it only works on the PC itself) when you’re on the same LAN / WiFi. Given that Tailscale exists, you could probably make that happen outside of your home too, firewall troubleshooting notwithstanding. (One of my personal use-cases for llama-conductor is exactly that.)

    Personally, I really like the app below (it’s what I use to access llama-conductor from my phone), and am considering forking it to make it more streamlined.

    https://github.com/Taewan-P/gpt_mobile

    There’s an issue with it in that older (pre-Android 12) versions time out after 30 seconds. A ##mentats triple pass can take longer than that on my shit-tier GPU, so I may need some jiggery-pokery. I tried forcing keep-alive via llama-conductor, but gpt_mobile just sort of ignored me.
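
    For what it’s worth, the usual workaround for a hard client-side read timeout is a streaming heartbeat: a shim that emits ignorable bytes while the slow pass runs. A minimal sketch, assuming a FastAPI shim in front of the backend - all the names here are mine, and whether gpt_mobile honors SSE comments is exactly the part that failed for me:

        # Hedged sketch: stream SSE comment lines (": ping") every 10 s while the
        # slow upstream call runs, so clients with a fixed ~30 s read timeout stay
        # connected. UPSTREAM and the route are illustrative.
        import asyncio
        import httpx
        from fastapi import FastAPI
        from fastapi.responses import StreamingResponse

        app = FastAPI()
        UPSTREAM = "http://127.0.0.1:8080/v1/chat/completions"  # assumed backend URL

        @app.post("/v1/chat/completions")
        async def proxy(payload: dict):
            async def stream():
                async with httpx.AsyncClient(timeout=None) as client:
                    task = asyncio.create_task(client.post(UPSTREAM, json=payload))
                    while not task.done():
                        yield b": ping\n\n"       # SSE comment; clients should ignore it
                        await asyncio.sleep(10)   # heartbeat interval, well under 30 s
                    yield b"data: " + task.result().content + b"\n\n"
            return StreamingResponse(stream(), media_type="text/event-stream")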

    Be aware this is not a multi-tenancy rig - it assumes one user at a time. You CAN have more than one person access it, of course, but stuff you add via !! they may be able to recall via ?? on their end, so don’t plan any extravagant murders in plain sight (!! DIE BART). That was an intentional design decision due to how gpt_mobile works. I’ll harden it once I fork that app; the piping is already in place.


  • Done

    I’ll give you the noob-safe walkthrough, assuming you’re starting from 0:

    1. Install Docker Desktop (or Docker Engine + Compose plugin).
    2. Clone the repo: git clone https://codeberg.org/BobbyLLM/llama-conductor.git
    3. Enter the folder and copy env template: cp docker.env.example .env (Windows: copy manually)
    4. Start core stack: docker compose up -d
    5. If you also want Open WebUI: docker compose --profile webui up -d

    Included files:

    • docker-compose.yml
    • docker.env.example
    • docker/router_config.docker.yaml

    Noob-safe note for older hardware:

    • Use smaller models first (I’ve given you the exact ones I use as examples).
    • You can point multiple roles to one model initially.
    • Add bigger/specialized models later once stable.

    Docs:

    • README has Docker Compose quickstart
    • FAQ has Docker + Docker Compose section with command examples
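
    And a quick sanity check that the stack actually came up - the port below is the one mentioned elsewhere in this thread; adjust to whatever your .env says:

        # Hedged sanity probe after `docker compose up -d`.
        import urllib.request

        try:
            status = urllib.request.urlopen("http://127.0.0.1:8088/", timeout=5).status
            print(f"router answered with HTTP {status}")
        except OSError as e:
            print(f"stack not reachable yet: {e}")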

  • Nice :) I have it on right now. Might need a touch more reverb, though that could just be the track (“Silence Between Thunder and Lightning”). Definitely in the ballpark. Cheers for that.

    I had an idea for you driving home, though it may introduce scope creep.

    Have you considered a hybrid station mode where the user can supply their own music library and Synapse FM intermingles it with the generated tracks? For example, maybe the user uploads a playlist manifest plus files or points Synapse at a Google Drive / Dropbox folder containing MP3s and an .m3u playlist. Then the system could:

    • randomly select from the user’s own tracks
    • blend them into the generated station flow
    • use metadata / embeddings / simple tagging to keep tonal consistency
    • optionally let the AI DJ introduce those tracks as part of the same station identity

    So instead of pure AI radio, it becomes something closer to:

    “your own music taste, extended infinitely”

    That feels like a pretty compelling hook to me…and might actually protect you from the haters.

    Set it up so tracks are either played directly, or used as “station DNA” for selection / matching / transitions. Or both.

    Or (and this is my preference) you could have it so that the scheduler inserts user tracks every N songs - there’s a rough sketch of that after the list below.

    You could even allow users to tip the balance, user side:

    • AI only
    • Mostly AI
    • Balanced
    • Mostly my library
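
    Something like this, maybe - everything here (the ratios, the track sources) is illustrative, not a real Synapse FM API:

        # Hedged sketch of the user-side balance + track-interleaving scheduler.
        import random

        RATIOS = {"ai_only": 0.0, "mostly_ai": 0.25, "balanced": 0.5, "mostly_mine": 0.75}

        def station_stream(ai_tracks, user_tracks, mode="balanced"):
            """Endlessly interleave generated and user tracks at the chosen ratio."""
            user_share = RATIOS[mode]
            while True:
                if user_tracks and random.random() < user_share:
                    yield random.choice(user_tracks)   # pull from the user's library
                else:
                    yield next(ai_tracks)              # fall back to the generated flow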

    I’m handwaving away a lot here but even as a local/private beta feature for you alone, it seems like a genuinely interesting direction.

    Again - scope creep / you might see it differently than I do. Still, even if you just play with it at home, try it and see if the idea works.

    Just wanted to share, one journeyman to another.

    It’s a good project and you SHOULD post the URL here (I won’t / am respecting your privacy).

    Be proud of it, it’s good work.

    EDIT: Just caught the jingle between songs - well done! Exactly right.



  • Do it :) It would add a lot, I think. Though it introduces some complexity on your end if you have to geo-tag canonical feeds per user, per location, to extract from; a few set feeds (technology, science, world news, etc.) per station might be easier…but then have the DJ announce in the accent of wherever that IP address is from?

    Dunno. You’re clearly more than capable of working it out, so I look forward to seeing what you do.


  • Yes, it would be useful, I think. You could, for example, source something from an RSS feed to turn into a newscast - just 2-3 items - as part of the station support jingles. You’d maybe have to ask Claude etc. for some ideas (perhaps pulling different RSS feeds to match the station? The synthwave one might pull in Ars Technica or something - rough sketch below).
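
    Something like this, with feedparser - the station-to-feed mapping is illustrative, I don’t know what feeds you’d actually want:

        # Hedged sketch: grab the top three items from a station-matched RSS feed
        # and format them as a newscast blurb for the DJ to read.
        import feedparser  # pip install feedparser

        FEEDS = {"synthwave": "https://feeds.arstechnica.com/arstechnica/index"}  # assumed

        def newscast(station: str, n: int = 3) -> str:
            feed = feedparser.parse(FEEDS[station])
            items = [f"- {entry.title}" for entry in feed.entries[:n]]
            return "In the news:\n" + "\n".join(items)

        print(newscast("synthwave"))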

    I’ll keep an eye out for Cloud 9 Chill.

    Is there a blog or some such you use to discuss the architecture of SynapseFM? Would be curious to know more.


  • Yeah, I just caught the tail end of a DJ announcement on the Island Vibes station. This is a great idea…but you will get murdered by the Lemmy / Reddit “this is AI slop” hivemind. I suspect those people haven’t turned on a car radio any time recently; I’d rather listen to this, tbh.

    PS: submitted a request for something like ambient LoFi girl.

    PPS: if you’ve got AI DJs…can we expect AI podcasts or short segments at some stage? Lean into the whole Three Dog (Fallout 3) vibe.