• 4 Posts
  • 119 Comments
Joined 2 months ago
Cake day: February 11th, 2026

  • Ah, I’ll put in a zoom feature, that’s a good idea!

    Remind me of the hardware you’re running on? 22 hours for a 4K HDR movie sounds about right for converting on CPU. I’ve just switched to Linux (Mint, not Cachy) and I think there’s an issue with detecting the GPU on Linux, so that would track (or you have Precision Mode enabled) - if you see “libx265” or “libx264” in the top right, you’re on CPU. I’m looking into this one.
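
    For reference, the CPU-vs-GPU check boils down to which HEVC encoder ffmpeg reports as available. A minimal sketch (hypothetical helper, not the tool’s actual code; the hardware encoder names are the standard ffmpeg ones):

```python
def pick_hevc_encoder(encoder_listing: str) -> str:
    """Pick a hardware HEVC encoder from the text output of
    `ffmpeg -hide_banner -encoders`, falling back to CPU encoding.

    Hypothetical helper: hevc_nvenc is NVIDIA, hevc_vaapi is the usual
    Intel/AMD path on Linux, hevc_qsv is Intel Quick Sync.
    """
    for name in ("hevc_nvenc", "hevc_vaapi", "hevc_qsv"):
        if name in encoder_listing:
            return name
    return "libx265"  # CPU encode - the "libx265" you'd see in the top right
```

    If the GPU device isn’t exposed to ffmpeg (a common snag on fresh Linux installs), this falls through to libx265 and the encode runs on CPU.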

    Can I ask which version you downloaded? I’ll look into the DVTools/MP4box issue.

    Also, yes, I removed the codec and container selection boxes - it’s HEVC/MKV by default, unless you go for “Compatibility Mode”, in which case you get H.264/MP4. “Preserve AV1” of course preserves AV1, which is incompatible with MP4, so the two options are mutually exclusive.
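
    Those mode rules can be sketched as (a hypothetical function for illustration - names made up, not the tool’s actual API):

```python
def choose_codec_container(compatibility_mode: bool = False,
                           preserve_av1: bool = False) -> tuple:
    """Codec/container rules as described: HEVC/MKV by default,
    H.264/MP4 in Compatibility Mode, and Preserve AV1 keeps AV1 in MKV.

    Preserve AV1 can't be combined with Compatibility Mode's MP4 output,
    so the two options are mutually exclusive.
    """
    if compatibility_mode and preserve_av1:
        raise ValueError("Compatibility Mode and Preserve AV1 are mutually exclusive")
    if compatibility_mode:
        return ("h264", "mp4")
    if preserve_av1:
        return ("av1", "mkv")
    return ("hevc", "mkv")
```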





  • When Reddit started, Spez and Aaron made “sock puppet” accounts to make the site seem more active than it was, because you have to have what looks like an active user base to attract more users.

    Now that Reddit has gone public, they need the appearance of lots of users to attract advertising dollars and keep their stock price high. And there’s no need to operate sock puppets by hand anymore, because LLMs can be the sock puppets - if you have enough of them acting human enough, it doesn’t even matter if some people realise, or if one gets called out as a bot.

    This also has the extremely useful benefit of steering society slowly towards the ideology of the billionaire, by having those bots normalise hate in the tsunami of messages they post.

    I think the account you interacted with was an LLM, not even a real troll.
















  • Ah-ha, thanks for the update on Docker! Saves me going down that rabbit hole 😅

    On the files on the NAS: yep, that’s by design. My files are across the WAN, not LAN, so I built it to stage remote files locally before transcoding. It currently pulls a file, transcodes it, and moves it to wherever you choose for output. This does mean that going over a network is slow, because you have to wait for the staging and cleanup before starting another file. That’s deliberately conservative, though; I wanted to avoid saturating networks in case the network operator takes exception to that sort of thing. A secondary benefit is that the disk space required for operations is only about twice the size of the source file - very low chance of having to pause a job because the disk monitoring detected there’s no room.

    I’ll look at putting in an override that disregards the network and treats remote files as local for you!
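
    The stage → transcode → move flow described above could look roughly like this (a sketch under assumptions, not the actual implementation; `transcode` stands in for whatever does the encode):

```python
import shutil
import tempfile
from pathlib import Path

def stage_and_transcode(src: Path, dest_dir: Path, transcode) -> Path:
    """Copy a (possibly remote) file into local scratch space, transcode it,
    then move the result to the chosen output directory.

    Peak disk use stays around twice the source size: the staged copy plus
    the encoded output, with the staged copy deleted before the final move.
    """
    with tempfile.TemporaryDirectory() as scratch:
        staged = Path(scratch) / src.name
        shutil.copy2(src, staged)              # 1. pull the file locally
        encoded = Path(scratch) / (src.stem + ".out.mkv")
        transcode(staged, encoded)             # 2. transcode the local copy
        staged.unlink()                        # 3. clean up the staged source
        final = dest_dir / encoded.name
        shutil.move(str(encoded), str(final))  # 4. move to the output location
    return final
```

    Because each file finishes its staging and cleanup before the next one starts, WAN throughput is the bottleneck - which is the conservative behaviour described above.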