

I use a Raspberry Pi 3B to run Home Assistant. Even though it's not officially supported, it runs quite well for my uses.
Back when I had a public IP, it also ran a small personal website and a WireGuard VPN server so I could always SSH into my desktop at home (the setup effectively joined the VPN-connected device into the LAN, to the point where it even got its IP from my router's DHCP server…).
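For reference, a minimal point-to-site WireGuard server config on the Pi might look like the sketch below. All keys, addresses, and interface names are placeholders, not from the post; and note that plain WireGuard is layer 3, so this gives routed access to the LAN rather than the DHCP-lease-from-the-router behavior described above, which needs extra bridging tricks on top.

```ini
; /etc/wireguard/wg0.conf on the Pi (illustrative values only)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
; NAT peer traffic out the Pi's LAN interface so clients can reach LAN hosts
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
; e.g. a laptop that should be able to SSH home
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

The client side would then route the LAN subnet (e.g. `AllowedIPs = 10.0.0.0/24, 192.168.1.0/24`) through the tunnel to reach the desktop.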









I looked at some examples of what it generates, and it has some of the same problems as AI video: disfigured human crowds, and it "forgets" the environment you are in, which sometimes means the room around you has changed since you last looked (on top of that, these "games" are basically walking simulators without any interactivity or animations). For now this makes for very boring "games" where nothing really matters because there's no object persistence, and I don't see how they are going to solve this issue, because it looks like an inherent flaw of current AI technology (managing context windows is challenging for LLMs too).
IMHO this can at best be used as a rendering technology, or to make a photo explorable in 3D (with some made-up parts), but not for games.
In case you also want to see more than the cherry-picked examples posted by Google, I found this video with no commentary (not made by me; I hope posting YouTube links is OK by the rules): https://www.youtube.com/watch?v=jCTWCx8UPlI