

Depends; the Index alone (no trackers, no controllers) is quite a bit cheaper.


I personally expect that they will want to cover the hardware cost, while being willing to subsidize the development cost through the Steam store.
So I think the low-storage models will make a slight loss for Valve in the beginning, while the higher-storage models will bring that back to even overall.


The most disappointing part is the rumoured pricing of “aiming to be under $1000”.
$1 is “under $1000”.
At $999, even for the 1TB model, this is a really tough sell.
I’m not sure I’d get one, even though I love what they’re doing, and want to support it.
I really hope that the “under $1000” is a misunderstanding of “cheaper than Index”, which currently sells for 539€ (~$625, incl. tax) without controllers and base stations.
That would be a great price.
I can only hope it’s nearer to that than to $1000.


I think all of these are nice, if priced correctly.
Steam Frame needs to compete with the Quest, so prices over $800 are a really tough sell.
Steam Machine needs to compete with consoles, the PS5 (non-Pro) and the Series S, so prices over ~$700 will become really tough.
Prices start becoming really good if they manage to come in at ~$600 for the Steam Frame and ~$500 for the Steam Machine.
But with current hardware prices, and Valve being Valve, no one can know whether they want to make money on the hardware, sell at cost, or subsidize it, so who knows where we will land.


Prusa is way more open, but significantly more expensive, especially when buying assembled.
If you want multicolor/multimaterial, their current (fairly soon to be replaced) solution is not considered as user-friendly as the current Bambu solution.
Yes, when the build volume is 10x10x10 you can print things within that volume, but of course it still has to be a printable shape.
A T shape, for example, would be difficult to print: printers print layer by layer, and the “arms” at the top would have nothing to be “stuck on”, so you’d need what are called “supports”, printed shapes that are there only to hold up the actual object you want to print. Usually, where support meets object, the surface quality of the print suffers to some degree.
In the case of a T shape, just print it upside down then ;)


I just want to note that they recently did take functionality away. Home Assistant integration, Panda Touch, and OrcaSlicer, for example: all of those now have to either use LAN mode, not update the firmware, or jump through more hoops than before.


Gameyfin exists as well


Not even that though, just that his opinions are not always wrong, and his investments sometimes right


Well, reusable rockets being worth it, LEO satellite Internet being worthwhile, electric cars being a thing.
Stuff like that.


You do realize that you lose quality with each encode, right?
It’s not AS bad when bitrates are high, but it’s still there.
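
To make the generational loss concrete, here’s a minimal sketch (assuming ffmpeg is on your PATH; file names, codec, and CRF are placeholder choices) that re-encodes a clip several times and measures PSNR against the original. The score creeps downward a little with every generation, even at decent quality settings:

```python
# Re-encode a clip N times with ffmpeg and report PSNR of each generation
# against the original. Requires ffmpeg on PATH; names/values are examples.
import re
import subprocess

ORIGINAL = "original.mp4"  # hypothetical input clip
GENERATIONS = 5

def encode(src: str, dst: str) -> None:
    # One lossy H.264 encode; CRF 23 is a middling quality setting.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", "23", "-an", dst],
        check=True, capture_output=True,
    )

def psnr_vs_original(candidate: str) -> float:
    # ffmpeg's psnr filter compares the first input (distorted) to the
    # second (reference) and prints an "average:" value on stderr.
    result = subprocess.run(
        ["ffmpeg", "-i", candidate, "-i", ORIGINAL, "-lavfi", "psnr", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    match = re.search(r"average:([\d.]+)", result.stderr)
    return float(match.group(1)) if match else float("nan")

src = ORIGINAL
for gen in range(1, GENERATIONS + 1):
    dst = f"gen{gen}.mp4"
    encode(src, dst)
    print(f"generation {gen}: PSNR vs original = {psnr_vs_original(dst):.2f} dB")
    src = dst
```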


But that’s exactly the point.
If the virtual map they’re building from cameras is complete, correct, and stable (and presumably meets some other criteria that I can’t think of off the top of my head), then the cameras would be sufficient.
The underlying neural decision network can still fuck things up from a correct virtual world map.
Now, how good is the virtual world map in real world conditions?


Well, do we know what the blockers are for Tesla?
I feel like when I watch videos of FSD on cars, the representation of the world on the screen is rather good.
Now, given this datapoint of me watching maybe 30 minutes of video in total, is the issue in:
a) estimating the distance to obstacles in the surroundings from cameras, or in:
b) reading street signs, road markings, stop lights etc, or in:
c) doing the right thing, given a correct set of data about the surroundings?
Lidar / Radar / Sonar would only help for a).
Or is it a combination of all of them, and the (relatively) cheap sensors would at least eliminate a), so one could focus on b and c?


I’ve always wanted to try dipping an FDM-printed mini into some craft UV-curing resin.
In my imagination that makes everything better ;), but I’ve never gotten around to trying it.


Because you don’t train your self-hosted LLM.
As a result you only pay for the electricity of computing your tokens (your request); this can be especially reasonable if the same machine also does local game streaming and/or transcoding, and thus already meets the requirements to host an LLM.
Unless you have rather unreasonable means, your local LLM is just much more limited in parameters (size), and will not be as good as other, much larger models.
Privacy, ethics, and personal interest usually are the largest drivers, from what I can tell.
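
For illustration, here’s a minimal sketch of what “computing your tokens” locally looks like, assuming a self-hosted Ollama server on its default port with a small model already pulled (the model name is just an example):

```python
# Query a locally hosted LLM; the only marginal cost is the electricity
# your own hardware draws while generating. Assumes an Ollama server on
# localhost:11434 with the named model pulled.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3.1:8b") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses arrive as one JSON object with the full text.
        return json.loads(resp.read())["response"]

print(ask_local_llm("In one sentence, why self-host an LLM?"))
```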


When asked about Nintendo’s solution for backwards compatibility with Switch games and the GameCube classics available on the system, the developers confirmed these games are actually emulated. (This is similar to what Xbox does with backwards compatibility).
“It’s a bit of a difficult response, but taking into consideration it’s not just the hardware that’s being used to emulate, I guess you could categorize it as software-based,” Sasaki said of the solution.
They are (mostly?) talking about GameCube, right?
…right?
Or is that the reason for the Switch emulator witch hunt: they actually “bought” the tech?


I’ve had partial clogs that manifest like that.
Cold pulls (several) ended up resolving my issue.
My best explanation was that there was some debris in the nozzle, which would sometimes (nearly) seal it, and at other times be retracted with the filament and get stuck somewhere else, letting the filament flow freely.


The whole idea is they should be safer than us at driving. It only takes fog (or a painted wall) to conclude that won’t be achieved with cameras only.
Well, I do still think that cameras could reach “superhuman” levels of safety.
(Very dense) fog makes the cameras useless; a self-driving car would have to slow way down or shut itself off. If the cameras are part of a variety of inputs, they drop out just the same, reducing the available information. How would you handle that then? If the car still has to drop out or slow down just as much, you gain nothing. /e: my original interpretation is obviously wrong; you get the additional information whenever the environment permits.
And as for the painted wall: cameras should be able to detect that. It’s just that Tesla presumably hasn’t implemented defenses against active attacks yet.


You had a lot of hands in this paragraph. 😀
I like to keep spares on me.


I’m exceptionally doubtful that the related costs were anywhere near this number.
Cost has been developing rapidly. Pretty sure that several years ago (about when Tesla first started announcing it would be ready in a year or two) it was in the tens of thousands. But you’re right, more current estimates seem to be in the range of $500-2000 per unit, and 0-4 units per car.


it’s inconceivable to me that cameras only could ever be as safe as having a variety of inputs.
Well, diverse sensors always reduce the chance of confident misinterpretation.
But they also mean you can’t “do one thing, and do it well”, as now you have to do two to four things (camera, lidar, radar, sonar) well. If one were to get to the point of having either one really good data source or four really shitty ones, it becomes conceivable to me.
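
To put a toy number on “diverse sensors reduce the chance of confident misinterpretation”: with independent Gaussian noise, inverse-variance fusion of two estimates always has lower variance than either sensor alone. All values below are invented for illustration:

```python
# Toy inverse-variance fusion of two independent, noisy distance estimates
# (e.g. camera vs. lidar). Numbers are invented; Gaussian noise assumed.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    # Weight each estimate by the inverse of its variance; the fused
    # variance 1/(1/var_a + 1/var_b) is smaller than either input variance.
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var

# Noisy camera estimate: 21 m with variance 4; tighter lidar: 19.5 m, 0.25.
est, var = fuse(21.0, 4.0, 19.5, 0.25)
print(f"fused: {est:.2f} m, variance {var:.2f}")  # ~19.59 m, 0.24
```

Of course, that variance reduction only materializes if each sensor’s error model is actually right, which is the “doing two to four things well” part.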


From what I remember, there is distressingly little oversight for allowing self-driving cars on the road, as long as the company is willing to be on the hook for accidents.


Well, Apollo was not part of the Saturn program, was it?
The rocket did fine; even during Apollo 13 it wasn’t the Saturn that failed.


They talk about the safety record of Saturn rockets without mentioning that using those isn’t currently possible.
And that, at least from my memory, multiple people in the Saturn program considered it extremely good luck not to have had a failure that led to deaths.


Well, he could have backed out (which would have been safer).
With this ruling, the FIA says that that’s not required.