Yes…but compression
And with CSV you just gotta pray that your parser parses the same as their writer… and that their writer was correctly implemented… and that they set the settings correctly
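To make that concrete, here's a toy Python sketch of the dialect problem (made-up data, obviously):

```python
import csv
import io

# European-style CSV: ';' as the delimiter, ',' as the decimal separator
raw = "id;value\n1;3,14\n2;2,72\n"

# Parsed with the dialect the writer intended:
list(csv.reader(io.StringIO(raw), delimiter=";"))
# -> [['id', 'value'], ['1', '3,14'], ['2', '2,72']]

# Parsed with the default dialect a naive consumer might assume:
list(csv.reader(io.StringIO(raw)))
# -> [['id;value'], ['1;3', '14'], ['2;2', '72']]
```

Same bytes, two "valid" parses, and no error raised either way.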


Allegedly… he hasn't admitted to anything afaik… and as far as I've seen the evidence is flimsy… why would he escape and then hang out in a fast food restaurant with the disposable murder weapon?


Hmm, not so sure. He produced a digital signal whose spectrogram happened to be an image, and then played that digital signal to a bird. Dunno if an analogue spectrogram really even makes sense as a concept. The only analogue part of the chain would be the bird's vocalisations, right?
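For anyone curious, the trick is roughly this - a minimal numpy sketch of additive synthesis, not whatever tool he actually used (the frequency range and frame size here are arbitrary stand-ins):

```python
import numpy as np

sr = 22_050                  # sample rate in Hz
frame_len = 1024             # samples per spectrogram column

# Stand-in for the image: rows = frequency bins, columns = time frames.
# (Just a diagonal stripe here; in practice you'd load real pixel data.)
img = np.eye(64)

n_bins, n_frames = img.shape
freqs = np.linspace(500, 8000, n_bins)   # map image rows to audible frequencies
signal = np.zeros(n_frames * frame_len)

for t in range(n_frames):                # one image column per time frame
    n = np.arange(t * frame_len, (t + 1) * frame_len)
    for f, amp in zip(freqs, img[:, t]):
        if amp > 0:
            # each bright pixel becomes a sinusoid at that row's frequency
            signal[t * frame_len:(t + 1) * frame_len] += amp * np.sin(2 * np.pi * f * n / sr)

# Any spectrogram of `signal` now shows (roughly) the original image.
```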


Well, guess I can’t deny such compelling evidence


As much of a prick as this guy is, I don't think that's true. The Behind the Bastards episode on him couldn't substantiate it, at least


Reverse proxy with mTLS in front might be a simple solution depending on your setup
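If it helps, here's the gist of the mTLS part in toy form - a Python sketch, assuming you've already generated a server cert and a CA for the client certs (file names are placeholders; a real setup would live in your nginx/Caddy/Traefik config instead):

```python
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # the proxy's own identity
ctx.load_verify_locations("client-ca.crt")       # CA that signed the client certs
ctx.verify_mode = ssl.CERT_REQUIRED              # <- this is what makes TLS "mutual"

# Clients without a valid cert are rejected during the handshake,
# before a single request reaches whatever sits behind this.
httpd = HTTPServer(("0.0.0.0", 8443), SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```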


Or like looking at the early days of semiconductors and extrapolating that CPU speed will double every 18 months …smh these people
They were invented *by* 9k BC :)

Can you go into a bit more detail on why you think these papers are such a home run for your point?
Where do you get 95% from? These papers don't really go into much detail on human performance, and 95% isn't mentioned in either of them.
These papers are about transformer architectures trained with next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply (see the sketch after this list for the kind of curve I mean).
These papers assume early stopping; have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot.)
These papers only consider finite-size datasets, and relatively small ones at that. E.g. how many "tokens" would a 4-year-old have processed? I imagine that question should be somewhat quantifiable.
These papers do not consider multimodal systems.
You talked about permanence; does a RAG solution not overcome this problem?
I think there is a lot more we don't know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic.
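For reference, the kind of curve I mean above - a sketch of the Chinchilla-style power-law fit, with invented numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

# The parametric form these papers fit is roughly
#   L(N, D) = E + A / N**alpha + B / D**beta
# Toy single-variable version, fit to made-up data points:
def loss(n, E, A, alpha):
    return E + A / n**alpha

n_params = np.array([1e6, 1e7, 1e8, 1e9])   # model sizes (made up)
losses   = np.array([4.2, 3.1, 2.4, 2.0])   # observed losses (made up)

(E, A, alpha), _ = curve_fit(loss, n_params, losses, p0=[1.5, 100.0, 0.3])

# Extrapolating this curve assumes the same architecture, loss, and data
# regime hold at the new scale - which is exactly the objection above.
```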


Unfortunately not, here is a little kitchen-sink type demo though: https://myst-nb.readthedocs.io/en/latest/authoring/jupyter-notebooks.html
MyST-NB is probably the place to start looking btw - forgot to mention it in the previous post
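The Sphinx side is tiny if you want to try it - a minimal conf.py sketch, assuming myst-nb is pip-installed:

```python
# conf.py
extensions = ["myst_nb"]      # renders .ipynb and MyST-markdown notebooks
nb_execution_mode = "auto"    # execute notebooks at build time if outputs are missing
```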


I use Sphinx with MyST markdown for this, and usually plotly express to generate the JS visuals. Jupyter Book looks pretty good as well
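Rough sketch of the plotly express side, using one of plotly's bundled demo datasets:

```python
import plotly.express as px

# One call builds the figure; write_html emits a self-contained interactive
# page you can embed in the Sphinx/Jupyter Book output.
fig = px.scatter(px.data.iris(), x="sepal_width", y="sepal_length", color="species")
fig.write_html("iris.html", include_plotlyjs="cdn")  # plotly.js loaded from CDN
```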


Feeding the troll 🤷‍♂️ "agenda driven"… what does that even mean? 😆
No one said other languages aren’t allowed. Submit a patch and prepare yourself for years of painstaking effort.


Devil's advocate: splatting, DLSS, and neural codecs, to name a few things that will change the way we make games


Privacy-preserving federated learning is a thing - essentially you train a local model and send the weight updates back to Google rather than the data itself… but it's also early days, so who knows what vulnerabilities may exist
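A toy numpy sketch of the federated-averaging idea, if that helps (invented linear-regression clients; real systems layer secure aggregation and differential privacy on top):

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1):
    """One gradient step of least squares on the client's own data."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad           # only this leaves the device

rng = np.random.default_rng(0)
# Five clients, each holding private (X, y) data the server never sees.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
w = np.zeros(3)

for _ in range(100):                      # each round: broadcast, train locally, average
    updates = [local_update(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)          # the server aggregates weights, not data
```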


You need rebase instead. Merge just creates useless commits and makes the diffs harder to comprehend (all changes are shown at once, whereas with rebase you fix the conflicts in the commit where they happened)
Then instead of your branch-off-a-branch strategy you just rebase daily onto main and you're golden when it comes time to PR
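Concretely (assuming your branch is called `feature` and the remote is `origin`): once a day run `git fetch origin` then `git rebase origin/main`, fix any conflicts commit by commit, and push with `git push --force-with-lease`, since rebase rewrites your branch's history.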
For me the Infinity subscription bypass stopped working, so I finally made the switch
Nahh, you're nitpicking there, large CSVs are gonna be compressed anyway
In practice I've never met a JSON I can't parse; every second CSV is unparseable