• 0 Posts
  • 28 Comments
Joined 5 years ago
Cake day: July 8, 2020

  • > Just use your own brain and eyeballs.

    We’re going to need something a bit more robust than that, I’m afraid. People who fall for conspiracy after conspiracy are using their own brain and eyeballs. Even schizophrenics use their own brain and eyeballs.

    If someone’s childhood photos show snow-free Christmases, is that enough for them to declare climate change a hoax?

    People need to understand the scientific method better - what a hypothesis is, the importance of falsifiability. You can’t determine what is true and what is false by looking inside yourself, or by trusting your gut. There is no self-evident truth. That is the universe we live in.




  • The merits are real. I do understand the deep mistrust people have for tech companies, but there’s far too much throwing out of the baby with the bath water.

    As a solo developer, LLMs are a game-changer. They’ve allowed me to make amazing progress on some of my own projects that I’ve been stuck on for ages.

    But it’s not just technical subjects that benefit from LLMs. ChatGPT has been a great travel guide for me. I uploaded a pic of some architecture in Berlin and it went into the history of it; I asked it about some damage to an old church in Spain, which turned out to be from the Spanish Civil War, where revolutionaries had been mowed down by Franco’s firing squads.

    Just today, I was getting help from an LLM for an email to a Portuguese removals company. I sent my message in English with a Portuguese translation, but the guy just replied back with a single sentence in broken English:

    “Yes a can , need tho mow m3 you need delivery after e gif the price”

    The first bit is pretty obviously “Yes I can”, but I couldn’t really be sure what he was trying to say with the rest of it. So I asked ChatGPT, which responded:

    It seems he’s saying he can handle the delivery but needs to know the total volume (in cubic meters) of your items before he can provide a price. Here’s how I’d interpret it:

    “Yes, I can [do the delivery]. I need to know the [volume] in m³ for delivery, and then I’ll give you the price.”

    Thanks to LLMs, I’m able to accomplish so many things that would have previously taken multiple internet searches and way more effort.




  • I certainly am not surprised that OpenAI, Google and so on are overstating the capabilities of the products they are developing and currently selling. Obviously it’s important for the public at large to be aware that you can’t trust a company to accurately describe products it’s trying to sell you, regardless of what the product is.

    I am more interested in what academics have to say, though. I expect them to be more objective and to have more altruistic motivations than your typical marketeer.

    The reason I asked how you would define intelligence is simply that it’s an area of thought that has fascinated me since long before this new wave of LLMs hit the scene. It’s also one without clear answers, where different people will have different insights and perspectives. Several distinct concepts are often blurred together: intelligence, being clever, being well educated, and consciousness. I personally consider these separate concepts; while they may have some overlap, they are nevertheless all very different things. I have met many people with very little formal education who are nonetheless very intelligent.

    As for AI and LLMs, I believe an LLM does encapsulate some degree of genuine intelligence - it appears to somehow encode a model of the universe in its billions of parameters, and it can meaningfully respond to natural language questions on almost any subject - but an LLM is unquestionably not a conscious being.


  • You’re right that we need a clear definition of intelligence if we are to make any predictions about achieving AGI. The researchers behind this article appear to mean “human-level cognition” which doesn’t seem to be a particularly objective or useful yardstick. To begin with, which human are we talking about? If they’re talking about an idealised maximally intelligent human, then I don’t think we should be surprised that we aren’t about to achieve that. The goal is not to recreate human cognition as if that’s some kind of holy grail. The goal is to make intelligent systems which can give results which are at least as good as what would be produced by a skilled and well-trained human working on the same problem.

    Can I ask you how you would define intelligence? And in particular, how would you - if you would at all - differentiate intelligence from being clever, or from being well educated?



    What do you think evolved first - verbal communication or thoughts? Presumably we were able to think before we could speak, no? The words we have in our language are like pointers to internal concepts, and it seems to me that those internal concepts would have existed before language was a thing. The mouth-sounds, as you put it, are not the thoughts themselves, but rather labels for specific concepts. It might be possible, and even convenient, to think in mouth-sounds, but it’s not necessary for logical thought.




  • > Tramways and Light Rails are much more silent

    From inside, maybe? Berlin, where I live, has lots of trams all over the city. I admit I rarely use them as I much prefer my bicycle, but they are seriously noisy. During the day the noise is somewhat lost in the general cacophony of city life, but in the evenings you can hear them rattling and crashing along from streets away. And if you live on a road with a tramline, you just have to accept this horrible metal-on-metal screeching and rattling at almost all hours.







  • > just fancy phone keyboard text prediction.

    …as if saying that somehow makes what ChatGPT does trivial.

    This response, which I wouldn’t expect from anyone with true understanding of neural nets and machine learning, reminds me of the attempt in the 70s to make a computer control a robot arm to catch a ball. How hard could it be, given that computers at that time were already able to solve staggeringly complex equations? The answer was, of course, “fucking hard”.

    > You’re never going to get coherent text from autocomplete and nor can it understand any arbitrary English phrase.

    ChatGPT does both of those things. You can pose it any question you like in your own words and it will give a meaningful and often accurate answer. What it can accomplish is truly remarkable, and I don’t get why anybody but the most boomer luddite feels this need to rubbish it.
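    For contrast, phone-keyboard-style prediction really is trivial. A minimal bigram sketch (toy corpus, function names, and all details are mine, just for illustration) shows why plain autocomplete can’t produce coherent text - it only ever looks at the single previous word:

```python
from collections import defaultdict

# Toy bigram "autocomplete": for each word, count which words followed
# it in the training text, then greedily emit the most common successor.
# This is roughly the level a phone keyboard operates at.
def train(text: str) -> dict:
    model = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict(model: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat the cat ate the fish"
model = train(corpus)
print(predict(model, "the"))  # "cat" - it followed "the" most often
print(predict(model, "sat"))  # "on"
```

    No matter how much text you train it on, it has no memory of anything before the last word, so long-range coherence is impossible - which is exactly why “fancy autocomplete” undersells what an LLM does.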


  • > I cannot wait until architecture-agnostic ML libraries are dominant and I can kiss CUDA goodbye for good

    I really hope this happens. After being on Nvidia for over a decade (a 960 for 5 years, and similar midrange cards before that), I finally went AMD at the end of last year. Then of course AI burst onto the scene this year, and I’ve still not managed to get Stable Diffusion running - to the point that it’s made me wonder whether I made a bad choice.
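    For anyone in the same boat: the ROCm builds of PyTorch deliberately reuse the torch.cuda API, so the usual availability check still applies on AMD. A minimal sketch of telling the backends apart (the helper function and its wording are my own, just for illustration):

```python
def describe_backend(cuda_available: bool, hip_version) -> str:
    """Classify what torch.cuda.is_available() actually found.
    hip_version is torch.version.hip: a version string on ROCm
    builds of PyTorch, None on CUDA or CPU-only builds."""
    if cuda_available and hip_version:
        return "AMD GPU via ROCm (exposed through the torch.cuda API)"
    if cuda_available:
        return "NVIDIA GPU via CUDA"
    return "no GPU acceleration (CPU only)"

try:
    import torch
    print(describe_backend(torch.cuda.is_available(), torch.version.hip))
except ImportError:
    print("PyTorch is not installed")
```

    If this reports CPU only on an AMD card, the likely culprit is having installed the default (CUDA) PyTorch wheel rather than the ROCm one.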