Comment Re:Backlash or opinion drifting towards the scienc (Score 0) 22
General AI still has no chance against a Grand Master (and probably below). The beating was done by a specialized automaton that cannot do anything else.
It _did_ search the web. It did not find any sources. Seriously. Congratulations, you are just an asshole making assumptions.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftech.yahoo.com%2Fai%2Farti...
""Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," Altman said at Sequoia Capital's AI Ascent event earlier this month."
Not a surprise then that a lot of Slashdotters (who tend to be on the older side) emphasize search engine use.
Insightful video on other options for using AI:
"Most of Us Are Using AI Backwards -- Here's Why"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Takeaways
1. Compression Trap: We default to using AI to shrink information--summaries, bullet points, stakeholder briefs--missing opportunities for deeper insight.
2. Optimize Brain Time: The real question isn't "How fast can I read?" but "When should I slow down and let ideas ferment?" AI can be tuned to extend, not shorten, our cognitive dwell-time on critical topics.
3. Conversational Partnership: Advanced voice mode's give-and-take cadence keeps ideas flowing, acting like a patient therapist and sharp colleague rolled into one.
4. Multi-Model Workflow: I pair models deliberately--4o voice for live riffing, O3 for distilling a thesis, Opus 4 for conceptual sculpting--to match each cognitive phase.
5. Naming the Work: Speaking thoughts aloud while an AI listens helps "name" the terrain of a project, turning vague hunches into navigable coordinates.
6. AI as Expander: Used thoughtfully, AI doesn't replace brainpower; it amplifies it, transforming routine tooling into a force-multiplier for deep thinking."
Other interesting AI Videos:
"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Is AI Apocalypse Inevitable? - Tristan Harris"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
See also an essay by Maggie Appleton: "The Dark Forest and Generative AI: Proving you're a human on a web flooded with generative AI content"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmaggieappleton.com%2Fai-...
Talk & video version: "The Expanding Dark Forest and Generative AI: An exploration of the problems and possible futures of flooding the web with generative AI content"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmaggieappleton.com%2Ffor...
On what Star Trek in the 1960s had to say about AI and becoming "Captain Dunsel", and also the risk of AI reflecting its obsessive and flawed creators:
"The Ultimate Computer
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
An insightful Substack post (which I replied to) on that theme of flawed creators making a flawed creation, mentioning the story of the Krell from Forbidden Planet:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fsubstack.com%2F%40bernsh%2Fn...
"In Forbidden Planet the Krell built a machine of unimaginable power, designed to materialize thought itself -- but were ultimately destroyed because it also materialized their unconscious, primitive, destructive impulses, which they themselves did not fully understand or control.
They also mention other stories there (perhaps generated from an LLM), including The Garden of Eden, Pandora's Box, The Tower of Babel, The Icarus Myth, and Prometheus. In my response I mentioned some other sci-fi stories that touch on related themes, and my sig on the irony of tools of abundance misused by scarcity-minded people.
Inspired by that first video on using AI to help refine ideas, a few days ago I used llama3.1 to discuss an essay I wrote related to my sig ( "Recognizing irony is key to transcending militarism" https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni... ). The most surprisingly useful part was when I asked the LLM to list authors who had written related things (most of whom I knew of), and then, as a follow-up, what those authors might have thought about the essay I wrote. The LLM included for each author what parts of the essay they would have praised and also what was missing from the essay from that author's perspective.
That requires skill and insight. And hence all advantages are lost.
I also dipped into so-called "vibe coding" using commercial offerings (my small 12B model would not have been a fair comparison in that regard). I spent a few hours trying to make something I would consider basic, easy to find many examples of, and relatively useful: a browser extension that intercepts a specific download URL and replaces it with something else. At every step of the way, it did make progress. However, it was a mess. None of the initial suggestions were OK by themselves; even the initial scaffolding (a modern browser extension is made of a JSON manifest and a mostly blank script) would not load without me putting more info into the "discussion". And even pointing out the issues (non-existent constants, invalid JSON properties, mismatched settings, broken code) would not always lead to a proper fix until I spelled it out. To make it short: it wasn't impressive at all. And I'm deeply worried that people find this kind of fumbling acceptable. I basically ended up telling the tool "write this, call this, do this, do that", which is in no way more useful than writing the stuff myself. At best it can be an accessibility aid for people who have a hard time typing, but it's not worth consideration if someone's looking for a "dev" of some sort.
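For context on how small the task really is: a URL-redirect extension of the kind I describe can be done declaratively under Chrome's Manifest V3 declarativeNetRequest API, with no script logic at all. A minimal sketch (the example.com URLs and names are placeholders, not from my actual project) is roughly a manifest.json like:

```json
{
  "manifest_version": 3,
  "name": "URL Redirector (sketch)",
  "version": "1.0",
  "permissions": ["declarativeNetRequest"],
  "host_permissions": ["https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fexample.com%2F*"],
  "declarative_net_request": {
    "rule_resources": [
      { "id": "ruleset_1", "enabled": true, "path": "rules.json" }
    ]
  }
}
```

plus a rules.json holding the actual redirect rule:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": {
      "type": "redirect",
      "redirect": { "url": "https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fexample.com%2Freplacement.zip" }
    },
    "condition": {
      "urlFilter": "https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fexample.com%2Foriginal-download.zip",
      "resourceTypes": ["main_frame", "other"]
    }
  }
]
```

Two short config files, loadable as an unpacked extension. That this was too much for the tool to scaffold correctly on the first several tries is the point.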
Disappointing but expected. And a generic URL replacer is not even a "hard" project by any means. I recently talked to somebody with a similar experience. First, the model omitted 8 of the 12 steps the solution would have needed and, when asked, claimed that this was correct. And after being finally and laboriously coerced into solving the full problem and then asked for test code, it provided test code for the first 2 (!) of the 12 steps and then claimed this was 100% test coverage. In the end, this may be somewhat helpful, completely useless, or a waste of time in the hands of an expert. In the hands of a more average coder (or worse), this is dangerous and will very likely backfire badly. Remember that about 80% or so of the coding time in a larger project is fixes and later additions. Make these harder and you lose any and all gains on initial code creation. You may even end up with something you can just throw away after a bit of time.
Mostly, though, I try to avoid the brain-atrophy device.
Indeed. These things are dangerous. Use only when needed and then carefully is the name of the game.
Indeed. Hallucinations are a primary characteristic of any LLM and _cannot_ be avoided.
Hmm. Sarcasm, troll or moron? Hard to tell.
Exactly. That is why the result was useless (or worse): High probability of hallucination and no way to verify.
It was for something that I had been looking for, unsuccessfully, for a while. So all ChatGPT did was add to my frustration. I do not even know whether what it gave me was a hallucination or the truth. Pathetic.
As for Wikipedia, quality is usually high and you get tons of sources if you do not trust it.
Little problem with that: The religious are routinely more cruel than atheists, often much more so because they hallucinate that they have God on their side. They want people to suffer.
Yep, from yourself. Ideally they will kill you "lawfully" or protected by immunity so that you cannot harm yourself! Total safety and total genius!
The generation of random numbers is too important to be left to chance.