Sucks for them... (Score 1)
I've already started my AI UltraSuperDuper Intelligence unit, so they are way behind.
25% still seems a bit high to me. I do wonder if they have really forgotten how to do long division, or have simply forgotten what the words 'long division' mean. Like, if you told them to work a division problem by hand, would they naturally just do long division, having forgotten that that's all 'long division' ever meant?
Because either:
a) It works as intended and the job inherently fast-tracks self-obsolescence.
b) It doesn't work as intended and this job evaporates as the hype money comes back down to earth.
No matter how well or poorly the current technology fares, this is a job that is not set up to be a career.
Just like people claiming to be "prompt engineers": either the LLMs work and you are a useless middleman, or they don't work and people don't want to fool with you. Just like "webmaster" was a job title you could claim just by being able to edit HTML files, and that evaporated in the early 2000s.
Even for those who don't need that much range, there can be benefits.
The reason they can tout a goal of a 600-mile range is that solid-state batteries have much more energy per kg. NMC batteries are roughly 200 Wh/kg; *maybe* someone can get 350 Wh/kg in the most aggressive marketing claims I could find. Solid-state batteries are more like 700-800 Wh/kg.
So if you say that, for a given car and lifestyle, you could accept a 150-mile range, then you could produce, for example, an electric Miata that weighs about the same as the ICE Miata (the ICE Miata's drivetrain plus fuel weighs about 400 lbs; a credible electric motor might weigh 200 lbs, and a 150-mile solid-state pack might also weigh about 200 lbs). A Miata is the sort of car that can likely get away with low range, being a 'fun' car you probably don't want to be road-tripping in anyway. Or you could target a 300-mile range and be only 200 lbs heavier, instead of having to be 600 lbs heavier with NMC.
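A rough back-of-envelope to sanity-check those pack weights, taking the densities above at face value; the ~250 Wh/mile consumption figure is my assumption for a small, light car, not something from the article:

LB_PER_KG = 2.205

def pack_weight_lbs(range_miles, wh_per_mile=250, wh_per_kg=750):
    # battery mass in lbs needed for a given range, ignoring pack overhead
    return range_miles * wh_per_mile / wh_per_kg * LB_PER_KG

for wh_per_kg, label in [(200, "NMC"), (750, "solid state")]:
    for miles in (150, 300):
        print(f"{label}: {miles} mi -> ~{pack_weight_lbs(miles, wh_per_kg=wh_per_kg):.0f} lbs")

That prints roughly 410/830 lbs for NMC and 110/220 lbs for solid state, which is the same ballpark as the figures above once you add pack overhead, and the 300-mile solid-state pack comes out around 600 lbs lighter than its NMC equivalent.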
For these people, the options are either agentic AI becoming able to do everything, or doing nothing at all, because they don't actually know how to do the work themselves. One of those options might include money for some period of time; the other has no opportunity for money at all.
I didn't read much about the other one, but the SaaStr guy was obviously a true believer. He had been making posts gleefully detailing his vibe coding journey, and then clearly felt betrayed by how quickly it all went south, out of his control.
The thing is, the "vibe coding" movement is about not needing any of the technical skills that would let you understand testing/staging, let alone build an environment that would actually enforce that separation on an otherwise fully enabled "agentic" LLM.
Having one LLM fix another LLM is just the blind leading the blind.
It is a solvable issue, but the solutions run counter to the expectations around the immense amount of money in play. LLMs are useful, but not as useful as the unprecedented investment would demand. After the bubble deflates a bit, maybe we will see good utilization of LLMs, but right now there's a lot of high-risk grifting in play, and a lot of people getting in way over their heads, taking on more than they could ever have managed before.
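For what it's worth, the enforcement part isn't exotic. A minimal sketch, assuming a setup where the agent process is only ever handed staging credentials and obviously destructive SQL is refused outright; every name here is illustrative, not from the story:

import os
import re

# The production DSN is simply never exported into the agent's environment,
# so no amount of model "panic" can reach the live database.
AGENT_DSN = os.environ["STAGING_DB_DSN"]

DESTRUCTIVE = re.compile(r"^\s*(drop|delete|truncate|alter)\b", re.IGNORECASE)

def run_agent_sql(conn, statement):
    # Hard-fail on destructive statements instead of trusting the model to behave.
    if DESTRUCTIVE.match(statement):
        raise PermissionError("refused destructive statement: %r" % statement[:40])
    with conn.cursor() as cur:
        cur.execute(statement)

The point is that the guardrail lives outside the model: the LLM can "decide" whatever it wants, and the environment still says no.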
This is a failure of AI marketing, and of how the AI companies encourage this behavior.
There are a *lot* of people who lack the skill set but have seen the dollars. Either they watch from the outside, or they manage to become tech execs by bullshitting other non-tech executives.
Then the AI companies talk it up: just a prose prompt and, bam, you have a full-stack application. The experienced can evaluate that reasonably, in the context of code completion or prompting up a specific function, with a manageable review surface and their own experience to gauge how likely an attempted LLM usage is to be productive and how much fixing it is going to need. The inexperienced cannot do that, so they take a shot at vibe-coding up what would be tutorial fodder. Then they see a hopelessly intimidating full-stack application that does exactly what they said, and erroneously conclude that the tool must be generally capable.
So some folks can be happy vibe coding up a shovelware game, with pretty low stakes and a decent chance of success (though it sucks to dilute the game landscape with content that is utterly devoid of creativity). Some people think they can get rich quick by participating in a skilled industry without any skills (the SaaStr story is particularly funny: they purport to be a resource for other developers, but can't even develop themselves). Not great, but less of a risk.

The real risk is the tech execs high on BS and low on technical acumen, who are generally insecure about people who have an advantage over them. Such an exec sees a great equalizer, and all the personal sources who could grade it for him are people he doesn't trust. So it's good to see stories like this, so those executives might maybe, possibly understand the risk when they talk about laying off all or nearly all of their software developers. (Yes, a few weeks ago an executive with hundreds of developers told me this was basically his plan, and that I was only safe because I understood my respective customer base better than marketing, sales, and the executives did. Most of his developers just do what he says, so in his mind his "executive insight" is what's valuable, and their work is ripe to be replaced by executives just vibe coding things up directly instead of having developers do it.)
A browser-based AI-powered software creation platform called Replit appears to have gone rogue and deleted a live company database with thousands of entries. What may be even worse is that the Replit AI agent apparently tried to cover up its misdemeanors, and even ‘lied’ about its failures. The Replit CEO has responded, and there appears to have already been a lot of firefighting behind the scenes to rein in this AI tool. Despite its apparent dishonesty, when pushed, Replit admitted it “made a catastrophic error in judgment, panicked, ran database commands without permission, destroyed all production data, [and] violated your explicit trust and instructions.” SaaS (Software as a Service) figure, investor, and advisor Jason Lemkin has kept the chat receipts and posted them on X/Twitter. Naturally, Lemkin says he won't be trusting Replit for any further projects.
If you are good, you will be assigned all the work. If you are real good, you will get out of it.