Fourth decade here. Up until a few months ago I would have agreed with virtually all of the negative comments in here, but after a re-org I'm now on a team supporting multiple AI products and have become immersed in all things AI, including vibe coding.
For vibe coding, I've had mixed results, but I want to make a couple of important points. First, the whole vibe-coding landscape is evolving very quickly: new IDEs and CLI tools are being announced almost daily. Second, the backend architecture of these tools is also evolving very quickly. Support for MCP (Model Context Protocol) servers, which let the LLM retrieve information it doesn't have internally, can eliminate a lot of hallucinations and yield higher-quality results. Many of the tools now have backends that take a request, analyze it, and delegate it to an appropriate specialist LLM, which is faster and gives better results than one giant monolithic LLM trying to do everything: a jack of all trades, master of none.
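To make the MCP point concrete, here's a minimal sketch of an MCP server using the official Python SDK's FastMCP helper. The tool itself (a made-up package version lookup with hypothetical data) is just an illustration, but the shape is what matters: the model calls the tool instead of guessing.

    # Minimal MCP server sketch (official `mcp` Python SDK, FastMCP helper).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("version-lookup")

    # Hypothetical stand-in for any live source the model can't know
    # about: an internal API, a database, current docs, etc.
    LATEST = {"requests": "2.32.3", "numpy": "2.1.0"}

    @mcp.tool()
    def latest_version(package: str) -> str:
        """Return the latest known version of a package."""
        return LATEST.get(package, "unknown")

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so a client (IDE, CLI agent) can call the tool

That tool call, rather than the model's training data, is where the hallucination reduction comes from.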
From what I've seen so far, the keys to successful vibe coding are learning the tool you're using, including its strengths and weaknesses, and learning how to write good prompts. Since no tool excels at everything, it helps to know which one to reach for on a given task. You may find that one tool is great for producing a one-shot throwaway utility, while another is best for building a website with an attractive and easy-to-use UI.
Let's not forget that GPT-3.5, the model behind ChatGPT when OpenAI first released it, came out only three years ago. We're still very early in the evolution of generative AI.