1. Even the AI systems (and I've checked with Claude, three different ChatGPT models, and Gemini) agree that AGI is not possible with the software path currently being followed.
2. This should be obvious. Organic brains have properties not present in current neural nets: localised, regionalised, and even globalised feedback loops within an individual module of the brain. The brain doesn't pause for inputs; it mixes any external inputs with synthesised inputs. It can run through various possible forecasts of the future and then select from among them, and it can perform original synthesis between memory constructs and any given forecast to create scenarios for which no input exists - which is how those aforementioned synthesised inputs get produced. There simply isn't a way to have multi-level infinite loops in existing NN architectures.
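To make the loop structure concrete, here is a toy sketch (not a neural net, and not a claim about how brains are actually implemented) of the pattern described above: a module that never pauses for input, blends any external signal with its own synthesised one, rolls out several candidate forecasts, and selects among them. All the function names, the mixing weight, and the scoring rule are illustrative assumptions.

```python
import random

def synthesise(state):
    # Internally generated input, derived only from current state.
    return state * 0.9 + random.uniform(-0.1, 0.1)

def forecast(state, steps=3):
    # Roll the state forward with no external input at all.
    for _ in range(steps):
        state = synthesise(state)
    return state

def tick(state, external=None, candidates=5):
    internal = synthesise(state)
    # Mix the external input with the synthesised one when present;
    # otherwise the loop runs purely on internal input.
    mixed = internal if external is None else 0.5 * (internal + external)
    # Generate several possible futures and select one
    # (here, arbitrarily, the one closest to a fixed goal value).
    goal = 1.0
    futures = [forecast(mixed) for _ in range(candidates)]
    return min(futures, key=lambda f: abs(f - goal))

random.seed(0)
state = 0.0
for t in range(10):
    ext = 1.0 if t % 3 == 0 else None  # input arrives only sometimes
    state = tick(state, ext)
```

The key structural point is that `tick` runs every step regardless of whether `external` is present - something a feed-forward pass, which only fires when given an input, has no analogue for.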
3. The brain doesn't perceive external inputs the way NNs do - as fully-finished things - but rather as index pointers into memory. This is absolutely critical. What you see, hear, feel, etc. - none of that comes from your senses. Your senses don't work like that. They merely tell the brain which constructs to pull, and the brain builds the context window entirely from those memories, making no reference to the actual external inputs at all. This is actually a good thing, because it allows the brain to evolve and to abstract out context - things NNs can't do precisely because they don't work this way.
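The "index pointer" idea can be sketched in a few lines: the raw signal is used only to look up a stored construct and is then discarded, and the resulting context is assembled entirely from memory. The memory contents, the lookup rule, and the field names here are all made up for illustration.

```python
# Stored constructs; the context is built from these, never from the raw signal.
memory = {
    "red_round": {"label": "apple", "affordance": "eat"},
    "loud_sudden": {"label": "alarm", "affordance": "attend"},
}

def sense_to_key(raw_signal):
    # The senses reduce the raw signal to a key (an "index pointer"),
    # after which the signal itself is never consulted again.
    return "red_round" if raw_signal["hue"] < 30 else "loud_sudden"

def build_context(raw_signal):
    key = sense_to_key(raw_signal)
    construct = memory[key]
    # The context window references only the retrieved memory construct.
    return {"percept": construct["label"], "next": construct["affordance"]}

ctx = build_context({"hue": 10, "volume": 0.2})
# ctx == {"percept": "apple", "next": "eat"}
```

Contrast this with a standard NN forward pass, where the raw input values flow all the way through the computation instead of serving only as a lookup key.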
Until all this is done, OpenAI will never produce AGI.