we are basically still as far away from AGI as we ever were
Nonsense.
No one knows how far we are from AGI, and anyone who tells you they do is either deluded or lying. It's impossible to know until either (a) we achieve it or (b) we have a theory of intelligence well-developed enough to explain it. And, actually, even knowing whether we've built AGI is difficult without that explanatory theory, because without it we can't even define what AGI is.
We might be decades away, or we might have already done it and just not noticed yet.
About the only thing you can say for certain is that there is no logical reason to believe we won't build AGI eventually. Unguided evolution, which is just random variation and competitive selection, achieved it. Our own knowledge creation processes are also variation and selection, but because they operate at an abstract level, without the need to modify a physical genotype and wait for phenotypic expression and outcome, they run many orders of magnitude faster. So we will succeed at creating AGI unless we collectively decide not to, and get very serious about enforcing a ban on AI research.
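To make the analogy concrete, here's a toy sketch of that bare variation-plus-selection loop in plain Python (the target, population size, and mutation rate are all made up for illustration): blind mutation plus keep-whatever-competes-best, repeated, reaches a goal with no designer anywhere in the loop.

    # Toy illustration of "random variation + competitive selection".
    # Nothing here is AGI; it just shows blind mutation plus selection
    # climbing toward a target without any guidance.
    import random

    TARGET = [1] * 32          # arbitrary "fit" genotype for the demo
    POP_SIZE = 50
    MUTATION_RATE = 0.02

    def fitness(genome):
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mutate(genome):
        # random variation: flip each bit with small probability
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    generation = 0
    while max(fitness(g) for g in population) < len(TARGET):
        # competitive selection: keep the fitter half...
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        # ...then refill the population with mutated copies of survivors
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        generation += 1

    print(f"target reached after {generation} generations")

Evolution runs that loop on physical genotypes over generations; conjecture and criticism run the same loop on abstractions in seconds, which is the whole speed argument.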
There is similarly no reason to believe that AI won't become superintelligent. Silicon-based intelligence has obvious advantages over the much less capable substrate that evolution cobbled together, and even if that weren't the case, we would just devise better substrates. So the only logical argument against superintelligence is that some law of physics dictates an upper bound on intelligence, and that peak human intelligence has already reached it. And even if there is such a limit, and we're at it, we should absolutely expect our AIs to reach the same level BUT be orders of magnitude faster than we are, thanks to better miniaturization and faster signal propagation. Imagine the smartest people in the world, but able to think and communicate 1000 times faster. Could we even distinguish that from superhuman intelligence? And it seems far more likely that there is no upper bound at all.
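For a sense of why even 1000x would be conservative, here's a back-of-envelope comparison; the figures are rough, commonly cited orders of magnitude, not measurements:

    # Rough numbers behind the "1000x faster" claim.
    neuron_signal_speed_m_s  = 100       # fast myelinated axons, roughly 1-100 m/s
    silicon_signal_speed_m_s = 2e8       # electrical/optical signaling, ~2/3 the speed of light

    neuron_firing_rate_hz    = 200       # typical peak sustained firing rate
    silicon_clock_rate_hz    = 3e9       # commodity CPU clock

    print(f"signal propagation advantage: ~{silicon_signal_speed_m_s / neuron_signal_speed_m_s:,.0f}x")
    print(f"switching-rate advantage:     ~{silicon_clock_rate_hz / neuron_firing_rate_hz:,.0f}x")

Raw switching and propagation speed obviously don't translate one-for-one into faster thought, but the headroom above a mere 1000x is enormous.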
The author of TFA may be right that some people are using discussion of AGI and ASI as a way to amass political power now, but that doesn't change the underlying reality that AGI and ASI are almost certainly coming, even if we have absolutely no idea when. Personally, I think it's more likely that the author is uncomfortable thinking about the implications of the arrival of AGI and ASI and prefers to retreat into political theories that keep humans in the pre-eminent position, maintaining the comfortable view that we only have to be concerned about what humans do to each other.