@dahukanna
It seems to me that there are two broad alternative explanations of the difference between general AI and human intelligence (HI).
1. There is something 'other', 'additional', 'outside' the mechanics of memory, storage, and processing that constitutes consciousness and therefore intelligence. If we subscribe to that model, there's little possibility that AI can achieve or exceed parity with HI until we understand what that 'something' is. We may get some simulacrum of HI, but it will always miss the mark. Or
2. HI, and its consciousness, is a sophisticated emergent property of basically simple foundational mechanics. AI may have some of those mechanics but lack other key mechanisms, such as, perhaps (personal speculation):
2.1 Building certainty from repeated experience, without a compilation-time cut-off - an open-ended feedback opportunity. Certainty grows and new experience can still influence it, but the stronger the trust & certainty, the smaller the delta of influence from each new experience (PTSD might be a manifest exception to this model). A minimal sketch of this loop follows after this list.
2.2 Bootstrapping from concrete sensory input, trust is built. These trusted foundations might combine into trusted combinatorial abstractions, such as 'table' or 'dog'. These would blend sensory stuff like edge, colour, shape, smell, and texture with learned stuff, like the concept of named external entities: 'me', 'mummy', 'table', 'dog', etc. I suspect the trust in ever more fractally combinatorial abstractions builds as experience happens. It builds a tower, provided the trust in the foundations is strong, and that trust extends up the abstraction stack, even unto genuinely abstract concepts like religion, philosophy, art/music appreciation, nostalgia, bigotry, political affiliation, and other memes (in Dawkins' original sense) - a toy version is sketched below. If this model is a reasonable approximation of the basis for human consciousness - merely an emergent behaviour - then AI might get there with:
a) feedback iterations, forever;
b) parity of importance/significance between concrete input and the infinite hierarchy of subsequent abstractions built on experiential trust over time (see a).
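
To make 2.1 concrete, here's a minimal sketch of that open-ended certainty loop. Everything in it - the `Belief` class, the `alpha` parameter, the damped update rule - is my own illustrative assumption, just one way the "shrinking delta" behaviour could fall out of simple mechanics:

```python
# A minimal sketch of the open-ended certainty loop in 2.1. The class name,
# the alpha parameter, and the damped update rule are illustrative
# assumptions, not a claim about the actual mechanism.

class Belief:
    """Trust in a concept, updated forever: no compilation-time cut-off."""

    def __init__(self, name: str, alpha: float = 0.1):
        self.name = name
        self.alpha = alpha      # base sensitivity to any single experience
        self.certainty = 0.0    # trust accumulated so far, in [0, 1)

    def experience(self, confirms: bool) -> None:
        """Fold one new experience into the belief.

        The influence of each experience is damped by existing certainty:
        the stronger the trust, the smaller the delta from new input.
        """
        target = 1.0 if confirms else 0.0
        influence = self.alpha * (1.0 - self.certainty)
        self.certainty += influence * (target - self.certainty)
        # Note: a single overwhelming event (the PTSD caveat in 2.1) has
        # no channel here; it would have to bypass this damping entirely.

dog = Belief("dog")
for _ in range(50):
    dog.experience(confirms=True)    # repeated experience builds trust
print(round(dog.certainty, 3))       # high certainty by now (~0.83)
dog.experience(confirms=False)       # one contrary episode barely moves it
print(round(dog.certainty, 3))       # nudged down by only ~0.01
```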
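
And a toy rendering of the abstraction tower in 2.2, again purely speculative: trust in a combinatorial concept is capped by the trust in its parts, so strong foundations let trust extend up the stack. The `min()` rule and all the concept names here are my choices, not a claim about cognition:

```python
# A toy rendering of the abstraction tower in 2.2: trust in a combinatorial
# concept is capped by trust in the parts it is built from. The min() rule
# and the concept names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parts: list["Concept"] = field(default_factory=list)  # empty => raw sense input
    direct_trust: float = 0.0   # trust earned from concrete sensory experience

    def trust(self) -> float:
        """Trust extends up the stack, limited by the weakest foundation."""
        if not self.parts:
            return self.direct_trust
        return min(part.trust() for part in self.parts)

# Sensory foundations, bootstrapped from concrete input.
edge    = Concept("edge",    direct_trust=0.95)
colour  = Concept("colour",  direct_trust=0.90)
texture = Concept("texture", direct_trust=0.90)
smell   = Concept("smell",   direct_trust=0.80)

# Combinatorial abstractions, fractally built on those foundations.
dog       = Concept("dog",       parts=[edge, colour, texture, smell])
nostalgia = Concept("nostalgia", parts=[dog])   # a genuinely abstract layer
print(dog.trust(), nostalgia.trust())           # 0.8 0.8 - capped by 'smell'
```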