When we say artificial intelligence, we mean a conscious being. That requires self-awareness, a sense of body ownership, and a sense of self-agency, which together imply a separation from the environment and from others (a sense of self). Human-level intelligence additionally requires a high level of abstraction and the capacity for meta-cognition. The neural correlates of all of these are being actively studied today.
Humans sometimes have introspective blindness (not knowing the origin of an idea, say), and... therefore an AI with human-level (or greater) intelligence cannot be created? I'm not sure I follow your reasoning. You are a conscious being, cognitive biases aside; rest assured, however cobbled together your sense of a cohesive identity is.
Forgive me, but I suspect you've never built a machine learning model. I have, and a list of algorithms isn't what's impressive about it. ML is a type of learning that isn't explicitly programmed; it is not "hard-coded". (That's already a little bit cool, but it's hardly all we're capable of currently.)
We also have deep learning (which uses layered neural networks), loosely analogous to our own nervous system. A cursory look finds me GPT-3, which is capable of "performing DIVERSE tasks without specific training". Not too shabby. That's where we are currently: fairly advanced AI, but not AGI yet.
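To make the "not hard-coded" point concrete, here's a minimal sketch (my own toy example, not anything from the article): nowhere below is the logical AND rule written into the program. The model starts with random weights and learns the rule purely by adjusting them against example data.

```python
import numpy as np

# Training examples for logical AND: inputs and desired outputs.
# The rule itself is never coded; only examples are given.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, initially random
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The "learning": repeated small weight updates driven by
# prediction error on the examples (gradient descent).
for _ in range(5000):
    p = sigmoid(X @ w + b)
    w -= X.T @ (p - y) / len(y)
    b -= np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds.tolist())
```

A deep learning model is the same idea scaled up: many such units stacked in layers, with the same error-driven weight updates flowing through all of them.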
Both the study of consciousness and AI research are further along than you seem to believe.
However, my article wasn't about current technology or our current understanding of consciousness; it was about the future, and what experts in the field think: "In 2009, 21 AI experts participating in AGI-09 conference were surveyed. Experts believe AGI will occur around 2050, and plausibly sooner."
According to the experts, AGI is not beyond the realm of possibility, but they obviously cannot predict an exact date.
Tell you what, if it's achieved within our lifetime, you owe me $20.