Even a hypothetically true artificial general intelligence would still not be a moral agent
That’s a deep rabbit hole that can’t be stated as a known fact. It’s absolutely true right now with LLMs, but at some point the line could be crossed. Whether, when, how, and by what definition that could happen has been a long-running debate that is nowhere near resolved.
It’s entirely possible that AGI/ASI could come about, be both superintelligent and self-conscious, and still have no sense of morality. But how can we, at human levels, even comprehend what’s possible? That’s the real danger: we have no idea what we could be heading toward.
The flaw in the question is that it assumes a clear dividing line between species. Evolutionary change is a continuous process. We only draw dividing lines where we see differences between long-dead specimens in the fossil record, or enough differences between living populations. The question has no answer, only a long explanation of why that isn’t how any of this works.