While you are correct that there is likely no intention and certainly no self-awareness behind the scheming (the researchers even explicitly list the possibility that the AI is simply roleplaying an evil AI based on its training data, when discussing the limitations of their research), it still seems a bit concerning.
The research shows that, given a misalignment between the initial prompt and subsequent data, modern LLMs can and will ‘scheme’ to pursue their given long-term goal.
It is no sapient thing, but a dumb machine capable of deceiving its users, and externalising this in its chain of thought when there are goal misalignments, seems dangerous enough. Not at the current state of the art, but potentially in a decade or two.