Of course it is. It was trained on past decisions, which were predominantly terrible, so the decisions it makes going forward will be terrible too.
In that sense it works exactly as intended: it makes the same horrible decisions at a fraction of the cost of underpaid humans.