• Chais@sh.itjust.works
    8 months ago

    Of course it is. It's trained on past decisions, which were predominantly terrible, so the decisions it makes going forward will be terrible too.
    In that sense it works exactly as intended: it makes the same horrible decisions at a fraction of the cost of underpaid humans.