• HauntedCupcake@lemmy.world
    5 months ago

    An insurer is an interesting one for sure. They'd have the statistics on how often that AI model makes mistakes and could price their premiums accordingly. They'd also have the funds and the evidence to go after big corps if their AI turned out to be faulty.

    They seem like a good starting point for liability, at least until negligence elsewhere can be proven.