Interesting framing. How do you think about the need for blame and liability? Both the moral and institutional tests approach an answer (was a decision responsible and defensible?) but dance around who is responsible for any harm created. Is there a meaningful distinction between an AI decision and an AI action? And what happens when the decision is correct but the execution fails?
An AI decision is an opinion about what should happen. Whether that judgement then enters the real world as an action is a different construct. I touch on this in my article on Clinical Judgement.
If the AI acts, the system is accountable.
If the doctor acts, the doctor is responsible.