When a person makes a decision, and that decision results in harm, that person can often be called into court to explain their actions. What happens when an algorithmic decision results in harm? One option is to create new legal structures to govern algorithms; another is to create algorithms capable of meeting existing legal standards of proof. This talk presents three modes by which algorithmic decisions can be understood - transparency, statistical analysis, and explanation - and describes how each could fit into the Anglo-American legal tradition.
Mason Kortz is a clinical instructional fellow at the Harvard Law School Cyberlaw Clinic, part of the Berkman Klein Center for Internet & Society. His areas of interest include online speech, privacy, and the use of data products (big or small) to advance social justice. Mason has worked as a data manager for the Scripps Institution of Oceanography, a legal fellow in the Technology for Liberty Project at the American Civil Liberties Union of Massachusetts, and a clerk in the District of Massachusetts. He has a JD from Harvard Law School and a BA in Computer Science and Philosophy from Dartmouth College. In his spare time, he enjoys cooking, reading, and game design.
Source: https://cyber.harvard.edu/people/mkortz
Training by a recognized provider for the purposes of the mandatory continuing education of the Barreau du Québec, for a duration of 1 hour and 30 minutes. A certificate of participation representing 1 hour and 30 minutes of training will also be provided to notaries.