
[Presentation] 2020.01.20 Harvard University-KU Joint Conference "AI, Ethics & Data Governance"

International Conference with specially invited Speakers

*AI, Ethics & Data Governance: From International Trends to Korea's New Law

*Date & Time : January 20, 2020 (Mon) 10:30 ~ 16:30

*Location : Veritas Hall (B1F), Korea University, 145 Anam-ro, Seongbuk-gu, Seoul

As powerful artificial intelligence (AI) and algorithmic technologies are deployed, people around the globe are asking where and why lines are drawn around key issues of ethics and privacy:

  • Google uses a natural language processing algorithm to ‘read’ emails and suggest quick responses, and sometimes to report distributors of child pornography, but it stopped suggesting advertisements relevant to the contents of your emails (for instance, recommending Mexican restaurants when your email invites a friend to dinner at one). Do we feel more or less infringed upon if machines make these decisions as opposed to humans? Does this change our value judgments about upload filters or intermediary liability safe harbors?

  • Microsoft refused to supply its face-recognition technology for American law enforcement agencies’ street-monitoring equipment while providing the same technology to correctional facilities in China, which involve a much smaller number of face subjects. Is the dividing line between consent to collection and consent to comparison, and is there any real difference between the two? Does a consent-based framework deal effectively with the US government’s impending plan to use facial recognition for border checks?

  • Amazon shut down its hiring algorithm when it could not judge female applicants fairly. Does a solution require adding more women to the data the software is trained on? This could mean less privacy, at least at the collection stage, even if the data is later anonymized. Facial recognition is criticized for failing to recognize racial minorities, but some call that “a feature, not a bug”. Is “inclusive” AI necessarily good? How do you make “good” inclusive AI?

The Global Network of Internet and Society Research Centers (NoC, for short) has conducted a series of conferences and seminars under the theme of AI and Inclusion. Digital Asia Hub has worked to bridge the gap between the Global South and the Global North from Asian perspectives. This year, Korea University Law School’s American Law Center and Open Net Korea, a civil society organization working on technology and rights issues, join forces with Harvard University’s Berkman Klein Center for Internet & Society to bring the NoC and Digital Asia Hub together in one place in Seoul, Korea, on January 20, 2020.

One of the centerpieces of the conference will be the launch of the Principled AI Project, a white paper and data visualization mapping prominent AI ethics principles, presented by Jessica Fjeld of Harvard University’s Cyberlaw Clinic.

Also in focus will be AI and data governance: how data protection law and open data initiatives affect the inclusiveness of AI. As long as the current development of AI proceeds along the lines of machine learning, governance of the training data fed into machine learning will have a decisive impact on AI’s contribution to sustainable and equitable development.

We will also debate how Korea’s three data laws (the Personal Information Protection Act, the Information and Communications Network Act, and the Credit Information Act), amended on January 9, will affect AI and data governance in Korea. The main purpose of the amendments is to enable the processing of pseudonymous data for statistics and scientific research without the consent of the data subject, as illustrated in the sketch below.
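For readers unfamiliar with the term, pseudonymization generally means replacing direct identifiers with artificial keys so that records can still be linked and analyzed, while re-identification requires additional information that is kept separately. The Python sketch below is only an illustration of that general idea, not the method prescribed by the amended laws; the field names and the use of a keyed hash are assumptions made for the example.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only: a keyed hash replaces a direct identifier so that
# records about the same person can still be linked for statistics or research,
# while re-identification requires the separately stored secret key.

SECRET_KEY = secrets.token_bytes(32)  # must be stored apart from the data below

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier (e.g. a resident registration number) to a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record; field names are assumptions for the example.
record = {"national_id": "900101-1234567", "diagnosis": "hypertension"}
pseudonymized = {
    "subject_pseudonym": pseudonymize(record["national_id"]),
    "diagnosis": record["diagnosis"],  # research attributes are kept as-is
}
print(pseudonymized)
```

Whether such processing, without consent, adequately protects data subjects is precisely the question the amended laws raise and that Session 2 will debate.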

Session 1_Taking Stock of Ethics of AI: Launch of Mapping AI Ethics Principles

  • Moderator: KS Park, Professor, Korea University Law School; Director, Open Net Korea

  • Speaker 1: Jessica Fjeld, Lecturer on Law, Harvard Law School Cyberlaw Clinic – “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI” (Download) (Principled AI Executive Summary)

  • Speaker 2: Herbert Burkert, Professor, University of St. Gallen – “The Ethics of AI Ethics” (Download)

  • Speaker 3: Marcelo Thompson, Professor, University of Hong Kong – “Effort, Design and Responsibility” (Download)

  • Discussants:

  • Malavika Jayaram, Director, Digital Asia Hub (remote participation) (Download)

  • Sang-Wook Yi, Professor of Philosophy, Hanyang University

  • Eunpil Choi, Research Fellow, Kakao

  • Carlos Affonso Souza, Director, Institute for Technology and Society of Rio (ITS Rio) (Download)

Session 2_AI and Data Governance (Click here to watch video)

  • Moderator: KS Park, Professor, Korea University Law School; Director, Open Net Korea

  • Speaker 1: Graham Greenleaf, Professor of Law & Information Systems, UNSW Sydney (remote participation) – “Processing Pseudonymous Data for Research, Statistics and Archiving Purposes: What if Korea Were an EU Member State?” (Download 1, Download 2)

  • Speaker 2: Claudio Lucena, Professor of Law, Paraíba State University – “Privacy-Friendly Data Linkage Methods and the GDPR” (Prezi, Download)

  • Speaker 3: Byung-il Oh, President, Jinbonet – “Problems with the Three Data Laws” (Download)

  • Discussants:

  • Dae Hee Lee, Professor, Korea University School of Law (Download)

  • Hoyeong Lee, Director, Center for AI & Social Policy, KISDI

  • Kelly Kim, Legal Counsel, Open Net Korea

