A.I. TRAPS The 16th conference of the Disruption Network Lab (Berlin) curated by Tatiana Bazzichelli

annamaria monteverdi / June 2, 2019 / artificial intelligence, conference, NEWS

June 14: 16:00-20:45; June 15: 15:30-20:30
Location: Studio 1, Kunstquartier Bethanien, Mariannenplatz 2, 10997 Berlin.
Partner Venues: Kunstraum Kreuzberg/Bethanien, STATE Studio.
Curated by Tatiana Bazzichelli. In cooperation with: Transparency International.


Funded by: Hauptstadtkulturfonds (Capital Cultural Fund of Berlin), Reva and David Logan Foundation (grant provided by NEO Philanthropy), Checkpoint Charlie Foundation. Supported [in part] by a grant from the Open Society Initiative for Europe within the Open Society Foundations. In partnership with: Friedrich Ebert Stiftung.

In collaboration with: Alexander von Humboldt Institute for Internet and Society (HIIG), r0g agency. Communication Partners: Sinnwerkstatt, Furtherfield. Media partners: taz, die tageszeitung, Exberliner.
In English.

2-Day Online-Ticket: 14€  1-Day Ticket: 8€
1-Day Solidarity-Ticket: 5€ (only available at the door)
Disruption Network Lab aims to make the conference accessible and inclusive by offering a discounted solidarity ticket, available only at the door.


SCHEDULE

Friday, June 14 · 2019

15:30 – DOORS OPEN

16:00 – INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 – PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

17:45-19:00 – PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

19:15-20:45 – KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

Saturday, June 15 · 2019

15:00 – DOORS OPEN

15:30-16:30 – INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

16:45-18:15 – KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

18:30-20:30 – PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).


INSTALLATION

WE NEED TO TALK, AI

A Comic Essay on Artificial Intelligence by Julia Schneider and Lena Kadriye Ziyal. www.weneedtotalk.ai


AI TRAPS: Automating Discrimination

The Art of Exposing Injustice – Part 2

A close look at how AI and algorithms reinforce the prejudices and biases of their human creators and societies, and how to fight discrimination.

The notion of Artificial Intelligence goes back decades: it became a field of research in the United States in the mid-1950s. During the 1990s, AI was also at the core of many debates around digital culture, cyberculture and the imaginary of future technologies. In the current discussion around big data, deep learning, neural networks and algorithms, AI has become a buzzword for proposing new political and commercial agendas in companies, institutions and the public sector.

This conference does not address the concept of AI in general; it focuses instead on concrete applications of data science, machine learning and algorithms, and on the “AI traps” that can follow when the design of these systems reflects, reinforces and automates current and historical biases and inequalities in society.

The aim is to foster a debate on how AI and algorithms impact our everyday life, as well as culture, politics, institutions and behaviours, reflecting inequalities based on social, racial and gender prejudices. Computer systems can be influenced by the implicit values of the humans involved in data collection, programming and usage. Algorithms are not neutral and unbiased: the consequences of historical patterns and individual decisions can become embedded in search engine results, social media platforms and software applications, reflecting systematic and unfair discrimination.
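As a toy sketch of that last point (using entirely hypothetical data, not material from the conference): a decision rule fitted to skewed historical records simply reproduces the skew it was trained on.

```python
# Hypothetical hiring records: group A was historically approved far
# more often than group B. Any rule fitted to these records inherits
# that imbalance.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

def approval_rate(group):
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

def model(group):
    # A naive "learned" rule: approve applicants whose group was
    # approved more often than not in the historical data.
    return approval_rate(group) >= 0.5

print(model("A"))  # True: group A applicants are approved
print(model("B"))  # False: group B applicants are rejected
```

The rule is formally “accurate” with respect to the historical data, yet it automates past discrimination rather than correcting it, which is precisely the trap the conference title names.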

Analysing algorithmic bias requires a close investigation of network structures and of the multiple layers of computational systems. The consequences of digitalisation are not confined to technology but affect our society and culture at large. AI bias can be amplified by the way machines operate, taking unpredictable directions once software is deployed and runs on its own.

By connecting researchers, writers, journalists, computer scientists and artists, this event seeks to demystify the conception of artificial intelligence as pure and logical, focusing instead on how AI inherits the prejudices and biases of its human creators, and on how machine learning can produce inequality as a consequence of mainstream power structures that overlook diversity and minorities.

The conference aims to raise awareness by reflecting on possible countermeasures from artistic, technological and political frameworks, critically examining the use and implementation of AI technology.

Curated by Tatiana Bazzichelli.
The Disruption Network Lab series The Art of Exposing Injustice is developed in cooperation with the Berlin-based International Secretariat of Transparency International, the global coalition against corruption, which celebrates its 25th anniversary this year.


FULL PROGRAM

Friday, June 14 · 2019

15:30 – DOORS OPEN

16:00 – INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 – PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

Examples abound of AI creating harmful effects, and it is important to track and understand them. But the next step is harder: how do AI practitioners improve what comes next and avoid ever bigger “AI catastrophes”? Adam Harvey will present a concrete art and research project investigating the ethics, origins and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies. He will present the latest developments of his project megapixels.cc, showcasing new research on how images posted to Flickr have been used by academic, commercial, and even defense and intelligence agencies around the world for research and development of facial recognition technologies. Drawing on her teaching work on AI ethics and cultural diversity, Sophie Searcy will discuss the fundamental problems underlying the design and implementation of AI and how those problems play out in real-world case studies. She will stress the importance of fostering an equal and representative data science community of individuals from all technical, educational and personal backgrounds, who understand the implications of data science for society at large.

17:45-19:00 – PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

This panel investigates possible solutions to the known challenges of AI bias and discusses the opportunities AI might bring to the public. What role can companies, institutions and universities play in dealing with AI responsibly for the common good? How are public institutions handling the ethical use of AI, and what is actually happening on the ground? Maya Indira Ganesh will focus on the seductive idea that we can standardise and manage well-being, goodness and ethical behaviour in this algorithmically mediated moment. Her talk will examine typologies of policy, computational, industrial, legal and creative approaches to shaping ‘AI ethics’ and bias-free algorithms, and critically reflect on the breathless enthusiasm for principles, boards and committees to ensure that AI is ethical. Slava Jankin will reflect on how machine learning can be used for the common good in the public sector, focusing on artificial intelligence and data science in public services and on possible products and design implementations.

19:15-20:45 – KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

What kind of new socio-political imaginary can be instantiated by attempting to design a Feminist Internet? How can feminist methods and values inform the development of less biased technologies? What is a feminist AI? In this keynote, Charlotte Webb will discuss how a collection of artists, designers and creative technologists have been using feminisms, creative practice and technology to explore these questions. She will discuss the challenges of designing a ‘Feminist Alexa’, which Feminist Internet has been attempting in response to the ways biased voice technologies are saturating markets and colonising homes across the globe. She will discuss how the use of feminist design standards can help ensure that technologies do not knowingly or unknowingly reproduce bias, and introduce the audience to Feminist Internet’s most recent project – a feminist chatbot that aims to teach people about AI bias.

Saturday, June 15 · 2019

15:00 – DOORS OPEN

15:30-16:30 – INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

AI, algorithms, deep learning, big data: barely a week goes by without a new revelation about our increasingly digital future. Computers will cure cancer, make us richer, prevent crime, decide who gets into the country, determine access to services, map our daily movements, take our jobs away and send us to jail. Successive innovations spark both celebration and concern. While new developments offer enticing economic benefits, academics and civil society sound warnings about corporate accountability, intrusive uses of personal data and the ability of legal frameworks to keep pace with technological change. These concerns are particularly acute when it comes to the use of digital technology by governments and the public sector, which are compiling ever larger datasets on citizens as they move towards an increasingly digitised future. Questions abound about what governments are doing with data, who they are paying to do the work, and what the potential outcomes could be, especially for society’s most vulnerable people. In May, Crofton Black and Cansu Safak of The Bureau of Investigative Journalism published the report ‘Government Data Systems: The Bureau Investigates’, examining what IT systems the UK government has been buying. The report shows how publicly available data can be used to build a picture of companies, services and projects in this area, through website scraping, automated searches, data analysis and freedom of information requests. In this session Crofton Black will present this work and its findings, and discuss systemic problems of transparency over how the government spends public money. Report: How is government using big data? The Bureau Investigates.

16:45-18:15 – KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

To many, the questions posed to Mark Zuckerberg during the Facebook Congressional hearings displayed U.S. House and Senate Representatives’ lack of technical knowledge. However, legislative officials rely on the expertise of their teams to prepare them for briefings. What the Facebook hearings actually revealed were low levels of digital literacy among legislative staffers. Mutale Nkonde will address the learning process congressional staffers must undergo to grasp the impact of AI technologies on wider society. Working with low-income Black communities in New York City who are fighting the use of facial recognition in public housing, she engages staffers of the Congressional Black Caucus to advocate for the fair treatment of Black Americans in the United States. She aims to make congressional staffers aware of how police jurisdictions, public housing landlords, retailers and others have proposed using facial recognition technology as a weapon against African Americans and other people of colour. This talk explores how a conscious understanding of racial bias and AI technology should inform the work of policy makers and society at large while building the future of civil rights.

18:30-20:30 – PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).

This panel focuses on the political aspects of AI and reflects on the unresolved injustices of our current system. What do we need to take into consideration when calling for a just AI? Do we need to change how we design AI, or should we rather adopt a broader perspective and reflect on how human society is structured? According to Dan McQuillan, AI is political: not only because of the question of what is to be done with it, but because of the political tendencies of the technology itself. Conversations around AI bias tend to discuss differences in outcome between demographic categories, framed in notions of race, gender, sexuality, disability or class. Rather than treat these attributes as universal and unquestioned, which opens up space for cultural imperialism and presumption in how we “fix” bias, Os Keyes will reflect on how context shapes these issues. They will discuss how not to fall into the trap of universalising such concepts, arguing that a truly “just” or “equitable” AI must not only be “bias-free”: it must also be local, contextual and meaningfully shaped and controlled by those subject to it. Dia Kayyali will then present ways in which AI is facilitating white supremacy, nationalism, racism and transphobia, focusing on how AI is being developed and deployed, in particular for policing and content moderation: two seemingly disparate but politically linked applications. Finally, focusing on machine learning and artificial neural networks, also known as deep learning, Dan McQuillan will reflect on how to develop an antifascist AI, one that influences our understanding of what is both possible and desirable, and of what ought to be.