Category archive: artificial intelligence

A.I. TRAPS: The 16th conference of the Disruption Network Lab (Berlin), curated by Tatiana Bazzichelli

June 14: 16:00-20:45; June 15: 15:30-20:30
Location: Studio 1, Kunstquartier Bethanien, Mariannenplatz 2, 10997 Berlin.
Partner Venues: Kunstraum Kreuzberg/Bethanien, STATE Studio.
Curated by Tatiana Bazzichelli. In cooperation with: Transparency International.


Funded by: Hauptstadtkulturfonds (Capital Cultural Fund of Berlin), Reva and David Logan Foundation (grant provided by NEO Philanthropy), Checkpoint Charlie Foundation. Supported [in part] by a grant from the Open Society Initiative for Europe within the Open Society Foundations. In partnership with: Friedrich Ebert Stiftung.

In collaboration with: Alexander von Humboldt Institute for Internet and Society (HIIG), r0g agency. Communication Partners: Sinnwerkstatt, Furtherfield. Media partners: taz, die tageszeitung, Exberliner.
In English.

2-Day Online-Ticket: 14€ · 1-Day Ticket: 8€
1-Day Solidarity-Ticket: 5€ (only available at the door)
Disruption Network Lab aims for an accessible and inclusive conference by providing a discounted solidarity ticket. This will only be available at the door.


SCHEDULE

Friday, June 14 · 2019

15:30 – DOORS OPEN

16:00 – INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 – PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

17:45-19:00 – PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

19:15-20:45 – KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

Saturday, June 15 · 2019

15:00 – DOORS OPEN

15:30-16:30 – INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

16:45-18:15 – KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

18:30-20:30 – PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).


INSTALLATION

WE NEED TO TALK, AI

A Comic Essay on Artificial Intelligence by Julia Schneider and Lena Kadriye Ziyal. www.weneedtotalk.ai


AI TRAPS: Automating Discrimination

The Art of Exposing Injustice – Part 2

A close look at how AI & algorithms reinforce prejudices and biases of its human creators and societies, and how to fight discrimination.

The notion of Artificial Intelligence dates back decades; it became a field of research in the United States in the mid-1950s. During the 1990s AI was also at the core of many debates around digital culture, cyberculture and the imaginary of future technologies. In the current discussion around big data, deep learning, neural networks, and algorithms, AI has been used as a buzzword for proposing new political and commercial agendas in companies, institutions and the public sector.

This conference does not set out to address the concept of AI in general; it focuses on concrete applications of data science, machine learning, and algorithms, and on the “AI traps” that can follow when the design of these systems reflects, reinforces and automates current and historical biases and inequalities of society.

The aim is to foster a debate on how AI and algorithms impact our everyday life as well as culture, politics, institutions and behaviours, reflecting inequalities based on social, racial and gender prejudices. Computer systems can be influenced by the implicit values of the humans involved in data collection, programming and usage. Algorithms are not neutral or unbiased, and the consequences of historical patterns and individual decisions can become embedded in search engine results, social media platforms and software applications, reflecting systematic and unfair discrimination.

The analysis of algorithmic bias implies a close investigation of network structures and of the multiple layers of computational systems. The consequences of digitalisation are not bound to technology alone, but affect our society and culture at large. AI bias can be amplified by the way machines work, taking unpredictable directions as soon as software is deployed and runs on its own.

By connecting researchers, writers, journalists, computer scientists and artists, this event aims to demystify the conception of artificial intelligence as pure and logical, focusing instead on how AI suffers from the prejudices and biases of its human creators, and how machine learning may produce inequality as a consequence of mainstream power structures that overlook diversity and minorities.

This conference aims to raise awareness by reflecting on possible countermeasures from the artistic, technological and political spheres, critically reflecting on the usage and implementation of AI technology.

Curated by Tatiana Bazzichelli.
The Disruption Network Lab series The Art of Exposing Injustice is developed in cooperation with the Berlin-based International Secretariat of Transparency International, the global coalition against corruption, which celebrates its 25th anniversary this year.


FULL PROGRAM

Friday, June 14 · 2019

15:30 – DOORS OPEN

16:00 – INTRO

Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE) & Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

16:15-17:30 – PANEL

THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science

Adam Harvey (Artist and Researcher, US/DE), Sophie Searcy (Senior Data Scientist at Metis, US). Moderated by Adriana Groh (Head of Program Management, Prototype Fund, DE).

Examples abound of AI creating harmful effects, and it is important to track and understand them. But the next step is harder: how do the practitioners of AI improve what comes next and avoid ever worse and ever bigger “AI catastrophes”? Adam Harvey will present a very concrete art and research project investigating the ethics, origins, and individual privacy implications of face recognition image datasets and their role in the expansion of biometric surveillance technologies. He will present the latest developments of his project megapixels.cc, showcasing new research on how images posted to Flickr have been used by academic, commercial, and even defense and intelligence agencies around the world for research and development of facial recognition technologies. Drawing on her teaching work on AI ethics and cultural diversity, Sophie Searcy will discuss the fundamental problems underlying the design and implementation of AI and consider how those problems play out in real-world case studies. She will stress the importance of fostering an equal and representative data science community made up of individuals of all technical, educational, and personal backgrounds, who understand the implications of data science for society at large.

17:45-19:00 – PANEL

AI FOR THE PEOPLE: AI Bias, Ethics & The Common Good

Maya Indira Ganesh (Research coordinator, AI & Media Philosophy ‘KIM’ Research Group, Karlsruhe University of Arts and Design; PhD candidate, Leuphana University, Lüneburg, IN/DE), Slava Jankin (Professor of Data Science and Public Policy at the Hertie School of Governance, UK/DE). Moderated by Nicole Shephard (Researcher on Gender, Technology and Politics of Data, UK/DE).

This panel investigates possible solutions to the known challenges of AI bias, and discusses the opportunities AI might bring to the public. What can the role of companies, institutions and universities be in dealing with AI responsibly for the common good? How are public institutions dealing with the ethical usage of AI, and what is actually happening on the ground? Maya Indira Ganesh will focus on the seductive idea that we can standardise and manage well-being, goodness, and ethical behaviour in this algorithmically mediated moment. Her talk will examine typologies of policy, computational, industrial, legal, and creative approaches to shaping ‘AI ethics’ and bias-free algorithms, and critically reflect on the breathless enthusiasm for principles, boards and committees to ensure that AI is ethical. Slava Jankin will reflect on how machine learning can be used for the common good in the public sector, focusing on artificial intelligence and data science in public services and reflecting on possible products and design implementations.

19:15-20:45 – KEYNOTE

WHAT IS A FEMINIST AI? Possible Feminisms, Possible Internets

Charlotte Webb (Co-founder, Feminist Internet & Even Consultancy, UK). Moderated by Lieke Ploeger (Community Director, Disruption Network Lab, NL/DE).

What kind of new socio-political imaginary can be instantiated by attempting to design a Feminist Internet? How can feminist methods and values inform the development of less biased technologies? What is a feminist AI? In this keynote, Charlotte Webb will discuss how a collection of artists, designers and creative technologists have been using feminisms, creative practice and technology to explore these questions. She will discuss the challenges of designing a ‘Feminist Alexa’, which Feminist Internet has been attempting in response to the ways biased voice technologies are saturating markets and colonising homes across the globe. She will discuss how the use of feminist design standards can help ensure that technologies do not knowingly or unknowingly reproduce bias, and introduce the audience to Feminist Internet’s most recent project – a feminist chatbot that aims to teach people about AI bias.

Saturday, June 15 · 2019

15:00 – DOORS OPEN

15:30-16:30 – INVESTIGATION

HOW IS GOVERNMENT USING BIG DATA?

Crofton Black (Researcher, Journalist & Writer, The Bureau of Investigative Journalism, UK). Moderated by Daniel Eriksson (Head of Technology, Transparency International, SE/DE).

AI, algorithms, deep learning, big data – barely a week goes by without a new revelation about our increasingly digital future. Computers will cure cancer, make us richer, prevent crime, decide who gets into the country, determine access to services, map our daily movements, take our jobs away and send us to jail. Successive innovations spark celebration and concern. While new developments offer enticing economic benefits, academics and civil society sound warnings over corporate accountability, the intrusiveness of personal data and the ability of legal frameworks to keep pace with technological challenges. These concerns are particularly acute when it comes to the use of digital technology by governments and the public sector, which are compiling ever larger datasets on citizens as they move towards an increasingly digitised future. Questions abound about what governments are doing with data, who they are paying to do the work, and what the potential outcomes could be, especially for society’s most vulnerable people. In May, Crofton Black and Cansu Safak of The Bureau of Investigative Journalism published a report, ‘Government Data Systems: The Bureau Investigates’, examining what IT systems the UK government has been buying. Their report looks at how to use publicly available data to build a picture of companies, services and projects in this area, through website scraping, automated searches, data analysis and freedom of information requests. In this session Crofton Black will present their work and their findings, and discuss systemic problems of transparency over how the government is spending public money. Report: How is government using big data? The Bureau Investigates.

16:45-18:15 – KEYNOTE

RACIAL DISCRIMINATION IN THE AGE OF AI: The Future of Civil Rights in the United States

Mutale Nkonde (Tech Policy Advisor and Fellow at Data & Society Research Institute, US). Moderated by Rhianna Ilube (Writer, Curator and Host at The Advocacy Academy, UK/DE).

To many, the questions posed to Mark Zuckerberg during the Facebook Congressional Hearings displayed U.S. House and Senate Representatives’ lack of technical knowledge. However, legislative officials rely on the expertise of their legislative teams to prepare them for briefings. What the Facebook hearings actually revealed were low levels of digital literacy among legislative staffers. Mutale Nkonde will address the epistemological journey taken by congressional staffers regarding the impact of AI technologies on wider society. Working with low-income black communities in New York City who are fighting the use of facial recognition in public housing, she targets staffers of the Congressional Black Caucus with the goal of advocating for the fair treatment of Black Americans in the United States. She aims to make congressional staffers aware of how police jurisdictions, public housing landlords, retailers, and others have proposed using facial recognition technology as a weapon against African Americans and other people of colour. This talk explores how a conscious understanding of racial bias and AI technology should inform the work of policy makers and society at large while building the future of civil rights.

18:30-20:30 – PANEL

ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism

Os Keyes (Ada Lovelace Fellow, Human-Centred Design & Engineering, University of Washington, US), Dia Kayyali (Leader of the Tech & Advocacy program at WITNESS, SY/US/DE), Dan McQuillan (Lecturer in Creative & Social Computing at Goldsmiths, University of London, UK).
Moderated by Tatiana Bazzichelli (Founding Director, Disruption Network Lab, IT/DE).

This panel focuses on the political aspects of AI and reflects on the unresolved injustices of our current system. What do we need to take into consideration when calling for a just AI? Do we need to change how we design AI, or should we rather adopt a larger perspective and reflect on how human society is structured? According to Dan McQuillan, AI is political: not only because of the question of what is to be done with it, but because of the political tendencies of the technology itself. Conversations around AI bias tend to discuss differences in outcome between demographic categories, within notions of race, gender, sexuality, disability or class. Rather than treating these attributes as universal and unquestioned, which opens up space for cultural imperialism and presumption in how we “fix” bias, Os Keyes will reflect on how these issues are contextually shaped. They will discuss how not to fall into the trap of universalising such concepts, arguing that a truly “just” or “equitable” AI must not just be “bias-free”: it must also be local, contextual and meaningfully shaped and controlled by those subject to it. Alongside, Dia Kayyali will present ways in which AI is facilitating white supremacy, nationalism, racism and transphobia. They will focus on how AI is being developed and deployed, in particular for policing and content moderation: two seemingly disparate but politically linked applications. Finally, focusing on machine learning and artificial neural networks, also known as deep learning, Dan McQuillan will reflect on how to develop an antifascist AI, one that influences our understanding of what is both possible and desirable, and of what ought to be.

Lev Manovich’s new book on AI (STRELKA PRESS)

AI plays a crucial role in the global cultural ecosystem. It recommends what we should see, listen to, read, and buy. It determines how many people will see our shared content. It helps us make aesthetic decisions when we create media. In professional cultural production, AI has already been adapted to produce movie trailers, music albums, fashion items, product and web designs, architecture, etc. In this short book, Lev Manovich offers a systematic framework to help us think about cultural uses of AI today and in the future. He challenges existing ideas and gives us new concepts for understanding media, design, and aesthetics in the AI era.
ABOUT THE AUTHOR

Lev Manovich. Photo by Mikhail Goldenkov / Strelka Institute

Dr. Lev Manovich is the author and editor of 13 books including Theories of Software Culture, Instagram and Contemporary Image, Software Takes Command, and The Language of New Media which was described as “the most suggestive and broad ranging media history since Marshall McLuhan.” He was included in the list of “25 People Shaping the Future of Design” in 2013 and the list of “50 Most Interesting People Building the Future” in 2014. Manovich is a Professor of Computer Science at The Graduate Center, CUNY, and a Director of the Cultural Analytics Lab that pioneered analysis of visual culture using computational methods.

https://strelka.com/en/press/books/lev-manovich-ai-aesthetics

An excerpt from “AI Aesthetics” from the STRELKA PRESS website

LEV MANOVICH

In the original vision of AI in the 1950s-60s, the goal was to teach a computer to perform a range of cognitive tasks. According to this projection, a computer would simulate many operations of a single human mind. They included playing chess, solving mathematical problems, understanding written and spoken language, and recognizing the content of images. Sixty years later, AI has become a key instrument of modern economies, deployed to make them more efficient, secure, and predictable by automatically analyzing medical images, making decisions on consumer loans, filtering job applications, detecting fraud, and so on. AI is also seen as an enhancer of our everyday lives, saving us time and effort. A good example of this is the use of voice interface instead of typing.

Frieder Nake. Hommage à Paul Klee. The screenprint created from a plotter drawing produced by a computer program written by Nake. 1965

But what exactly is “Artificial Intelligence” today? Besides original tasks that defined AI such as playing chess, recognizing objects in a photo, or translating between languages, computers today perform endless “intelligent” operations. For example, your smartphone’s keyboard gradually adapts to your typing style. Your phone may also monitor your usage of apps and adjust their work in the background to save battery. Your map app automatically calculates the fastest route, taking into account traffic conditions. There are thousands of intelligent, but not very glamorous, operations at work in phones, computers, web servers, and other parts of the IT universe.

Therefore, in one sense, AI is now everywhere. While some AI roles attract our attention – such as Google’s Smart Reply function that suggests automated email replies (used for 10% of all answers in Google’s Inbox app in 2017) – many others operate in the gray everyday of digital society.

[…]

Will AI replace professional cultural creators – media, industrial, and fashion designers, photographers and cinematographers, architects, urban planners, and so on? Will countries and cities worldwide compete as to who can more quickly and better automate their creative industries? Will countries and cities (or separate companies) that figure out how best to combine AI and human skills and talents get ahead of the others?

Today AI gives us the option to automate our aesthetic choices (via recommendation engines), assists in certain areas of aesthetic production such as consumer photography, and automates other cultural experiences (for example, automatically selecting the ads we see online). But in the future, it will play a larger part in professional cultural production. Its use in helping to design fashion items, logos, music, TV commercials, and works in other areas of culture is already growing. But currently, human experts usually make the final decisions or do the actual production based on ideas and media generated by AI.

The well-known example of Game of Thrones is a case in point. The computer suggested plot ideas, but the actual writing and the show’s development was done by humans. We can only talk about fully AI-driven culture where AI will be allowed to create the finished design and media from beginning to end. In this future, humans will not be deciding if these products should be shown to audiences; they will just trust that AI systems know best – the way AI is already fully entrusted to choose when and where to show particular ads, as well as who should see them.

Harold Cohen. Amsterdam Suite C. Lithograph from a drawing generated by a computer program written by Cohen. 1977

We are not there yet. For example, in 2016 IBM Watson created the first “AI-made movie trailer” for the feature film Morgan (Mix, 2016). However, AI only chose various shots from the completed movie that it “thought” were suitable to include in the trailer, and a human editor did the final selection and editing. In another example, to create a system that would automatically suggest suitable answers to the emails users receive, Google workers first created a dataset of all such answers manually. AI chooses what answers to suggest in each possible case, but it does not generate them. (The head of Google’s AI in New York explained that even one bad mistake in such a scenario could generate bad press for the company, so Google could not risk having AI come up with suggested answer sentences and phrases on its own.)

It is logical to think that any area of cultural production which either follows explicit rules or has systematic patterns can in principle be automated. Thus, many commercial cultural areas such as TV dramas, romance novels, professional photography, music videos, news stories, website and graphic design, and residential architecture are suitable for automation. For example, we can teach computers to write TV drama scripts, do food photography, or compose news stories in many genres (so far, AI systems are only used to automatically compose sports and business stories). So rather than asking if any such area will be automated one day, we should assume that it will happen and only ask “when.”

This sounds logical, but the reality is not so simple. Starting in the 1960s, artists, composers, and architects used algorithms to generate images, animations, music, and 3D designs (“Computer Art,” n.d.). Some of these works have entered the cultural canon. They display wonderful aesthetic inventiveness and refinement. However, in most cases they are abstract compositions with interesting and complex patterns, but without direct references to the human world. Think of such classics as the abstract geometric images of Manfred Mohr (1969-1973) (Mohr, n.d.), John Whitney’s computer animation Arabesque (1975), or Iannis Xenakis’s musical compositions Atrées and Morsima-Amorsima (1962) (Maurer, 1999). There is no figuration in these algorithmically generated works, no characters as in novels, and no shots of the real world edited together into narratives as in feature films.

Now compare these abstract algorithmic classics with current attempts to automatically synthesize works that are about human beings: their worlds, their interests, emotions and meanings. For example, today Google Photos and Facebook offer users automatically created slideshows and videos edited from their photos. The results are sometimes entertaining, and sometimes useful, but they cannot yet be compared to professionally created media. The same applies to images generated by Google engineers using the DeepDream neural net (2015-) and later by others who used the same technology (DeepDream, n.d.). These AI creations are, in my view, more successful than the automatically generated slideshows of user photos, but not because DeepDream is a better AI. The reason is that 20th-century visual art styles tolerate more randomness and less precision than, for example, a photo narrative about a trip, which has distinct conventions and restrictions on what can be included and when. Thus, in the case of DeepDream, AI can create artistically plausible images that do refer to the human world because we consider it “modern art” and expect great variability. But in the case of automatically edited slideshows, we immediately know that the computer does not really understand what it is selecting and editing together.

You can buy the e-book version here.


DISCRETE FIGURES Realtime AR + AI Dance Performance by Daito Manabe/Rhizomatiks×ELEVENPLAY

Manabe Daito

Manabe Daito (photo by Shizuo Takahashi)

https://research.rhizomatiks.com/s/works/discrete_figures/credit.html

Daito Manabe is a Tokyo-based artist, interaction designer, programmer, and DJ.
He launched Rhizomatiks in 2006 and, since 2015, has served alongside Motoi Ishibashi as co-director of Rhizomatiks Research, the firm’s division dedicated to exploring new possibilities in the realms of technical and artistic expression, with a focus on R&D-intensive projects. He is a specially-appointed professor at Keio University SFC.
Manabe’s work in design, art, and entertainment takes a new approach to everyday materials and phenomena. His end goal, however, is not simply rich, high-definition realism achieved by recognizing and recombining these familiar elemental building blocks. Rather, his practice is informed by careful observation to discover and elucidate the essential potentialities inherent in the human body, data, programming, computers, and other phenomena, thus probing the interrelationships and boundaries delineating the analog and digital, real and virtual.
A prolific collaborator, he has worked closely with a diverse roster of artists, including Ryuichi Sakamoto, Björk, OK GO, Nosaj Thing, Squarepusher, Andrea Battistoni, Mansai Nomura, Perfume and sakanaction. Further engagements include groundbreaking partnerships with the Jodrell Bank Center for Astrophysics in Manchester, and the European Organization for Nuclear Research (CERN), the world’s largest particle physics laboratory.
He is the recipient of numerous awards for his multidisciplinary contributions to advertising, design, and art. Notable recognitions include the Ars Electronica Distinction Award, Cannes Lions International Festival of Creativity Titanium Grand Prix, D&AD Black Pencil, and the Japan Media Arts Festival Grand Prize.

[ART ACTIVITIES]
Manabe is an innovator in data analysis and data visualization. Notable artist collaborations run the gamut from “Sensing Streams” installation created with Ryuichi Sakamoto; performances of the ancient Japanese dance “Sanbaso” with prominent actor Mansai Nomura; and Verdi’s opera “Othello” as conducted by Andrea Battistoni. As a recent example, Manabe was selected for a flagship commission and residency program in 2017 at the Jodrell Bank Center for Astrophysics, a national astronomy and astrophysics research center housed at the University of Manchester. His close partnership with researchers and scientists concretized in “Celestial Frequencies,” a groundbreaking data-driven audiovisual work projected onto the observatory itself.

[MUSIC/DJ ACTIVITIES]
In 2015, Manabe developed the imaging system for Björk’s music video “Mouth Mantra”, and oversaw the production of AR/VR live imaging for her “Quicksand” performance.
In performance with Nosaj Thing, Manabe has appeared at international music festivals including the Barcelona Sónar Festival 2017 and Coachella 2016. Having also directed a number of music videos for Nosaj Thing, his work on “Cold Stares ft. Chance the Rapper + The O’My’s” was recognized with an Award of Distinction in the Prix Ars Electronica’s Computer Animation/Film/VFX division. Further directorial work includes the music videos of artists such as Squarepusher, FaltyDL, and Timo Maas.
As a DJ with over two decades of experience, Manabe has opened for international artists such as Flying Lotus and Squarepusher during their Japan tours. His wide repertoire spans from hip-hop and IDM to juke, future bass, and trap. Manabe has also been invited to perform at numerous music festivals around the globe.

[PERFORMING ARTS ACTIVITIES]
Manabe’s collaborations on dance performances with MIKIKO and ELEVENPLAY have showcased a wide array of technology including drones, robotics, machine learning, and even volumetric projection to create 3D images in the air from a massive volume of rays. Additional data-driven performances have explored innovative applications of dance data and machine learning. These collaborations have been performed at major festivals including Ars Electronica, Sónar (Barcelona), Scopitone (Nantes), and MUTEK (Mexico City) to widespread media acclaim (WIRED, Discovery Channel, etc.)

[EDUCATION ACTIVITIES]
Manabe is actively involved in the development and implementation of media artist summits (notably, the Flying Tokyo lecture series) as well as other educational programs (media art workshops for high school students, etc.) designed to cultivate the next generation of creators.

GRAYAREA

‘discrete figures’ explores the interrelationships
between the performing arts and mathematics,
giving rise to mathematical entities
that engage with the bodies of human dancers onstage.
DAITO MANABE

Alan Turing applied mathematics to disembody the brain from its corporeal host. He sought to expand his body, transplanting his being into an external vessel. In a sense, he sought to replicate himself in mechanical form. Turing saw his computers as none other than bodies (albeit mechanical), irrevocably connected to his own flesh and blood. Although onlookers would see a sharp delineation between man and machine, in his eyes, this progeny did not constitute a distant Other. Rather, he was the father of a “living machine,” a veritable extension of his own body, and a mirror onto the act of performing living, breathing mathematics.

―Daito Manabe

Photo by TOMOYA YAKESHITA
GRAYAREA

Making of from the official site
https://research.rhizomatiks.com/s/works/discrete_figures/en/technology.html

History Scene

Music 01

Using OpenPose, we analyzed publicly available stage footage and poses from movie scenes to collect pose data, and developed a nearest-neighbour search system that matches this data against pose data obtained by analyzing the dancer’s movements. Drawing from the actual choreography footage for the piece, we created a staging that shows, frame by frame, the video material with the closest matching pose.
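The per-frame matching described here can be pictured as a nearest-neighbour search over normalized pose keypoints. This is only a minimal sketch under assumed conventions (25 2-D joints per frame, as in OpenPose’s BODY_25 output); the function names are hypothetical and the production system is certainly more elaborate:

```python
import numpy as np

np.random.seed(0)

def normalize_pose(keypoints):
    """Center a (num_joints, 2) keypoint array on its centroid and scale it
    to unit norm, so poses compare independently of position and size."""
    kp = np.asarray(keypoints, dtype=float)
    kp = kp - kp.mean(axis=0)
    scale = np.linalg.norm(kp)
    return kp / scale if scale > 0 else kp

def nearest_pose_frame(live_pose, archive_poses):
    """Return the index of the archive frame whose normalized pose is
    closest (Euclidean distance) to the live dancer's pose."""
    q = normalize_pose(live_pose).ravel()
    dists = [np.linalg.norm(normalize_pose(p).ravel() - q) for p in archive_poses]
    return int(np.argmin(dists))

# Hypothetical archive of 3 frames, 25 joints each
archive = [np.random.rand(25, 2) for _ in range(3)]
live = archive[1] * 1.5 + 2.0        # same pose, shifted and rescaled
assert nearest_pose_frame(live, archive) == 1
```

Normalizing for position and scale means a shifted or rescaled copy of a pose still matches its archive frame, which is what per-frame video retrieval needs.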

Audience Scene

Music 09

We set up a booth in the venue lobby and filmed the audience. By analyzing the participants’ clothing characteristics and movements on multiple remote servers right up until the performance, we were able to feature the audience as dancers, combining that analytical data with motion data from the ELEVENPLAY dancers.

Dimensionality Reduction Scene

Music 09

Using dimensionality reduction techniques, we projected the motion data into two and three dimensions and visualized it.
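As a rough illustration of this step, here is a minimal PCA projection via SVD. The project does not say which technique was used (it may well have been t-SNE or similar), so treat this as a hedged sketch with illustrative shapes:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project (frames, features) motion data onto its top principal
    components: center the data, take the SVD, keep the leading axes."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 75))   # e.g. 100 frames of 25 joints x 3 coords
coords2d = pca_reduce(frames, 2)
assert coords2d.shape == (100, 2)
```

Each motion-capture frame becomes a single 2-D (or 3-D) point, so a whole dance collapses into a trajectory that can be drawn on screen.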

AI Dancer scene

Music 10

We were interested in dance itself: in the different types of dancers and styles, and in how musical beats connect to improvisational dance. To explore further, we worked together with Parag Mital to create a network called dance2dance:
https://github.com/pkmital/dance2dance

This network is based on Google’s seq2seq architecture. It is similar to char-rnn in that it is a neural network architecture that can be used for sequential modeling.
https://google.github.io/seq2seq/
https://github.com/karpathy/char-rnn

Using the motion capture system, approximately 2.5 hours’ worth of dance data was captured across about 40 sessions at 60 fps. In each session the dancers improvised under 11 themes: joyful, angry, sad, fun, robot, sexy, junkie, chilling, bouncy, wavy, and swingy. To maintain a constant flow, the dancers were given a beat of 120 bpm.
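The generation side of such a model runs autoregressively: each predicted pose is fed back in as input for the next time step. This is a conceptual sketch only, with a stand-in function in place of the trained dance2dance network; the pose dimensionality and the stub dynamics are assumptions.

```python
import numpy as np

# Conceptual sketch (not the actual dance2dance code): a seq2seq-style
# model generates motion one pose at a time, conditioning on what it
# has produced so far.
POSE_DIM = 75  # e.g. 25 joints * 3 axes; an assumption for illustration

def stub_model(history):
    """Stand-in for the trained network: predicts the next pose from the
    sequence so far. Here: last pose plus a small smooth drift."""
    return history[-1] + 0.01 * np.tanh(history[-1])

def generate(seed_sequence, n_steps):
    """Roll the model forward n_steps, appending each prediction to the
    history it conditions on (autoregressive sampling)."""
    frames = list(seed_sequence)
    for _ in range(n_steps):
        frames.append(stub_model(frames))
    return np.stack(frames[len(seed_sequence):])

seed = [np.zeros(POSE_DIM), np.full(POSE_DIM, 0.1)]
dance = generate(seed, n_steps=120)  # two seconds of motion at 60 fps
```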

Background movie

This time we generated the background movie with “StyleGAN”, which was introduced in the paper “A Style-Based Generator Architecture for Generative Adversarial Networks” by NVIDIA Research.
http://stylegan.xyz/paper

“StyleGAN” has since been open-sourced; the code is available on GitHub.
https://github.com/NVlabs/stylegan

We trained StyleGAN on an NVIDIA DGX Station using the data we had captured from the dance performance.
https://www.nvidia.com/ja-jp/data-center/dgx-station/

Hardware

Drone

For hardware, we used five palm-sized microdrones. Due to their small size, they are safer and more mobile than older models, and they create the visual effect of a ball of light floating on stage. The drones’ positions are measured externally via a motion capture system, and they are controlled in real time over 2.4 GHz wireless communication. The drone movements are produced from motion capture data that has already been analyzed and generated.
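The closed loop described above (external position measurement in, velocity commands out) can be sketched with a simple proportional controller. This is a minimal illustration, not the production flight stack; the gain and update rate are assumptions.

```python
# Hypothetical sketch of the real-time control loop: motion capture
# supplies each drone's measured position, and a proportional controller
# steers it toward its choreographed target point.
def control_step(measured, target, gain=0.5):
    """Velocity command proportional to the position error, per axis."""
    return [gain * (t - m) for m, t in zip(measured, target)]

# One simulated drone converging on a target point.
pos = [0.0, 0.0, 1.0]            # measured position (m)
target = [1.0, 2.0, 1.5]         # choreographed target (m)
dt = 1 / 60                      # 60 Hz update, matching the mocap rate
for _ in range(600):             # ten seconds of simulated flight
    vel = control_step(pos, target)
    pos = [p + v * dt for p, v in zip(pos, vel)]
```

A real controller would add integral/derivative terms and velocity limits, but the structure of measure, compare, command is the same.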

Frame

The frame plays an important role in the projection mapping onto the half screen and in the AR compositing. It contains seven infrared LEDs and built-in batteries, and the whole structure is recognized as a rigid body by the motion capture system. Retroreflective markers, being visible to the naked eye, are usually not well suited for use on stage props. We therefore designed and developed a system using infrared LEDs and diffusive reflectors that allows stable tracking while remaining invisible to the naked eye.
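Rigid-body tracking of this kind typically works by fitting a rotation and translation that map the markers’ known positions in the prop’s own coordinate system onto their observed 3D positions; one standard way to do that is the Kabsch algorithm. The sketch below is illustrative (the marker layout and the simulated motion are made up), not the mocap vendor’s implementation.

```python
import numpy as np

# Hedged sketch: with at least three non-collinear markers (here, the
# frame's seven infrared LEDs) at known local positions, the rigid-body
# pose follows from the Kabsch algorithm.
def rigid_pose(local_pts, observed_pts):
    """Return (R, t) such that observed ≈ local @ R.T + t."""
    lc, oc = local_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (local_pts - lc).T @ (observed_pts - oc)    # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard vs. reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, oc - R @ lc

# Seven LED positions in the frame's own coordinates (illustrative).
local = np.random.default_rng(2).uniform(-0.5, 0.5, (7, 3))

# Simulate the mocap view: rotate 90 degrees about z, then translate.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
observed = local @ Rz.T + np.array([1.0, 2.0, 0.5])

R, t = rigid_pose(local, observed)   # recovers the simulated pose
```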


https://www.youtube.com/watch?time_continue=38&v=mHpLOO

Rhizomatiks Research
https://research.rhizomatiks.com/

EIA – Exploring Artificial Intelligence in Art – Parco tecnologico della Sardegna, 15-19/07/2019

From 15 to 19 July, the scientific school “EIA – Exploring Artificial Intelligence in Art” will take place at the Pula (CA) site of the Parco tecnologico della Sardegna.

The school is organized by CRS4 and is made possible in part by funds from the “Scientific School 2018-2019” program promoted by Sardegna Ricerche.

The school’s goal is to investigate the relationship between artificial intelligence and art, training professionals with strong competence in the topics covered during the school. The intent is to bring together researchers working on artificial intelligence, creatives, humanists, and those active in the world of generative and digital art, in order to provide a comprehensive view of the art world tied to machine creativity.

The school will address topics such as the analysis and experimentation of algorithms and procedures for creating artistic content, involving researchers engaged in defining generative models (GANs, RL, etc.), creatives who engage with the systems and techniques of artificial intelligence, and humanists who examine the social and cultural implications of this contemporary artistic moment.

Registration
Participation in the school is free of charge and the maximum number of participants admitted is 30. Registration includes teaching materials, coffee breaks, and lunch. Travel and accommodation are at the participants’ expense.

To apply, send a registration request by 15 June 2019, accompanied by a short motivational statement and a curriculum vitae of no more than 2 pages, to the following email address:
eia@crs4.it


Admission will be confirmed by 22 June 2019, also by email.

Address
Sardegna Ricerche 
Parco tecnologico della Sardegna 
Edificio 2, Località Piscinamanna 
09010 Pula (CA) – Italia 

False Positives / Esther Hovers (NL)

A beautiful project by ESTHER HOVERS, seen at Ars Electronica 2018.

The project False Positives is about intelligent surveillance systems: cameras that are able to detect deviant behavior within public space. False Positives revolves around the question of normal behavior, which it raises by basing the project on eight different ‘anomalies’. These so-called anomalies are signs in body language and movement that could indicate criminal intent. It is through these anomalies that the algorithms are built and the cameras are able to detect deviant behavior. The eight anomalies were pointed out to me by several intelligent-surveillance experts with whom I collaborated for this project. The work consists of several approaches, photographs, and pattern drawings. Together, these form an analysis of different settings in and around the business district of the de facto European capital: Brussels.

Prolonged pausing, groups of people suddenly splitting up, a woman coming to a stop exactly at the corner of the street, a man running through a slow-moving crowd: all of these can be classified as deviant behavior within the context of public space. To find out what constitutes deviance, we first need to ask ourselves: what is considered normal?
Public security is a growing concern throughout Europe. To the eye of the camera, every person is a possible suspect, every person a possible perpetrator.
Will intelligent surveillance help us safeguard our need for security?

This video mimics the interface of an intelligent surveillance system. It was made in the business district of Brussels in 2015.

© 2015 Esther Hovers

 

Trevor Paglen’s Sight machine with Kronos Quartet and Obscura Digital

Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. Among his chief concerns are learning how to see the historical moment we live in and developing the means to imagine alternative futures. Paglen’s work has been shown in one-person exhibitions at Vienna Secession, Eli & Edythe Broad Art Museum, Van Abbe Museum, Frankfurter Kunstverein, and Protocinema Istanbul, and in group exhibitions at the Metropolitan Museum of Art, the San Francisco Museum of Modern Art, the Tate Modern, and numerous other venues. He has launched an artwork into distant orbit around Earth in collaboration with Creative Time and MIT, contributed research and cinematography to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture for the exclusion zone in Fukushima, Japan.

The Cantor Center for Visual Arts at Stanford University came to Obscura Digital and proposed a collaboration with artist Trevor Paglen, whose work addresses topics like government secrecy and surveillance, exposing the vast apparatus of machines, systems, and algorithms that monitor virtually every aspect of our lives. Paglen’s “Sight Machine” project would demonstrate to a live audience how machines “see” the world, in this case a performance by the renowned Kronos Quartet.

Obscura Digital worked with Paglen’s team to develop the computer and video systems that take a live video feed of the string quartet’s performance, run it through actual off-the-shelf artificial-intelligence surveillance algorithms (over a dozen in total), and project what the AIs see, and how they interpret it, onto a screen above the musicians.

These AIs, whether for facial recognition, object identification, or threat detection, are designed to communicate with their machine counterparts, not to provide human-readable output. Making that possible in real time required Obscura’s systems engineers to maximize throughput in a Herculean research and development effort.

http://video.wired.com/watch/the-unsettling-performance-that-showed-the-world-through-ai-s-eyes