Archivi categoria: interactive theatre

DISCRETE FIGURES Realtime AR + AI Dance Performance by Daito Manabe/Rhizomatiks×ELEVENPLAY

Manabe Daito

Manabe Daito (photo: Shizuo Takahashi)

Tokyo-based artist, interaction designer, programmer, and DJ.
Launched Rhizomatiks in 2006. Since 2015, has served alongside Motoi Ishibashi as co-director of Rhizomatiks Research, the firm’s division dedicated to exploring new possibilities in the realms of technical and artistic expression with a focus on R&D-intensive projects. Specially-appointed professor at Keio University SFC.
Manabe’s work in design, art, and entertainment takes a new approach to everyday materials and phenomena. His end goal, however, is not simply rich, high-definition realism achieved by recognizing and recombining these familiar elemental building blocks. Rather, his practice is informed by careful observation to discover and elucidate the essential potentialities inherent to the human body, data, programming, computers, and other phenomena, thus probing the interrelationships and boundaries delineating the analog and digital, real and virtual.
A prolific collaborator, he has worked closely with a diverse roster of artists, including Ryuichi Sakamoto, Björk, OK Go, Nosaj Thing, Squarepusher, Andrea Battistoni, Mansai Nomura, Perfume, and Sakanaction. Further engagements include groundbreaking partnerships with the Jodrell Bank Centre for Astrophysics in Manchester and the European Organization for Nuclear Research (CERN), the world’s largest particle physics laboratory.
He is the recipient of numerous awards for his multidisciplinary contributions to advertising, design, and art. Notable recognitions include the Ars Electronica Distinction Award, Cannes Lions International Festival of Creativity Titanium Grand Prix, D&AD Black Pencil, and the Japan Media Arts Festival Grand Prize.

Manabe is an innovator in data analysis and data visualization. Notable artist collaborations range from the “Sensing Streams” installation created with Ryuichi Sakamoto, to performances of the ancient Japanese dance “Sanbaso” with prominent actor Mansai Nomura, to Verdi’s opera “Otello” as conducted by Andrea Battistoni. As a recent example, Manabe was selected in 2017 for a flagship commission and residency program at the Jodrell Bank Centre for Astrophysics, a national astronomy and astrophysics research center housed at the University of Manchester. His close partnership with researchers and scientists concretized in “Celestial Frequencies,” a groundbreaking data-driven audiovisual work projected onto the observatory itself.

In 2015, Manabe developed the imaging system for Björk’s music video “Mouth Mantra”, and oversaw the production of AR/VR live imaging for her “Quicksand” performance.
In performance with Nosaj Thing, Manabe has appeared at international music festivals including the Barcelona Sónar Festival 2017 and Coachella 2016. Having also directed a number of music videos for Nosaj Thing, his work on “Cold Stares ft. Chance the Rapper + The O’My’s” was recognized with an Award of Distinction in the Prix Ars Electronica’s Computer Animation/Film/VFX division. Further directorial work includes the music videos of artists such as Squarepusher, FaltyDL, and Timo Maas.
As a DJ with over two decades of experience, Manabe has opened for international artists such as Flying Lotus and Squarepusher during their Japan tours. His wide repertoire spans from hip-hop and IDM to juke, future bass, and trap. Manabe has also been invited to perform at numerous music festivals around the globe.

Manabe’s collaborations on dance performances with MIKIKO and ELEVENPLAY have showcased a wide array of technology including drones, robotics, machine learning, and even volumetric projection to create 3D images in the air from a massive volume of rays. Additional data-driven performances have explored innovative applications of dance data and machine learning. These collaborations have been performed at major festivals including Ars Electronica, Sónar (Barcelona), Scopitone (Nantes), and MUTEK (Mexico City) to widespread media acclaim (WIRED, Discovery Channel, etc.)

Manabe is actively involved in the development and implementation of media artist summits (notably, the Flying Tokyo lecture series) as well as other educational programs (media art workshops for high school students, etc.) designed to cultivate the next generation of creators.


‘discrete figures’ explores the interrelationships
between the performing arts and mathematics,
giving rise to mathematical entities
that engage with the bodies of human dancers onstage.

Alan Turing applied mathematics to disembody the brain from its corporeal host. He sought to expand his body, transplanting his being into an external vessel. In a sense, he sought to replicate himself in mechanical form. Turing saw his computers as none other than bodies (albeit mechanical), irrevocably connected to his own flesh and blood. Although onlookers would see a sharp delineation between man and machine, in his eyes, this progeny did not constitute a distant Other. Rather, he was the father of a “living machine,” a veritable extension of his own body, and a mirror onto the act of performing living, breathing mathematics.

―Daito Manabe


Making-of notes from the official site

History Scene

Music 01

Using OpenPose, we analyzed publicly available stage footage and movie scenes to collect pose data, and developed a nearest-neighbour search system that compares that corpus against pose data extracted from the dancer’s movements. Drawing on the actual choreography footage for this piece, we attempted a staging that, frame by frame, retrieves the video material with the closest matching pose.
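The per-frame matching can be sketched as follows. This is our own illustration, not the production system: the function names and the normalization scheme (centring each set of joint coordinates and scaling to unit size, so matching ignores where and how large the performer appears in frame) are assumptions.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center a (J, 2) array of 2-D joint positions on its mean and
    scale it to unit size, so poses compare regardless of the
    performer's position or apparent size in frame."""
    kp = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(kp)
    return kp / scale if scale > 0 else kp

def nearest_pose(query, archive):
    """Return the index of the archived pose closest to `query`
    (Euclidean distance between normalized joint layouts)."""
    q = normalize_pose(query).ravel()
    dists = [np.linalg.norm(normalize_pose(a).ravel() - q) for a in archive]
    return int(np.argmin(dists))

# Toy archive of 18-joint poses (OpenPose's COCO model outputs 18 joints).
rng = np.random.default_rng(0)
archive = [rng.random((18, 2)) for _ in range(100)]

# A query identical to archive entry 42 up to translation and scale
# should retrieve that entry.
query = archive[42] * 3.0 + 5.0
print(nearest_pose(query, archive))  # -> 42
```

A real system would index the archive (e.g. with a k-d tree) rather than scan it linearly, but the matching principle is the same.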

1 audience scene

Music 09

We set up a booth in the venue lobby and filmed members of the audience. Their clothing and movements were analyzed on multiple remote servers right up until the performance, which allowed us to feature the audience as dancers by combining that analysis with motion data from the ELEVENPLAY dancers.

2 dimensionality reduction scene

Music 09

Using dimensionality reduction techniques, we projected the high-dimensional motion data down to two and three dimensions and visualized it.
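As a minimal illustration of the principle, here is a projection with PCA computed via SVD. The piece itself does not document which technique was used (t-SNE or UMAP are equally plausible); this sketch only shows how frames of flattened joint coordinates are reduced to 2-D and 3-D coordinates for visualization.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project high-dimensional motion frames onto their top principal
    components. X has shape (frames, features), e.g. flattened joint
    coordinates per frame."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy motion data: 500 frames of 17 joints x 3 coordinates = 51 features.
rng = np.random.default_rng(1)
frames = rng.normal(size=(500, 51))

coords_2d = pca_project(frames, 2)   # for a flat visualization
coords_3d = pca_project(frames, 3)   # for a spatial visualization
print(coords_2d.shape, coords_3d.shape)  # (500, 2) (500, 3)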

AI Dancer scene

Music 10

We were interested in dance itself: in the different types of dancers and styles, and in how musical beats connect to improvisational dance. To explore this further we worked together with Parag Mital to create a network called dance2dance:

This network is based on Google’s seq2seq architecture. It is similar to char-rnn in that it is a neural-network architecture that can be used for sequence modeling.

Using the motion capture system, approximately 2.5 hours’ worth of dance data was captured over about 40 sessions at 60 fps. In each session the dancers improvised on one of 11 themes: joyful, angry, sad, fun, robot, sexy, junkie, chilling, bouncy, wavy, and swingy. To maintain a constant flow, the dancers were given a beat of 120 bpm.
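Recordings like these are typically sliced into overlapping windows before being fed to a sequence model such as dance2dance: the model sees a fixed run of frames and learns to predict the next one. A generic sketch of that preparation step (the window length and feature count below are illustrative, not the project’s actual values):

```python
import numpy as np

def make_sequences(motion, seq_len):
    """Slice a (frames, features) mocap recording into overlapping
    (input, target) pairs: each input is seq_len consecutive frames,
    the target is the frame that follows."""
    inputs, targets = [], []
    for t in range(len(motion) - seq_len):
        inputs.append(motion[t:t + seq_len])
        targets.append(motion[t + seq_len])
    return np.stack(inputs), np.stack(targets)

# 2.5 hours at 60 fps would be 540,000 frames; a small stand-in here.
motion = np.random.default_rng(2).normal(size=(1000, 63))  # 21 joints x 3
X, y = make_sequences(motion, seq_len=120)  # 2-second context at 60 fps
print(X.shape, y.shape)  # (880, 120, 63) (880, 63)
```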

Background movie

We generated the background movie with StyleGAN, introduced in the paper “A Style-Based Generator Architecture for Generative Adversarial Networks” by NVIDIA Research.

StyleGAN has since been open-sourced; the code is available on GitHub.

We trained StyleGAN on an NVIDIA DGX Station using the data we had captured from the dance performance.
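StyleGAN trains on square images at power-of-two resolutions (e.g. 512 or 1024), so captured performance frames must be cropped and resized before training. The actual pipeline is not documented; this is a hedged sketch using a plain box filter, with the target resolution chosen arbitrarily.

```python
import numpy as np

def center_crop_pow2(frame, size):
    """Center-crop a (H, W, 3) frame to a square whose side is a
    multiple of `size`, then box-downsample it to size x size.
    `size` should be a power of two, as StyleGAN expects."""
    h, w, _ = frame.shape
    side = min(h, w)
    # largest multiple of `size` that fits, so blocks divide evenly
    side = (side // size) * size
    top, left = (h - side) // 2, (w - side) // 2
    crop = frame[top:top + side, left:left + side]
    k = side // size
    # average k x k blocks (simple box downsampling)
    return crop.reshape(size, k, size, k, 3).mean(axis=(1, 3))

frame = np.zeros((1080, 1920, 3))   # a typical HD capture frame
thumb = center_crop_pow2(frame, 256)
print(thumb.shape)  # (256, 256, 3)
```

The official StyleGAN release then converts such images into its own TFRecord dataset format before training.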



For hardware, we used five palm-sized microdrones. Thanks to their small size they are safer and more mobile than older models, and each one reads as a ball of light floating on stage. The drones’ positions are measured externally via a motion capture system, and the drones are controlled in real time over 2.4 GHz wireless communication. Their movements are driven by motion capture data that has already been analyzed and generated.
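The real control loop is not published, but the principle of steering a drone toward waypoints derived from choreography data can be sketched with a simple PD (proportional-derivative) position controller. The gains, time step, and waypoint below are illustrative only, not Rhizomatiks’ actual values.

```python
import numpy as np

def pd_command(target, position, velocity, kp=2.0, kd=1.2):
    """One step of a PD position controller: an acceleration command
    that pulls the drone toward `target` while damping its velocity."""
    return kp * (target - position) - kd * velocity

# Simulate a drone converging on a mocap-derived waypoint at 100 Hz.
dt = 0.01
pos = np.array([0.0, 0.0, 0.0])
vel = np.zeros(3)
target = np.array([1.0, 2.0, 1.5])  # metres, from the choreography data

for _ in range(2000):  # 20 simulated seconds
    acc = pd_command(target, pos, vel)
    vel += acc * dt
    pos += vel * dt

print(np.round(pos, 3))  # converges to ~[1. 2. 1.5]
```

In practice each mocap measurement updates `pos`, and the command is sent to the drone over the 2.4 GHz link rather than integrated in simulation.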


The frame plays an important role in projection mapping onto the half-screen and in AR compositing. It contains seven infrared LEDs and built-in batteries, and the whole structure is recognized as a rigid body by the motion capture system. Retroreflective markers, being visible to the naked eye, are usually not well suited for use on stage props. We therefore designed and developed a system using infrared LEDs and diffusive reflectors that allows for stable tracking while remaining invisible to the naked eye.
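Recognizing a marker constellation as a rigid body typically means recovering the rotation and translation that best align a known reference marker layout with the observed positions; the classic tool for this is the Kabsch algorithm. A sketch follows; the seven-marker layout here is synthetic, not the frame’s real geometry.

```python
import numpy as np

def rigid_transform(markers_ref, markers_obs):
    """Kabsch algorithm: recover the rotation R and translation t that
    map reference marker positions onto observed ones, treating the
    marker constellation as a rigid body."""
    cr, co = markers_ref.mean(axis=0), markers_obs.mean(axis=0)
    H = (markers_ref - cr).T @ (markers_obs - co)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cr
    return R, t

# Seven reference markers (like the frame's seven IR LEDs), then a
# known rotation + translation applied to simulate an observation.
rng = np.random.default_rng(3)
ref = rng.random((7, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.3, -0.2, 1.0])
obs = ref @ R_true.T + t_true

R, t = rigid_transform(ref, obs)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```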


Lino Strangis VR live at KLANG in Rome: losing oneself in Virtual Reality (and giving life to other worlds)

Lino Strangis, one of the best representatives of computer art in Italy, has for years rigorously pursued his aesthetic idea of “intermediality”, which today encompasses 3D sculptures, Virtual Reality, and immersive installations, placing them within performative actions that “extend” and “expand” the boundaries of reality to the edge of science fiction. A case in point is the extraordinary and highly original sound-action performance Altre musiche per Altri mondi at KLANG in Rome on 4 April, tied to the forthcoming experimental electronic music album of the same name.

In this parallel reality, a totalizing digital scenographic constellation of sculptures shaped like 3D coral aggregates of notes escaped from the score to mineralize into other lives, totemic relics of a human memory, the performer (Veronica D’Auria) immerses and loses herself, wearing an Oculus Rift “mask” that makes her at once blind and a witness to the surreal visions she offers the audience.

She is inside forms in transformation, a prisoner of dreams/nightmares shaped live in someone else’s digital firmament: her body becomes a sign within other digital signs, chasing liquid sounds. Nothing happens before the epiphany of putting on the Oculus mask, in whose cavities lie fantastic worlds created live by the “shaper” Lino Strangis. Virtual Reality as a theatrical platform and, at the same time, a mask that evokes worlds in the making. It is the materialization of avant-garde utopias in which, according to the author, sound and image sublimate each other, defining complex dramaturgical territories, improvised and at every moment renewed and reborn out of emptiness and light.

Strangis describes the central role of music (and sound) in the performance, the true gravitational centre of all the arts involved: “These are musical scores for free improvisation, through which I have tried to pursue that line of research in which musical writing becomes something else, other arts. I am very interested in pushing sound and music performance to the point of founding a scene that recomposes sonic dramaturgies with those of gesture, light and vision, in a minimal, present-day version of the total artwork.”

The performance at KLANG anticipates a series of events, meetings and book presentations in which Strangis’s work will be at the centre of many multisensory activities and experiences for the public, coming soon to MACRO in Rome.

The author’s redesigned website is here

Interview with Rosa Sanchez and Alain Baumann (Koniclab) about theatre and technologies. The costs of media theatre.

Interview with KONICLAB

Kònic thtr is a Barcelona-based artistic platform focusing on contemporary creation at the border between art and new technologies. Its main centre of activity is the application of interactive technology to artistic projects, for which Kònic thtr is internationally renowned. Their work has been shown in Spain and across Europe, America, Asia and Africa. Through the research processes undertaken since the beginning of the 1990s, the company has developed a unique and personal language in this field of contemporary creation.

Rosa Sánchez: multidisciplinary and multimedia artist, performer and choreographer; artistic director and co-founder of Konic thtr. Alain Baumann: musician and multimedia artist; in charge of the interactive systems used by Konic thtr.

1. When you start an artistic multimedia theatre project, do you secure the necessary budget in advance, through residencies or external funding from theatres, or do you usually self-produce? And what, as an example, are the real costs of your technological productions?

KONIC: Depending on the project we have in mind and the availability of funding, we work with different timings and budgets. Over the (many) years that we have dedicated to theatre and technology, we have acquired skills and knowledge that allow us to produce works with a very small team of four or five collaborators and to self-produce when the economy is not buoyant. This means that we can do small-scale projects, with a limited amount of pre-booking, for around 20-25K€. On the other hand, we can do large-scale projects with a larger team of people participating in the company. When we work with technology that is not readily available, such as our recent works with high-speed networks, part of the budget is covered by the technological partner, who provides the high-bandwidth connection and the team to operate it. This is an interesting way for us to work: it means we collaborate and exchange with engineers, and at the same time they support the work so that the budget for the piece stays reasonable.

2. In the case of a “call” for producing multimedia theatre, whether to stimulate the work of new authors or to create a new work, what do you think is an adequate amount for a theatre to allocate?

This is a difficult question. It depends on the type of residency, and whether it includes a technical team at one’s disposal to do research on technologies, video, light, etc. The main problem is not so much the amount as the precarity of the work in general. It is becoming more and more difficult (especially in Spain, but we feel it is a general problem) to find co-producers to make good productions. The precarity also affects the residencies: many of the spaces that offer residencies have very limited funding, and rely more on the space and knowledge they can offer the artist than on their ability to remunerate the research taking place in their centre.

3. Can the residency system be useful for the creation of multimedia theatre? What does a residency imply for this kind of theatre, and in that case, what should a theatre or festival offer as a residency for an entire crew or company?

We think that residencies should be oriented towards research: the part of the development of the creative work, the pre-production, that is not covered by productions. Supporting pre-production processes is what we would expect from residency centres, especially when the work involves technology. This should also help to generate a cultural fabric, giving artists the tools to develop new ideas and practices and connecting them to other artists.

4. Does the fact that funding is always so low, as we observe in this period, not perhaps oblige the artist to break the work up into so many segments that it risks losing continuity and novelty? How can we steer theatrical cultural policy towards greater investment, making theatres understand the complexity (and costs) of this theatre?

Even if this is not the best condition, creating a work in a modular way is possible and can be interesting, as long as the work is coherent. As we work with diverse media, the various facets of the piece evolve over time as they get developed, even if in segments. The problem arises when one is obliged to work this way because of poor financing, so that it is not a choice but the only alternative.

In these times of austerity, an additional problem for unconventional stage proposals (such as those that rely on technology) is that they are more affected by funding cuts than more conventional theatre. Producers and exhibitors are wary of taking “risks” (yes, they see this kind of proposal as a risk…), and therefore even less funding goes to these proposals.

We have to convince the theatres that they need to show technological proposals, and to make the ideas these proposals convey accessible to the audience. From our point of view, it is important to focus on the content, which is what theatre is about. The collectives working in this field are bringing content to the stage through innovation: content related to our technological era, using languages that make culture a living and evolving entity.

The cost is inherently related to the design of the work. It is difficult to evaluate, but the notion that a piece relying on technology costs more than conventional theatre is possibly no longer true. New professionals specializing in technology for the stage are appearing, and many software and hardware solutions that would have been very expensive only a few years ago, because they had to be developed specifically, are now readily available.

5. Is there an ideal formula for creating this kind of show? What situation do you know that would correspond to a kind of “good practice” (residential or production) linked to the technological theater?

Our best experiences, both in production and in residency, were in fact initiatives from technological partners rather than from theatres. Somehow there is more interest among people working in technological research to develop cultural content that lets them test their technology in a context that also gives them visibility, than there is among theatres in introducing technology into their programming. There are examples of using high-speed internet to transmit opera in real time, or to give masterclasses with a teacher in one city and the students in another, or of using 360º cameras to film opera, etc.

So these companies or research centers are there and have some interest in showing to culture what their technology can do. They have the skills and the equipment available and if you have the opportunity to collaborate with them, they can be very open to new ideas.

A research agreement to collaborate directly with technology specialists and/or research centres is, in our experience, a way to explore new formats and get valuable feedback on the way we use technology. Within such agreements we have had the opportunity to collaborate, for instance, with the artificial intelligence department of the Higher Council for Scientific Research in Spain, developing several projects in which we used artificial intelligence for pattern recognition with sensors worn by dancers. More recently, we have been working with specialists in high-bandwidth internet, whose work normally consists of connecting scientific research centres around the world with extremely fast connections. Working with them to develop artistic projects is very challenging, because neither they nor we really know what the outcome will be. We have successfully shown the results of such a collaboration in theatres where part of the audience was interested in the technology and the other part in the artistic content: the project Near in the Distance, shown in Vienna (2015) and in Linz (2017). There needs to be mutual trust between the teams, and that is a good start for making good projects.

6. In your opinion, is there any new technology that has not yet been explored and that would be useful to a “new format” of technological theater? For example, Robotics or AI?

Artists are very curious people, and probably all technologies have been explored! Many artists feel the need to explore the possibilities that are infusing our everyday life, but in our opinion it will take some time before these elements are fully understood and the technology is made available to artists, so that new approaches and new formats really start to happen.

We cannot compete with technology in order to get the attention of an audience that is nowadays immersed in social networks. These social networks have at their disposal the very latest development in big data analysis and artificial intelligence software that they constantly adapt and develop in order to offer the best experience to their audience. In some ways they compete with theatre, and from our perspective we need to preserve the important part of theatre which is the contents. The contents that can be brought to the stage by using contemporary technologies are where we need to focus as artists.

7. Does the fact that few productions or groups propose innovative forms of theatrical narration create a limitation for a theoretical analysis of the phenomenon of so-called “intermediality”?

From our point of view, innovation is created by specific projects that differ from others, with proposals that may be a little ahead of their time and bring something qualitatively different, linked to the concept of innovation. There are never many such projects, but they serve as models. These models are the ones that can be analyzed and studied to differentiate them from others. They may create a trend or not, but they are unique and can be studied.

It is true that, with the lack of funding, such works have become harder to create, since they require longer research times, and that, given the lack of opportunities to show transmedial works in theatres, many artists who dedicated part of their practice to new media have opted to spend less time on this type of show. There is less production; at the same time, younger artists are introducing technology into their work in a more informal manner, using readily available technologies on stage, and in our opinion this is positive.

Trevor Paglen’s Sight machine with Kronos Quartet and Obscura Digital

Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. Among his chief concerns are learning how to see the historical moment we live in and developing the means to imagine alternative futures. Paglen’s work has been shown in one-person exhibitions at Vienna Secession, Eli & Edythe Broad Art Museum, Van Abbe Museum, Frankfurter Kunstverein, and Protocinema Istanbul, and in group exhibitions at the Metropolitan Museum of Art, the San Francisco Museum of Modern Art, Tate Modern, and numerous other venues. He has launched an artwork into distant orbit around Earth in collaboration with Creative Time and MIT, contributed research and cinematography to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture for the exclusion zone in Fukushima, Japan.

The Cantor Center for Visual Arts at Stanford University came to OBSCURA DIGITAL and proposed a collaboration with artist Trevor Paglen, whose work addresses topics like government secrecy and surveillance, exposing the vast apparatus of machines, systems and algorithms that monitor virtually every aspect of our lives. Paglen’s “Sight Machine” project would demonstrate to a live audience how machines “see” the world — in this case, a performance by the renowned Kronos Quartet.

Obscura Digital worked with Paglen’s team to develop the computer and video systems to take a live video feed of the string quartet’s performance, run it through actual off-the-shelf artificial intelligence surveillance algorithms (over a dozen of them in total), and project what the AIs see and how they interpret it onto a screen above the musicians.

These AIs — whether for facial recognition, object identification or threat detection — are designed to communicate with their machine counterparts, not to provide human-readable output. Making that possible in realtime required Obscura’s systems engineers to maximize throughput in a Herculean research and development effort.




KONIC THTR in Belgrade with #14 Skyline: when (video)dance explores life (and philosophy).

Barcelona’s Konic, that is Alain Baumann and Rosa Sanchez, among the protagonists of the Catalan digital scene, have over their long and distinguished international career explored every corner of technology and deposited it with loving care on the stage, giving it a form that is magical and surprising, friendly and profound at once. From interactive art to videomapping to telematic dance (of which they are the undisputed founders), they have experimented with every kind of technology, disseminating their artistic proposals around the world. Not by chance, they are among the most sought-after artists for giant cross-border EU-funded projects, such as the famous IAM project that involved them, together with the Municipality of Alghero, from 2012 to 2015 in a complex artistic proposal of augmented reality and videomapping for cultural heritage in Lebanon, Egypt, Palestine and Tunisia. Recently their telematic project Espai No tàctil had a physical staging in Barcelona with a creative live telematic link to Santiago de Chile and Strasbourg. For Konic the network is not just communication but a creative extension of the show.

Each of their theatrical or digital art works is a reflection on the human and on the transformation of the individual through technologies, a transformation that can broaden our horizons immeasurably, offering free and unusual perspectives, but only if we do not let ourselves be “played” by technologies, that is, subjected to the ongoing manipulations of science and technology. It is worth recalling Konic’s theorization, at the height of the videomapping craze, of the meaning of a “mediaturgy” of mapping.

Back from Bangladesh, where they had been invited to a meeting on Cultural Transformation in Digital Ecosystems in Dhaka, we met them in July at the IFTR Theatre World Congress in Belgrade where, after a lecture on their work in the crowded general panels, they presented a special evening with their most recent interactive dance and network performance, #14 Skyline.

The show maintains an unusual, almost ritual relationship with the audience: not by chance, it is music, poetry and song that, in the prologue, draw the spectator’s ear and gaze in an intimate, almost whispered way, leading them gently into theatrical material that unfolds through abstract paths, visual suggestions, fragments of words and immersive sound atmospheres. All the technology is live: managed, manipulated, recreated and projected in real time.

There are three stations: a stage set rising like a truncated, spiralling tower of perfectly videomapped metal slats, inside which Rosa Sanchez moves, interacting with sounds and images; a second with a small table where the performer “operates” in front of a mini-projector, creating video masks for her face; and a third that is the action space of Alain Baumann, present not only as a “technician” but also as a performer, photographing, filming and manipulating images with a mobile phone, projecting them onto the backdrops and continually changing the “skyline” of the stage.

Each moment of the performance addresses a specific sense, but it is the body that “geolocates” the coordinates for its creative (and interactive) immersion in the spatial whole. Konic’s show seems to embody perfectly Maurice Merleau-Ponty’s notion of a “theory of the body as a theory of perception”: our existence as a primarily and unconditionally spatial experience that lives, connected to its corporeality, the image-space surrounding us. The forms experienced in this fragment of inspired digital dance definitively unite the human being with their environment. A true “phenomenological approach” that allows us to continually re-trace reality, adapting it to the images of our experience, images that accumulate and merge even with the images of our consciousness.

The world we perceive in this show is made of references to art (from the tangles of Tatlin’s Cubo-Futurist sculptures, to abstract Futurist fragments, to glitch art), but the real theme is precisely the link between consciousness and the reality around us. The show seems to suggest what the city and its “skyline” could become if we videomapped it with our inner eye, overlaying it with layers of colourful, inhabited, lived-in squares, scanning not only the physical surface of the buildings but also our memory, our psychological interiority.

The guiding thread, which is also the show’s central theme, is Rosa Sanchez’s magnificent prologue, which invites us to “hear” the inaudible and give form to thoughts, a moment that is also a homage to the best avant-garde art. The dance captures the invisible, the sensors pick up yearnings for transformation, generating images and multiple identities, and our face voluntarily wears masks of what we are or would like to be, turning us, in the Instagrammism of our times, into filter-images, magnificent digital zombies. The theatre is at once a fragmented sensory space and a space of shared connection, a digital skyline that contains us with all our desires, dreams and passions.

#14 Skyline is a show of great expressive force and conceptual density, a concentrate of all the technologies that today, as we well know, sit in some app on our phones. Yet it carries the fundamental message that, in a world lived constantly on screens, art (and life) dwell instead in the liminal space of error, latency and imperfection, exactly what the machine cannot and does not know how to program.

Anna Maria Monteverdi

Upcoming dates where #14Skyline can be seen:

7 SEPTEMBER : 4th ‘Jornadas Escena Digital’. Barcelona. Spain

28 SEPTEMBER : XV Int. Festival VIDEOMOVIMIENTO / Cuerpo Multimedia. Bogota. Colombia

20 OCTOBER : Festival IDN+ / Mercat de les Flors. Extended version. Barcelona.


Call for Participation: Mask and Avatar: Adventures in Live Performance Capture


Mask and Avatar


adventures in live performance capture

  University of Warwick 
in association with Warwick Arts Centre

Millburn House, University of Warwick,

Millburn Hill Road, Coventry CV4 7HS 
23rd March 2018, 1pm-6pm


We warmly invite you to join us for this day of workshop and exchange, exploring the development and presentation of theatre and performance in and through digital technologies.


Part of a larger research project undertaken by Labex Arts-H2H and Université Paris 8 entitled La scène augmenté (The Augmented Stage), Mask and Avatar investigates the principles, solutions and applications of motion capture (now established in film, TV and games production, but still in its infancy with regard to live performance). It takes the research of performance and digital technologies into a specific engagement with the Perception Neuron motion capture system – an accessible, geo-spatial (non-camera-based) system. The project has involved collaboration between computer engineers, software programmers, digital designers and theatre makers; and interaction between avatars and masked actors, bringing together old and new technologies of corporeal representation.


This Engagement Day, supported by Warwick’s Industrial Strategy fund, presents diverse performance outcomes as a way of exemplifying the possibilities of this system.



Companies, technologists, artists and researchers who are interested in sharing demos of their work are invited to apply for a slot in the industry showcase. The format is flexible (our starting point is something like a trade fair), and we welcome ideas from those working with masks, motion capture technologies, virtual/augmented reality, and experimental or multimedia performance practice.


Please submit a one-page outline of your work and what you would like to show/share, along with a 100-word biog of you/your company, by 2 March 2018. Please bear in mind that all demos will need to be set up on the morning of 23 March in order to be ready to share in the afternoon, and are likely to be in a shared studio space (although we do have room to manoeuvre).

To submit a proposal, or for further information, please email us at


The day will include two short (c.25-min) performances and a post-show discussion:


LA VIE EN ROSE. From clinic to eternity


by Boris Dymny and Giulia Filacanapa


Direction – Giulia Filacanapa


Masks – Stefano Perocco di Meduna


Performers – Ethel De Sousa, Boris Dymny, Léandre Ruiz


AGAMEMNON REDUX. A mask and mocap experiment in three scenes


From Agamemnon by Aeschylus


Direction – Andy Lavender


Movement direction – Ita O’Brien


Technical Direction – Tim White


Music – Théo Semet


Performers – Alexandra Beraldin, Cécile Roqué-Alsina



Lighting design – Ian O’Donoghue

Production and Digital Programming – Georges Gagneré




If you would like to attend, please go to this link to register for the event:




Project Directors:
Georges Gagneré (Université Paris 8)

Andy Lavender (University of Warwick)

Tim White (University of Warwick)



David Coates
Nese Ceren Tosun
Ellie Chadwick

La scène-image. Excerpt from Clarisse Bardiot, Arts de la scène et technologies numériques : les digital performances, hypermedia publication

La scène-image (a history of moving images on stage). Clarisse Bardiot

Excerpt from Clarisse Bardiot, Arts de la scène et technologies numériques : les digital performances, hypermedia publication, Les Basiques collection, Leonardo/Olats, June 2013. Series edited by Annick Bureaud.

September 18, 2014

Since the end of the nineteenth century, images have been invading the stage space, to the point of becoming a very widespread phenomenon today. Unlike the "screens on stage" (to borrow the title of a volume edited by Béatrice Picon-Vallin), the scène-image (stage-as-image) is the conversion of the stage into an image, as if stage and screen were superimposed to the point of merging. The actors are immersed in screenless images, in 3D, by means of various optical stratagems, machines of vision. Digital technology, with interactivity, makes it possible to act on these images and to modify them, in real time or in deferred time. In the scène-image, images become habitable spaces.

Thus, it is no longer a matter of changing sets one after another, but of creating a space in perpetual motion, supple and malleable, where the image responds instantly to the action and offers a kind of motionless journey, directed either by a stage manager or directly by the actor. Yet however outsized it may be, the screen remains a 2D object inserted into a 3D space, the stage. The use of virtual reality, combined with optical stratagems that provide stereoscopic vision, makes it possible to bridge these two types of space.

Certain stagings by Mark Reaney, Denis Marleau, Teatro Cinema, Victor Pilon and Michel Lemieux, Adrien Mondot, Joris Mathieu, and the Théâtre de Complicité, or choreographies by Merce Cunningham, Trisha Brown and Dumb Type (to cite only a few examples among many), are characteristic of this approach. These productions belong to an old phenomenon: turning the stage into an image. It dates back to the birth of perspective, which unified space and, in doing so, proposed optical laws favouring the stage-as-tableau.

In these attempts to give flesh to images, to give the illusion of their three-dimensionality, the scenic cube and its distance from the auditorium are not called into question. For the artifice to work, the spectator must stand at a distance from the stage, at the right distance. What created an illusion yesterday, provoking astonishment or wonder, sometimes appears today as a rather poor artifice. In this fascination with the effects of illusion, it is always only a matter of adapting contemporary techniques to the spectator's new modes of perception. Below I examine the main devices used in digital performances: the panoramic screen, the scrim, Pepper's ghost, 3D image projection, and virtual reality.

1. The panoramic screen

In some devices, we observe in the performing arts the same phenomenon as in interactive installations: the pursuit of immersion by enlarging the screen as much as possible so that it fills the spectator's field of vision.

  • A hemispherical cyclorama forms the basis of the scenography of [Or] by the Japanese company Dumb Type (1997). The bodies that come to disturb this immaculately white space seem pinned inside the apparatus. The eye cannot escape, all the more so as it is subjected to retinal-persistence effects triggered by extremely powerful flashes of light.

Dumb Type, [Or] (1997)

  • Almost all the works of Granular Synthesis resort to this invasion of the scopic field, multiplying screens often arranged over more than 120°. Sonic and visual immersion is heightened further in 360 (2002): spectators are placed at the centre of a circle covered with screens over 360°, as in a panorama, surrounded by very powerful loudspeakers.

Granular Synthesis, 360 (2002)

  • Ciels, the final piece of Wajdi Mouawad's tetralogy Le Sang des promesses, premiered at the Festival d'Avignon in 2009, is an immersive device. Spectators are placed at the centre of a white rectangular space whose four "walls" are projection screens. Seated on a rotating stool, each spectator chooses their own point of view in this 360° theatre.

Wajdi Mouawad, Le Sang des promesses (2009)

2. The scrim

Projecting images onto a scrim at the front of the stage, so that the gaze must pass through the screen to perceive what is happening on stage, creates an ambiguous space between 2D and 3D. It is the simplest procedure for trying to carry the projected image from the second to the third dimension by creating an illusion of depth.

• This procedure was used to great effect by Merce Cunningham in 1999 for one of his most celebrated works, BIPED, created in collaboration with Paul Kaiser and Shelley Eshkar (OpenEndedGroup). BIPED is the work that brought dance-and-technology creation to the attention of the general public. Small white balls were placed on the dancers' joints and filmed by ten cameras. Software then reconstructed each movement from the white dots, so that it could be transposed onto a virtual skeleton or onto 3D objects. The movements captured from the dancers come to inhabit both abstract forms and anthropomorphic figures. The resulting sequences, from fifteen seconds to four minutes long, were then assembled at random. True to his method, in order to keep choreography, set and music completely autonomous, Cunningham brought them together only at the last moment. Projected onto a scrim at the front of the stage, in front of the dancers, the silhouettes seem to move with them, in the same space. Their choreographic score offers a counterpoint to that of the dancers. Denser, livelier or more fragile depending on the lighting, Kaiser and Eshkar's figures emerge, grow and vanish, like digital shadows, evanescent traces, memories of bodies of flesh.

Merce Cunningham, BIPED (1999)
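The marker-based pipeline described above (white markers on the joints, filmed by several cameras, then transposed onto a virtual skeleton) can be sketched in miniature. The sketch below is an illustrative assumption, not the actual BIPED software: it reduces the ten-camera setup to two idealized orthogonal views, and the joint names are invented.

```python
# Illustrative sketch of marker-based motion capture reconstruction.
# Assumes two idealized orthogonal views: a front view giving (x, y)
# per marker and a side view giving (z, y). Joint names are hypothetical.

def reconstruct_marker(front_xy, side_zy):
    """Merge a front view (x, y) and a side view (z, y) into one 3D point.
    The y coordinate, seen by both cameras, is averaged to smooth noise."""
    x, y1 = front_xy
    z, y2 = side_zy
    return (x, (y1 + y2) / 2.0, z)

def build_skeleton(front_view, side_view):
    """Map each named joint marker, seen in both views, to a 3D position."""
    return {joint: reconstruct_marker(front_view[joint], side_view[joint])
            for joint in front_view}

if __name__ == "__main__":
    front = {"hip": (0.0, 1.0), "knee": (0.1, 0.5)}   # (x, y) per joint
    side = {"hip": (0.2, 1.0), "knee": (0.15, 0.52)}  # (z, y) per joint
    print(build_skeleton(front, side))
```

In a real system such as the one BIPED relied on, calibrated perspective cameras and triangulation replace the orthogonal-view shortcut, but the principle is the same: 2D marker observations are fused into 3D joint positions that drive a virtual figure.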

• The director, computer scientist and juggler Adrien Mondot, in his first show, Convergence 1.0 (2005), juggles with real balls and with virtual balls projected onto a scrim at the front of the stage. It is often hard to tell them apart, except when the virtual balls' behaviour departs from the laws of gravity, or when they multiply endlessly in an impossible, dreamed-of juggling act.

Adrien Mondot, Convergence 1.0 (2005)

  • Thanks to a skilful play of superimposed opaque screens and scrims, the Chilean company Teatro Cinema (founded by former members of La Troppa) blends theatre and cinema, image and stage, in Sin Sangre (2007), so that it becomes impossible to tell what belongs to the real set and what to the virtual one. The actors, whose gestures and movements are timed with great precision, come to "inhabit" these images, these spaces that follow one another as if by magic: an old house, a bar-restaurant, a car driving along a motorway…

Teatro Cinema, Sin Sangre (2007)

3. Pepper's Ghost

Certain optical stratagems for creating floating, unsupported images have existed for centuries, such as the "optical theatres" of Father Athanasius Kircher in the seventeenth century. Built around mirrors, these stratagems are currently enjoying a revival. Such is the case of Pepper's Ghost, invented in the nineteenth century. A two-way mirror, tilted at 45°, is placed at the front of the stage. A scene invisible to the audience is reflected in this mirror: the image of the concealed scene seems to "float" behind the glass (what is represented is often a spectre), sharing the same space as the actors moving on stage. The screen disappears, at least in its usual form of a white surface receiving projected images.

Today the hidden scene is replaced by a video projector, which makes it possible to conjure up characters or imaginary creatures. Often, when "holograms" are mentioned in the theatre, what is actually at work is Pepper's Ghost. What a surprise it was for fans of the rapper Tupac to see him rise from the dead at a posthumous concert on 15 April 2012… Digital technology seems to be giving Pepper's Ghost a new lease of life, so numerous have the shows using this stratagem become in recent years. A few examples:

  • The Canadian directors Victor Pilon and Michel Lemieux (whose company is called 4d art) have made this device a scenographic principle that has become the trademark of their shows.

Lemieux Pilon 4D Art, La Belle et la Bête (2011)

• The French director Jean Lambert-Wild, in Orgia (2001), uses this method to give the impression that actors and marine organisms move in the same space.

Jean Lambert-Wild, Orgia (2001)

  • The company Haut et Court, directed by Joris Mathieu, has also used Pepper's Ghost, in Des Anges Mineurs (2006), Le Bardo (2010) and Ubik/Orbik (2011). This "optical theatre" procedure allows the director to question the links between illusion and reality, and to hold the real at a distance.

Compagnie Haut et Court, Des Anges Mineurs (2006)

4. Computer-generated images

The relative drop in the cost of producing computer-generated images is one of the reasons for their current proliferation on theatre and dance stages. The phenomenon began to spread in the mid-1990s. The interest of these images is that they can be given virtual camera movements (which bring them close to video games) and that they can react to the stage action (the performers' movements, for example) through interactive devices. When 3D computer-generated images are projected simultaneously over the full height of the stage house and onto the floor, and when they are accompanied by suitably designed lighting, they can create sensations of breakthrough, of plunging, even of vertigo:

  • Among the examples, consider the computer-generated images (created by Luke Halls) in The Master and Margarita, staged by the English company Théâtre de Complicité in the cour d'honneur of the Palais des Papes at the 2012 Festival d'Avignon. Over the full height of the wall of the Palais des Papes, the projection of maps of Moscow with a Google Earth-style zoom literally lets the audience plunge into the urban landscape. The show ends with the superimposition of a 3D reconstruction of the Palais des Papes onto the palace itself. When the stones of the virtual palace collapse, the real walls seem to disappear in favour of the realism of the projected illusion.

Théâtre de Complicité, The Master and Margarita (2011)

Other examples of shows integrating fixed or generative digital images:

Dumb Type, Voyage (2002)

Troika Ranch, 16 [R]evolutions (2006)

Adrien Mondot, Cinématique (2010)

5. 3D projection

One solution, still with the aim of giving a 3D view of a 2D set, is to equip the spectator with various prostheses, the best known being polarizing glasses. The spectator then has the sensation that the actors move in the same space as the represented sets, and that the space comes towards them. In the theatre, the use of these artefacts (in particular stereoscopy) is recent, even though we have long known how to create volumetric images through the wearing of glasses.

Unlike the previous devices, which maintain a very clear separation between stage and auditorium, the use of glasses and 3D projection brings the set closer to the spectator, as if it floated between stage and house. The feeling of immersion in the scene is thereby reinforced.

Several directors and choreographers have experimented with it:

• George Coates, in 1997, for 20/20 Blake.

• Robert Wilson, in 1998, for an actorless opera by Philip Glass, Monsters of Grace. If Robert Wilson used stereoscopy here (image projections are very rare in his shows), it is no doubt because it allowed him to create a visual impression identical to that of his diffuse lighting, which produces a luminous halo.

• La Fura dels Baus for Obs in 2000.

• Jean Lambert-Wild for Le recours aux forêts in 2009.

Jean Lambert-Wild, Le recours aux forêts (2009)

• Wayne McGregor for the opera Twice Through the Heart in 2011.

  • In Perspectives, le temps de voir (2011), Kitsou Dubois presents a series of video installations staging different spaces and projection techniques, including stereoscopic video and fish-eye. The installations show images of dancers moving in weightlessness during parabolic flights, filmed with 3D/stereoscopic cameras.

Kitsou Dubois, Perspectives, le temps de voir (2011)

3D glasses find an extension in virtual-reality headsets known as HMDs (Head Mounted Displays). Generally used for video games, these headsets have two screens placed in front of the eyes, onto which images can be projected while the wearer still sees through them. Combined with tracking systems that couple the scrolling of the images to the movement of the head, this device produces a strong feeling of immersion. Few directors have yet used such devices. The most representative are Yacov Sharir, Mark Reaney, Otávio Donasci and Eric Joris:

• As early as 1994, the choreographer Yacov Sharir, in collaboration with Diana Gromala, created Dancing with the Virtual Dervish, an installation for a virtual-reality headset. The visitor is immersed in a body that they can enter and move through. The device is also used in performances.

Yacov Sharir, in collaboration with Diana Gromala, Dancing with the Virtual Dervish (1994)

• Mark Reaney, professor of scenography at the University of Kansas, has been exploring the use of virtual reality in the theatre since the early 1990s. At first he used virtual reality to pre-visualize scenographic models. Projected at full scale, these models turned out to be interactive, supple and malleable sets, visible in 3D by wearing glasses or HMD headsets. For the university theatre, Mark Reaney has created several virtual-reality scenographies with his students.

• With VideoCriatura Imersiva (Immersive Video-Creature), the Brazilian artist Otávio Donasci offers the spectator an intimate tête-à-tête with a woman wearing a helmet with two openings. The spectator places their face in the other opening of the helmet and is "teleported" to another place (the project includes various emblematic locations in Brazil, such as a beach, etc.). The image shown inside the helmet is a video walk to which the spectator is invited by a young woman. But during the journey the spectator perceives real sensations (water on the hands, wind, etc.), produced in the physical space around them by assistants they had not noticed before donning the helmet. Over the course of the performance, the spectator is thus led to travel between a virtual universe and real sensations.

Otávio Donasci, VideoCriatura Imersiva (2005)


  • The European director who has probably explored the links between virtual reality and spectator immersion the furthest is the Flemish artist Eric Joris with his company CREW. From the first single-spectator device for Crash in 2004 to a device for several simultaneous spectators (Terra Nova, 2011), Eric Joris has continually refined an immersive interface worn by the "immersant": virtual-reality headset, headphones, camera, laptop strapped into a backpack, and costume elements. Thus harnessed, the immersant is led through a narrative journey. Despite the interest of the experiment, the results, both technological and dramaturgical, are not yet conclusive.

CREW, Terra Nova (2011)

6. Other stratagems

Other procedures exist for making screens disappear from the stage, devised case by case for a specific work, such as Denis Marleau's 2002 staging of Maeterlinck's Les Aveugles. Following a reflection on the double and the spectral that he had been pursuing for several years, not necessarily through technology, Marleau decided to represent the play's twelve blind characters with virtual figures. Here the image is not manipulated in real time but pre-recorded and played back at will. There are no longer any actors on stage, only shadows: filmed faces projected onto human-scale masks, which serve as volumetric screens and lend the figures a troubling, three-dimensional materiality and a quasi-human presence. In this "technological phantasmagoria" (a subtitle given in reference to Robertson's fantascope, a light-projection technique descended from the magic lantern whose purpose was to make ghosts appear), the confusion between flesh and its image, between the character and its reflection, is carried to its height.

Denis Marleau, Les Aveugles (2002)

Digital performance: bibliography

Theatre Performance and Technology: The Development of Scenography in the Twentieth Century (Theatre and Performance Practices), Christopher Baugh, 2005

Throughout history, all great theatre cultures have used technology as an important part of performance: as a means to shift and change scenic appearance, and as visual rhetoric, spectacle and show. Revolutionary scientific thinking in the twentieth century, alongside the technology to use electric light in performance, served to underpin the ideas of Appia, Craig, Meyerhold, Terence Gray, Caspar Neher and Josef Svoboda. Distinctive though their ideas remain, they were unified in their firm belief that new forms of performance would only be achievable through a detailed and close study of artistic resources and technologies.

Their practices and understandings have served both to significantly expand and to create distinctive new connections and possibilities between technology, scenography and performance. In this stimulating survey, Christopher Baugh explores the ways in which development and change in technology have been reflected in scenography, and considers how change in scenographic identity has impacted upon the place and meaning of performance.

Theatre and the Digital, Bill Blake, 2014

This question opens up a rich seam of provocative and original thinking about the uses of new media in theatre, about new forms of cultural practice and artistic innovation, and about the widening purposes of the theatre’s cultural project in a changing digital world. Through detailed case-studies on the work of key international theatre companies such as the Elevator Repair Service and The Mission Business, Bill Blake explores how the digital is providing new scope for how we think about the theatre, as well as how the theatre in turn is challenging how we might relate to the digital.

Performance and Technology: Practices of Virtual Embodiment and Interactivity, 2006

Staging the Screen: The Use of Film and Video in Theatre (Theatre and Performance Practices), 2007

Immersive Theatres: Intimacy and Immediacy in Contemporary Performance, 2013

ESPAI NO TÀCTIL (interactive and telematic performance Site Specific-INFLUX) by KONIC

A concert-dance-performance with a variable, site-specific configuration. The project explores the complementarity and the relations between the tactile and the non-tactile: body space = tactile / psychological space = non-tactile. A danced encounter between characters who will never be able to touch each other, yet can communicate from a distance. An interwoven flow of movement, music and images produced live, inviting the spectator to immerse themselves in a poetics of fragile bodies and of ephemeral, suggestive images.

ESPAI NO TÀCTIL (Site Specific for INFLUX)

Rosa Sánchez: choreography / stage direction
Alain Baumann: technological direction and interactive development
Rosa Sánchez: performance
Alain Baumann: live music and visuals
Rosa Sánchez and Adolf Alcañiz: image
Rosa Sánchez: scenography concept
Adolf Alcañiz and Amir Gazit: scenography construction
Anna Candela: coordination and communication
Koniclab: production
With the support of:
Fàbrica de Creació Fabra i Coats, Barcelona
ICUB. Institut de Cultura, Ajuntament de Barcelona
Departament de Cultura, Generalitat de Catalunya
Ministry of Education, Culture and Sport

Antic Teatre
Verdaguer i Callís, 12. 08003 Barcelona

Admission: €6
buy tickets

INITI, “Anticodes” by Braňo Mazúch

Anticodes is an interactive visual and dance production based on Václav Havel's collection of experimental poetry of the same name from the 1960s. In Anticodes, Laterna Magika's classic, characteristic use of film footage is replaced by projection and sound from live sources. The production introduces real-time animation to Laterna Magika, detecting people or objects in pre-determined zones. Here are some scene previews introducing these interactive principles. Technique: multitouch detection.
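Zone detection of this kind is commonly built on background subtraction: each live camera frame is compared against a reference image, and a zone is considered occupied when enough pixels inside it have changed. A minimal sketch follows; the grid size, threshold and zone layout are illustrative assumptions, not details of Dan Gregor's system for Anticodes.

```python
# Minimal sketch of zone-triggered interaction: compare a live grayscale
# frame against a reference background and report which pre-defined
# zones contain enough changed pixels to count as "occupied".
# Threshold values and zone layout are illustrative assumptions.

def occupied_zones(background, frame, zones, threshold=30, min_changed=4):
    """background, frame: 2D lists of grayscale pixel values (0-255).
    zones: dict name -> (x0, y0, x1, y1) rectangle in pixel coordinates.
    A zone fires when at least `min_changed` pixels differ by > threshold."""
    active = []
    for name, (x0, y0, x1, y1) in zones.items():
        changed = sum(
            1
            for y in range(y0, y1)
            for x in range(x0, x1)
            if abs(frame[y][x] - background[y][x]) > threshold
        )
        if changed >= min_changed:
            active.append(name)
    return active

if __name__ == "__main__":
    w, h = 8, 8
    bg = [[0] * w for _ in range(h)]
    live = [row[:] for row in bg]
    for y in range(3):             # a "performer" enters the top-left corner
        for x in range(3):
            live[y][x] = 200
    zones = {"left": (0, 0, 4, 8), "right": (4, 0, 8, 8)}
    print(occupied_zones(bg, live, zones))  # → ['left']
```

In production systems this comparison is typically done per frame with a vision library and a camera feed, with each firing zone mapped to an animation or sound cue; the principle, however, is the one shown above.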

Concept: Braňo Mazúch, Dan Gregor
Stage director: Braňo Mazúch
Interactive projections: Dan Gregor
Choreography: Věra Ondrašíková
Music: Filip Míšek, Michal Nejtek
Sound-design: Stanislav Abrahám
Light-design: Patrik Sedlák
Software programing: Jakub Koníček
Costumes: Kristýna Javůrková

The making of


TALOS by Arkadi Zaides, VIE FESTIVAL. A keynote presentation that examines the relation between movement, innovative technologies and the future of borders

Europe is closing its borders, and in the surrounding areas a new kind of choreography is emerging. Far from putting an end to migratory movements, border closures generate new movements.
TALOS, the new solo by the Israeli choreographer Arkadi Zaides, investigates the future of borders. Its starting point is TALOS, a technological project created by the European Union: a mobile robotic system intended to detect and prevent illegal border crossings.
With the help of interviews, filmed material and documents, Zaides and his interdisciplinary research team reflect on the consequences and the ethical questions raised by the TALOS initiative.


Arkadi Zaides

Arkadi Zaides is a choreographer born in Belarus in 1979 who immigrated to Israel in 1990. He currently works in Israel and in Europe. Zaides holds a degree from the Amsterdam Master of Choreography. He has danced with several Israeli companies, including the Batsheva Dance Company and the Yasmeen Godder Dance Group. In 2004 he began an independent career. Through his works he addresses social and political issues, first focusing on the Israeli/Palestinian context and now on the European one. Zaides's artistic practice aims to spark critical debate, focusing on the body as the medium through which political and social problems are experienced most intensely.

What kind of choreography arises in the proximity of borders? Which strategies of restriction define movement? Zaides' new work sets out to explore a dynamic system of action and reaction, limitation and transgression, stasis and mobility. The work is a response to TALOS, an EU-funded initiative that designed an advanced system for protecting European land borders. TALOS was a collaborative project between ten countries that was officially conducted between 2008 and 2013. It resulted in a demonstration of a surveillance system that could be deployed in a matter of hours at any location. The system included mobile, semi-autonomous robots that patrol border areas and gain physical and performative presence. For the TALOS project, Zaides put together a team of choreographers, dramaturges, video artists, and robotics experts. Since 2016, the team has been developing a keynote presentation that references the original project.

Arkadi Zaides, biography

DandyPunk Live immersive theater at Sundance Festival 2017

Trailer for a live, immersive installation at Sundance New Frontier 2017 by Dandypunk, Darin Basile and Jo Cattell

There was an amazing projection-mapped, immersive theater piece at Sundance this year by Heartcorps called “Riders of the Storyboard.”

Trained street performers interacted with virtual projection-mapped 2D objects and, through sleight of hand, broke these flat objects into the third dimension as glowing 3D props. Fifteen people were packed into a small room with about half a dozen performers for a 13-minute show about 2D characters who interact with the performers, who play Alchemy of Light gods in the third dimension. It was an awe-inspiring performance, and the projection-mapping technology provided a shared augmented-reality experience. Heartcorps is proving out techniques with projection mapping that should also work well in the future of live performance and immersive theater designed for augmented-reality glasses.

Here is an interview with dandypunk, who talks about the process, its ritual inspiration, and the mixture of immersive theater and cutting-edge projection mapping, from the e-mag The Voices of Virtual Reality:

Instagram @dandypunk

Music – Рaдость Моя – Пей солнце

(dandypunk edit)

Interview with Maestro FABIO LUISI on Wagner's tetralogy directed by Robert Lepage, a MET production

The Orchestra and Chorus of the Maggio Musicale Fiorentino, conducted by Maestro Fabio Luisi, will perform Gustav Mahler's Symphony No. 2 on 5 July at 9:15 pm in Pistoia, from the stage in Piazza Duomo. We had met him in the splendid foyer of the Carlo Felice and asked him some questions about his collaboration with one of the greatest contemporary stage directors, Robert Lepage, on the Metropolitan Opera of New York's production of Wagner's RING. The interview will form part of the documentary currently being made on the theatre of Robert Lepage.

Fabio LUISI was born in Genoa and is Principal Conductor at the Metropolitan Opera in New York. He is currently also Music Director of the Zurich Opera.

Fabio Luisi won a Grammy Award for his interpretation of the last two operas of the Ring des Nibelungen: the complete cycle, released on DVD by Deutsche Grammophon, was considered the best opera recording of 2012. His extensive discography includes operas by Verdi, Salieri and Bellini; symphonic works by Honegger, Respighi and Liszt; music by Franz Schmidt and Richard Strauss; and an award-winning performance of Bruckner's Ninth Symphony. In 2015 he inaugurated the Philharmonia Zurich record label with music by Berlioz and Wagner, as well as Verdi's Rigoletto, recently joined by the rarely recorded original version of Bruckner's Eighth Symphony. His biography is extraordinarily rich in successes and honours; we refer readers to his website for more detail. We are pleased to recall that, in addition to conducting major international theatres and orchestras, Luisi is also music director of the Festival della Valle d'Itria and of the Accademia del Belcanto "Rodolfo Celletti" in Martina Franca, promoted and organized by the Fondazione Paolo Grassi, chaired by Prof. Franco Punzi with Rino Carrieri as director.

Beyond music, he cultivates another passion: creating artisanal perfumes, which he makes himself personally and whose sales go to finance the Luisi Academy for Music and Visual

Anna Monteverdi: Lepage e Wagner, teatro, musica e tecnologia: quanto a suo avviso l’innovazione tecnologica può contribuire a far conoscere non solo l’opera musicale ma anche il messaggio di opera d’arte totale di Wagner?

Fabio Luisi: L’opera lirica come del resto qualunque rappresentazione teatrale, è per definizione destinata a mutare nel tempo; questo cambiamento avviene normalmente durante la storia di questa opera d’arte che non è come un’opera d’arte figurativa a sé stante, chiusa nell’epoca in cui è stata concepita. Lepage è il testimone più congeniale di opera d’arte che muta nel tempo; in questo senso il suo approccio molto tecnologico alla tetralogia di Wagner è stato stimolante e interessante, e Wagner ne sarebbe stato piuttosto contento. 

Anna Monteverdi: Quale è stato il rapporto tra direzione musicale e direzione registica e quali sono stati i passaggi chiave di questa drammaturgia musicale?

Fabio Luisi: Lepage è una personalità straordinaria, sa esattamente come realizzare ciò che desidera e per me è stato molto educativo, lui è un uomo di teatro visionario con un accento su ciò che si deve vedere,  per noi musicisti l’aspetto visivo è secondario rispetto a quello musicale. Il suo modo di lavorare è estremamente rispettoso dei cantanti e della musica; non essendo musicista si è dovuto instaurare un dialogo tra noi e gli ho dato qualche suggerimento per quei momenti in cui bisognava sottolineare visivamente certi accenti come li abbiamo in musica e lui è stato comprensivo e ha accettato; tutto è stato condotto con una calma davvero inusuale nel nostro campo. I passaggi chiave sono stati quelli che richiedevano un colore e un carattere speciale reso visibile, in questo Lepage è stato straordinario, i cambiamenti di scena, i grandi “coup de théâtre” che ci sono anche in questa opera monumentale, sono stati risolti in maniera eccezionale e hanno provocato “pelle d’oca” a spettatori e anche agli esecutori. Lo sforzo tecnico è stato enorme, ed è stato una sfida per tutto il team della produzione perché non si erano mai trovati a mettere in pratica un’idea così tecnologica quale quella concepita da Lepage.

Anna Monteverdi:Qual è la Specificità musicale del Siegfried e come ha impostatola direzione musicale?

Fabio Luisi: Sigfried, se si considera la Tetralogia come una grande sinfonia  è lo scherzo; è estremamente vivace, è tra le quattro, a parte Das Reinhgold, che è un prologo complesso, quella che maggiormente si presta a soluzioni sceniche originali, ed è anche l’opera che presenta la maggior linearità di narrazione. In un cerro senso è quella più facile, ma date le proporzioni anche più difficile, che può presentare soluzioni sceniche più eclatanti: abbiamo questo clima da favola del bosco, con l’uccello che parla a Sigfried, dell’uccisione del drago, del risveglio della Valchiria, del bacio di Sigfried alla Valchiria. E questa idea di storia è quella che è probabilmente più facile da realizzare di tutta la tetralogia.

Anna Monteverdi: Considering Lepage's work in the field of technological theatre, were you not afraid that the machine would dominate the music?

Fabio Luisi: The machine was the predominant element of Lepage's entire vision throughout the fifteen hours of music of the Ring. It would be foolish to say we tried to hide the machine, because it was there and clearly visible. It was a central element of his staging; one may judge that positively or not, but it was there. Considering the proportions of this work, I would say the technological approach was absolutely justifiable, and personally I consider it more than valid in the visual and scenic representation, and also in the way it explains scenically what happens in the music. A good director is one who manages to communicate what happens in the text even to a spectator who does not know the language, and in this Lepage succeeded brilliantly; but honestly, the machine was visible and remained visible throughout all four operas.

Anna Monteverdi: In this staging Lepage has been described as both "traditionalist and modernist". How would you define him?

Fabio Luisi: The staging was more than legible, there is no doubt about that, and this is precisely why it attracted accusations of traditionalism: because it was extremely legible, with no interpretations on meta-levels. From the point of view of explanation and narration there were no problems at all.

Master in Advanced Interaction directed by Klaus Obermaier and Luis Fraguada IAAC, Institute for Advanced Architecture of Catalonia Barcelona / Spain

The Master in Advanced Interaction (MAI) is a unique opportunity for Designers, Visual and Performing Artists, Choreographers, Dancers, Architects, Interaction Designers, VJs and DJs, Sound Artists, Scenographers, and profiles from related backgrounds to explore creative uses of technology for experimental and practical purposes.

The course is aimed at developing and exhibiting projects which define meaningful interaction through novel technological solutions, performances and installations. The ambition of these projects goes well beyond digital media: they are communicated through software and hardware development, solid theoretical foundations, and prototypes completed in IAAC's digital fabrication laboratory. The theoretical basis of the course is to question how current technology can augment the agency and impact of all kinds of interactions around us.

Our learning-by-doing research integrates methods used in design, programming and the social sciences to produce projects, prototypes and products that will define the outer limits of what is possible to do imaginatively with technology today. Wearables, artificial intelligence, human-machine interaction and augmented environments are some of the key topics which form the agenda of the Master in Advanced Interaction. Students who attend the Master in Advanced Interaction join an international group, including faculty members, researchers and lecturers investigating critical issues facing modern society, with the aim of developing the skills necessary to implement practical solutions in diverse professional environments.

The Master in Advanced Interaction (MAI) is a nine-month program accredited by the Universitat Politècnica de Catalunya (UPC) with 75 ECTS credits. The MAI program is directed by Klaus Obermaier and Luis Fraguada.



The Institute for Advanced Architecture of Catalonia has evolved from an institution for questioning architecture and territory into a place where new architectures are conceived. There is a space between the built environment, the territories we inhabit, and the technology we confront that nowadays needs to be addressed. Therefore, after the successful pilot program in 2008, the Institute officially launches the Master in Advanced Interaction as a natural evolution of the domains it is looking to explore further.

Today we communicate and interact with smart devices, physical and virtual environments, the Internet of Things. User-generated content mixes with professional contributions. In our Age of Participation, mostly driven by social media and gaming but also by interactive arts and performances, passive recipients turn into active participants, becoming creative players. Interactive environments go beyond the passive reception by creating an immersive, communicative and social experience.

All fields of study and practice require the skills to make meaningful use of available and forthcoming technologies. This is mainly due to the increased adoption of technology in our daily lives. Data and Information now encompass a sort of Metadata Layer which crosses all aspects of our existence.

The Master in Advanced Interaction questions the limits of this contemporary technological phenomenon and prepares candidates to be the key actors capable of making connections between disciplines where none were possible, or even considered, before.

Digital Theatre: Teatime with me myself and I (Taiwan)

Very Theatre from Taiwan performs a video and mobile-phone interaction act.
From Click Festival, 16 May 2015

Teatime with Me, Myself and I is a theatrical performance directed by Chou Tung-Yen that rethinks and reframes the shift in the mode of spectating introduced by contemporary artists.

The work inherits the original themes – "mobile phone", "life full of screens" and "media world" – and features a quasi-improvisational performance jointly executed by Hong Chian-Sang and Inred Liang. It speaks about the modern-day reality of people becoming inseparable from smartphones, tablets and other mobile devices.

  • Very Theatre

    Founded by interdisciplinary artist Chou Tung-Yen, with sponsorship from Very Mainstream Multimedia Ltd, its members are equipped with expertise in theatre and multimedia design and are committed to the development of cross-domain multimedia productions emphasizing context. Its aim is to create a new conceptual experience of listening and viewing.
    Past works include Emptied Memories, winner of the Interactive & New Media Design category at World Stage Design 2013, as well as works presented at the Digital Performing Art Festivals from 2011 to 2014. Its debut group production was the multimedia puppetry performance Teatime with Me, Myself and I, alongside Lights Flowing Out of Frame – Chou Tung-Yen Solo Exhibition.
  • Director/Concept Chou Tung-Yen

    Chou Tung-Yen holds an MA in Scenography with distinction from Central Saint Martins College of Art and Design in London and a BFA in Theatre Directing from TNUA. He is the founder and director of Very Mainstream Studio and is now a lecturer in the School of Theatre Arts, TNUA. In addition to working on film and theatre pieces that are performed and screened internationally, he also dedicates himself to the realm of technology and the performing arts.

BIRDIE Agrupación Señor Serrano


Agrupación Señor Serrano

1–2 December 2016
9:00 pm
The performance will take place at TeatroLaCucina,
former Paolo Pini Psychiatric Hospital
(via Ippocrate 45 – MM3 FN Affori; note: exit at via Ciccotti)

Flocks of migratory birds forming indecipherable patterns in the sky. Movement. Life. Nothing in the cosmos stands still.
What sense is there in building walls and fences to stop flocks of birds?

Birdie sets two mirages against each other.
On one side: welfare, respect for human rights, prosperity, ease of movement for information and capital.
On the other: wars, exploitation, persecution, obstacles for migrants.

A multimedia show, visionary and rational at the same time, that confronts the evolution and survival of humankind with critical spirit and wit, in a skilful composition of live performance, live video, scale models, auteur films and newspaper clippings.

From the leading group of the Catalan scene, winner of the Silver Lion at the 2015 Venice Biennale.



Creation Àlex Serrano, Pau Palacios, Ferran Dordal; performance Àlex Serrano, Pau Palacios, Alberto Barberá; project manager Barbara Bloin; production GREC Festival de Barcelona, Fabrique de Théâtre – Service des Arts de la Scène de la Province de Hainaut, Festival TNT – Terrassa Noves Tendències, Monty Kultuurfaktorij, Konfrontacje Teatralne Festival; miniature animals sponsored by Safari Ltd


Theatre and Technology

How do we create a theatre for the digital age? Is technology fundamentally changing the ways in which we engage with and make performance?

Matt Adams (Co-Founder and Artist, Blast Theory), David Sabel (Head of Broadcast and Digital at the National Theatre) and Tassos Stevens (Co-Director, Coney) discuss how digital technology is revolutionising performance.

Chaired by Headlong’s Artistic Associate, Sarah Grochala.