Category archive: interactive performance

New documentary on Marcel.lì Antunez Roca on Spanish national TV RTVE, now online

It was Marcel.lì Antunez Roca himself who wrote to us to flag this television special about him and his artistic history, a short documentary that can also be watched on the broadcaster’s online platform (called A la carta in Spain).

It is in Spanish but, Marcel.lì adds, “you can follow it” (also because the text of the interview is transcribed alongside the video).

There is also extra content here

DISCRETE FIGURES Realtime AR + AI Dance Performance by Daito Manabe/Rhizomatiks×ELEVENPLAY

Manabe Daito (photo: Shizuo Takahashi)

Tokyo-based artist, interaction designer, programmer, and DJ.
Launched Rhizomatiks in 2006. Since 2015, has served alongside Motoi Ishibashi as co-director of Rhizomatiks Research, the firm’s division dedicated to exploring new possibilities in the realms of technical and artistic expression with a focus on R&D-intensive projects. Specially-appointed professor at Keio University SFC.
Manabe’s work in design, art, and entertainment takes a new approach to everyday materials and phenomena. His end goal, however, is not simply rich, high-definition realism achieved by recognizing and recombining these familiar elemental building blocks. Rather, his practice is informed by careful observation to discover and elucidate the essential potentialities inherent to the human body, data, programming, computers, and other phenomena, thus probing the interrelationships and boundaries delineating the analog and digital, real and virtual.
A prolific collaborator, he has worked closely with a diverse roster of artists, including Ryuichi Sakamoto, Björk, OK GO, Nosaj Thing, Squarepusher, Andrea Battistoni, Mansai Nomura, Perfume, and sakanaction. Further engagements include groundbreaking partnerships with the Jodrell Bank Center for Astrophysics in Manchester and the European Organization for Nuclear Research (CERN), the world’s largest particle physics laboratory.
He is the recipient of numerous awards for his multidisciplinary contributions to advertising, design, and art. Notable recognitions include the Ars Electronica Distinction Award, Cannes Lions International Festival of Creativity Titanium Grand Prix, D&AD Black Pencil, and the Japan Media Arts Festival Grand Prize.

Manabe is an innovator in data analysis and data visualization. Notable artist collaborations range from the “Sensing Streams” installation created with Ryuichi Sakamoto, to performances of the ancient Japanese dance “Sanbaso” with prominent actor Mansai Nomura, to Verdi’s opera “Othello” as conducted by Andrea Battistoni. As a recent example, Manabe was selected in 2017 for a flagship commission and residency program at the Jodrell Bank Center for Astrophysics, a national astronomy and astrophysics research center housed at the University of Manchester. His close partnership with researchers and scientists concretized in “Celestial Frequencies,” a groundbreaking data-driven audiovisual work projected onto the observatory itself.

In 2015, Manabe developed the imaging system for Björk’s music video “Mouth Mantra”, and oversaw the production of AR/VR live imaging for her “Quicksand” performance.
In performance with Nosaj Thing, Manabe has appeared at international music festivals including the Barcelona Sónar Festival 2017 and Coachella 2016. Having also directed a number of music videos for Nosaj Thing, his work on “Cold Stares ft. Chance the Rapper + The O’My’s” was recognized with an Award of Distinction in the Prix Ars Electronica’s Computer Animation/Film/VFX division. Further directorial work includes the music videos of artists such as Squarepusher, FaltyDL, and Timo Maas.
As a DJ with over two decades of experience, Manabe has opened for international artists such as Flying Lotus and Squarepusher during their Japan tours. His wide repertoire spans from hip-hop and IDM to juke, future bass, and trap. Manabe has also been invited to perform at numerous music festivals around the globe.

Manabe’s collaborations on dance performances with MIKIKO and ELEVENPLAY have showcased a wide array of technology including drones, robotics, machine learning, and even volumetric projection to create 3D images in the air from a massive volume of rays. Additional data-driven performances have explored innovative applications of dance data and machine learning. These collaborations have been performed at major festivals including Ars Electronica, Sónar (Barcelona), Scopitone (Nantes), and MUTEK (Mexico City) to widespread media acclaim (WIRED, Discovery Channel, etc.).

Manabe is actively involved in the development and implementation of media artist summits (notably, the Flying Tokyo lecture series) as well as other educational programs (media art workshops for high school students, etc.) designed to cultivate the next generation of creators.


“discrete figures” explores the interrelationships
between the performing arts and mathematics,
giving rise to mathematical entities
that engage with the bodies of human dancers onstage.

Alan Turing applied mathematics to disembody the brain from its corporeal host. He sought to expand his body, transplanting his being into an external vessel. In a sense, he sought to replicate himself in mechanical form. Turing saw his computers as none other than bodies (albeit mechanical), irrevocably connected to his own flesh and blood. Although onlookers would see a sharp delineation between man and machine, in his eyes, this progeny did not constitute a distant Other. Rather, he was the father of a “living machine,” a veritable extension of his own body, and a mirror onto the act of performing living, breathing mathematics.

―Daito Manabe


Making-of material from the official site

History Scene

Music 01

Using OpenPose, we analyzed publicly available stage footage and movie scenes to collect pose data, and developed a nearest-neighbor search system that compares that data with the pose data obtained by analyzing the dancer’s movements. Drawing on the actual choreography footage for this piece, we retrieved, frame by frame, the video material containing the closest pose.
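The per-frame lookup can be sketched as a nearest-neighbor search over normalized keypoints (a minimal sketch, assuming (J, 2) keypoint arrays such as those produced by OpenPose; the function names and the centering/scale normalization are illustrative assumptions, not the production system):

```python
import numpy as np

def normalize_pose(pose):
    """Center a (J, 2) keypoint array and scale it to unit size,
    so matching is invariant to position and size on screen."""
    centered = pose - pose.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def build_database(poses):
    """Stack normalized poses into an (N, J*2) matrix for fast search."""
    return np.stack([normalize_pose(p).ravel() for p in poses])

def nearest_pose(db, query):
    """Return the index of the database pose closest to the query."""
    q = normalize_pose(query).ravel()
    dists = np.linalg.norm(db - q, axis=1)
    return int(np.argmin(dists))
```

Normalizing each pose before comparison makes the match invariant to where, and at what scale, the dancer appears in frame, which matters when the database mixes stage and film footage.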

Audience Scene

Music 09

We set up a booth in the venue lobby and filmed members of the audience. By analyzing the participants’ clothing characteristics and movements on multiple remote servers right up until the performance, we were able to feature the audience as dancers, combining that analysis with motion data from the ELEVENPLAY dancers.

Dimensionality Reduction Scene

Music 09

Using dimensionality reduction techniques, we projected the high-dimensional motion data down to two and three dimensions and visualized it.
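As an illustration of the kind of reduction involved, here is a plain PCA projection via SVD (a hedged sketch: the piece may well have used other techniques such as t-SNE or an autoencoder, which the text does not specify):

```python
import numpy as np

def pca_reduce(motion, k):
    """Project frames of high-dimensional motion data (frames x dims)
    onto their top-k principal components."""
    centered = motion - motion.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Each row of the result is one frame of motion mapped to a 2-D (or 3-D, with k=3) point, so a whole dance becomes a trajectory that can be drawn directly on screen.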

AI Dancer scene

Music 10

We were interested in dance itself: in the different types of dancers and styles, and in how musical beats connect to improvisational dance. To explore further, we worked with Parag Mital to create a network called dance2dance:

This network is based on Google’s seq2seq architecture. It is similar to char-rnn in that it is a neural network architecture that can be used for sequential modeling.

Using the motion capture system, approximately 2.5 hours of dance data were captured across about 40 sessions at 60 fps. In each session the dancers improvised on 11 themes: joyful, angry, sad, fun, robot, sexy, junkie, chilling, bouncy, wavy, and swingy. To maintain a constant flow, the dancers were given a 120 bpm beat.
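The dance2dance network itself is not reproduced here; as a deliberately simplified stand-in for sequence modeling of motion, one can fit a linear next-frame predictor to captured pose vectors and roll it forward autoregressively (illustrative only: the real system is a seq2seq recurrent network, and these function names are assumptions):

```python
import numpy as np

def fit_next_frame(frames):
    """Least-squares map W from frame t to frame t+1.
    frames: (T, D) array of pose vectors captured at a fixed rate."""
    x, y = frames[:-1], frames[1:]
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

def generate(w, seed, steps):
    """Roll the model forward autoregressively from a seed frame."""
    out = [seed]
    for _ in range(steps):
        out.append(out[-1] @ w)
    return np.stack(out)
```

The autoregressive loop (feeding the last generated frame back in as input) is the same generation pattern a trained seq2seq or char-rnn style model uses at sampling time; only the model inside the loop differs.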

Background movie

We generated the background movie with StyleGAN, which was introduced in the paper “A Style-Based Generator Architecture for Generative Adversarial Networks” by NVIDIA Research.

StyleGAN has since been released as open source; the code is available on GitHub.

We trained StyleGAN on an NVIDIA DGX Station using the data we had captured from the dance performance.



For hardware, we used five palm-sized microdrones. Thanks to their small size they are safer and more mobile than older models, and on stage they read as balls of light floating in the air. The drones’ positions are measured externally by a motion capture system, and they are controlled in real time over a 2.4 GHz wireless link. The drone movements are produced from motion capture data that has already been analyzed and generated.
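The external-tracking loop described here can be illustrated with a minimal proportional-derivative position controller running at the 60 Hz motion-capture rate (an illustrative sketch: the gains, the acceleration command, and the interface are assumptions, not Rhizomatiks’ actual control stack):

```python
import numpy as np

def pd_step(pos, vel, target, kp=4.0, kd=2.5, dt=1 / 60):
    """One 60 Hz control tick: mocap supplies pos/vel, and we return
    the next (pos, vel) after applying a PD acceleration command."""
    accel = kp * (target - pos) - kd * vel
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def fly_to(pos, target, steps=600):
    """Simulate the loop until the drone settles near the target."""
    vel = np.zeros_like(pos)
    for _ in range(steps):
        pos, vel = pd_step(pos, vel, target)
    return pos
```

With these gains the simulated drone settles on the target well within ten seconds; a real controller would also have to absorb wireless latency, battery sag, and the dynamics of the actual airframe.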


The frame plays an important role in the projection mapping onto the half screen and in the AR compositing. It contains seven infrared LEDs and built-in batteries, and the whole structure is tracked as a rigid body by the motion capture system. Retroreflective markers are visible to the naked eye, which usually makes them unsuitable for stage props; instead, we designed and developed a system using infrared LEDs and diffusive reflectors that allows stable tracking while remaining invisible to the naked eye.


Lino Strangis VR live at KLANG in Rome: losing oneself in Virtual Reality (and giving life to other worlds)

Lino Strangis, one of the best representatives of computer art in Italy, has for years rigorously pursued his aesthetic idea of “intermediality”, which today encompasses 3D sculpture, Virtual Reality and immersive installations, placing them within performative actions that “extend” and “expand” the boundaries of reality to the edge of science fiction. A case in point is the extraordinary and highly original sound-action performance Altre musiche per Altri mondi at KLANG in Rome on 4 April, tied to the forthcoming experimental electronic music album of the same name.

And in this parallel reality, formed by a totalizing digital scenographic constellation of sculptures shaped like 3D coral aggregates of notes that have escaped the score to mineralize into other lives (totemic relics of a human memory), the performer (Veronica D’Auria) immerses and loses herself, wearing an Oculus Rift “mask” that makes her at once blind and a witness to surreal visions, which she gives to the audience.

She is inside forms in transformation, a prisoner of dreams/nightmares shaped live in someone else’s digital firmament: her body becomes a sign within other digital signs, chasing liquid sounds. Nothing happens before the epiphany represented by the act of putting on the Oculus mask, in whose cavities lie fantastic worlds created live by the “shaper” Lino Strangis. Virtual Reality as a theatrical platform and, at the same time, a mask evoking worlds in the making. It is the materialization of avant-garde utopias in which, according to the author, sound and image sublimate each other, defining complex dramaturgical territories, improvised and at every moment renewed and reborn out of the void and out of light.

Strangis describes the central function of music (and sound) in the performance, the true gravitational center of all the arts involved: “These are musical scores for free improvisation, through which I have tried to continue that line of research in which musical writing becomes something else, other arts. I am very interested in pushing sound-musical performance as far as founding a scene that recombines sound dramaturgies with those of gestures, lights and visions, in a minimal, present-day version of the total artwork.”

The performance at KLANG anticipates a series of events, meetings and book presentations in which STRANGIS’s work will be at the center of many activities and multisensory experiences for the public, coming soon to MACRO in ROME.

The author’s newly redesigned website is here

EX FABRICA CANAPA – artistic direction TPO (Prato)

Production: Teatro Metastasio di Prato
Artistic direction: Compagnia TPO – Davide Venturini
With: Valentina Consoli, Běla Dobiášová
Engineering and visual effects: Rossano Monti, Elsa Mersi
Original music: Spartaco Cortesi
Set design: Laura VdB Facchini

“Canapa” is a project dedicated to experimenting with new digital and interactive technologies in theatre, created in collaboration with the Prato-based company TPO. This stage work is the third part of a triptych called the “botanical trilogy for Ex Fabrica”, dedicated to the mythology of three plants: eucalyptus, cotton and hemp. The holistic meaning of the cotton plant points to the relationship between tenacity and lightness, breath and vital rhythm, division and union. The cotton boll, closed in its woody shell, opens and releases a light, airy tuft in contact with the sky and the light. At the same time, the symbolism of the thread is that of a medium connecting all states of existence to one another. From this earth/air relationship the performance takes its cue: a hypnotic vision between real and virtual in which the (interactive) images are the instrument that fuses the plants with the bodies of the two dancers. A play of senses and of sense, between botany and theatre.

The other two stage works already produced are Landskin (2016) and Cotone (2017).

“Canapa” thus concludes an artistic journey dedicated to the personal and social relationship between human beings and plants, specifically eucalyptus, cotton and cannabis. In the holistic and mythological vision, these three plants represent, in different ways, deep relationships between human beings, the earth, travel and dream. “Canapa” contains the archetypes of a farming tradition abandoned in the 1970s-90s because of prohibitionism, but now being recovered thanks to a new perspective that sees in this plant a universal resource. In the artistic and theatrical context of this project, Cannabis sativa constitutes a universe of its own, a sensory environment, a living botanical landscape that communicates with the body and, in this dialogue, creates an epidermal and emotional contact.


Interview with Rosa Sánchez and Alain Baumann (Koniclab) on theatre and technologies: the costs of media theatre.

Interview with KONICLAB

Kònic thtr is a Barcelona-based artistic platform focusing on contemporary creation at the border between art and new technologies. Its main center of activity is the application of interactive technology to artistic projects, for which Kònic thtr is internationally renowned. Their work has been shown in Spain and across Europe, America, Asia and Africa. Through the research processes undertaken since the beginning of the 1990s, the company has developed a unique and personal language in this field of contemporary creation.

Rosa Sánchez: multidisciplinary and multimedia artist, performer and choreographer; artistic director and co-founder of Konic thtr. Alain Baumann: musician and multimedia artist; he is in charge of the interactive systems used by Konic thtr.

1. When you start a multimedia theatre project, do you secure the necessary budget in advance, through residencies or external funding from theatres, or do you usually self-produce? And, as an example, what are the real costs of your technological productions?

KONIC: Depending on the project we have in mind and the availability of funding, we work with different timings and budgets. Over the (many) years we have dedicated to theatre and technology, we have acquired skills and knowledge that allow us to produce works with a very small team of four or five collaborators, and to self-produce when the economy is not buoyant. This means we can do small-scale projects, with a limited amount of pre-booking, for around €20-25K. On the other hand, we can do large-scale projects with a larger team of people participating in the company. When we work with technology that is not readily available, such as our recent works with high-speed networks, part of the budget is covered by the technological partner, who provides the high-bandwidth connection and the team to operate it. This is an interesting way for us to work: it means we collaborate and exchange with engineers, and at the same time they support the work so that the budget for the piece stays reasonable.

2. In the case of a “call” for producing multimedia theatre, to stimulate the work of new authors or the creation of a new work, what do you think is an adequate amount for a theatre to allocate?

This is a difficult question. It depends on the type of residency, and whether it includes a technical team available for research on technologies, video, light, etc. The main problem is not so much the amount as the precarity of the work in general. It is becoming more and more difficult (especially in Spain, but we feel it is a general problem) to find co-producers for good productions. The problem is that precarity also affects the residencies: very often the spaces that offer residencies have very limited funding, and rely more on the space and knowledge they can offer the artist than on the ability to remunerate the research taking place in their center.

3. Can the residency system be useful for the creation of multimedia theatre? What does a residency imply for this kind of theatre, and what should a theatre/festival offer for a residency hosting an entire crew or company?

We think residencies should be oriented towards research: the part of the development of a creative work, the pre-production, that is not covered by productions. Supporting pre-production processes is what we would expect from residency centers, especially when the work involves technology. This should also help generate a cultural fabric, giving artists the tools to develop new ideas and practices and connecting them to other artists.

4. Given that funding is always so low, as we observe in this period, isn’t the artist perhaps forced to “break up” the work into too many segments, which risks costing the work its continuity and novelty? How can we steer theatrical cultural policy towards greater investment, making theatres understand the complexity (and costs) of this kind of theatre?

Even if this is not the best condition, creating a work in a modular way is possible and can be interesting, as long as the work is coherent. As we work with diverse media, the various facets of the piece evolve over time as they get developed, even if in segments. The problem arises when one is obliged to work this way because of poor financing, so that it is not a choice but the only alternative.

In these times of austerity, an additional problem for unconventional stage proposals (such as those relying on technology) is that they are more affected by funding cuts than conventional theatre. Producers and exhibitors are wary of taking a “risk” (yes, they see this kind of proposal as a risk…), so even less funding goes to these proposals.

We have to convince the theatres that they need to show technological proposals, and to make the ideas these proposals convey accessible to the audience. From our point of view, it is important to focus on the content, which is what theatre is about. The collectives working in this field are bringing content to the stage through innovation. Contents related to our technological era using languages that make culture a living and evolving entity.

The cost is inherently related to the design of the work. It is difficult to evaluate, but the idea that a piece relying on technology costs more than conventional theatre is possibly no longer true. New professionals are appearing who specialize in technology for the stage, and many software and hardware solutions that would have been very expensive only a few years ago, because they had to be developed specifically, are now readily available.

5. Is there an ideal formula for creating this kind of show? What situations do you know of that would correspond to “good practice” (in residency or production) linked to technological theatre?

Our best experiences, both in production and in residency, were in fact initiatives from technological partners rather than from theatres. Somehow, people working in technological research are more interested in developing cultural content that lets them test their technology in a context that also gives them visibility than theatres are in introducing technology into their programming. There are examples of using high-speed internet to transmit opera in real time, to give masterclasses with the teacher in one city and the students in another, or to film opera with 360° cameras, etc.

So these companies and research centers are there and have some interest in showing culture what their technology can do. They have the skills and the equipment available, and if you have the opportunity to collaborate with them, they can be very open to new ideas.

A research agreement to collaborate directly with technology specialists and/or research centers is, in our experience, a way to explore new formats and to get interesting feedback on the way we use technology. Within such agreements we have had the opportunity to collaborate, for instance, with the artificial intelligence department of the Higher Council for Scientific Research in Spain, developing several projects in which we used artificial intelligence for pattern recognition with the sensors worn by dancers. More recently, we have been working with specialists in high-bandwidth internet, whose work normally consists in connecting scientific research centers all over the world with extremely fast connections. Working with them to develop artistic projects is very challenging, because neither they nor we really know what the outcome will be. We have successfully shown the results of such collaborations in theatres where part of the audience was interested in the technology and the other part in the artistic content: the project Near in the Distance, shown in Vienna (2015) and in Linz (2017). There needs to be mutual trust between the teams, and that is a good start for making good projects.

6. In your opinion, is there any new technology that has not yet been explored and that would be useful to a “new format” of technological theater? For example, Robotics or AI?

Artists are very curious people, and probably all technologies have been explored! Many artists feel the need to explore the possibilities infusing our everyday life, but in our opinion it will take some time until these elements are fully understood and the technology is made available to artists, so that new ways and new formats really start to happen.

We cannot compete with technology for the attention of an audience that is nowadays immersed in social networks. These social networks have at their disposal the very latest developments in big data analysis and artificial intelligence software, which they constantly adapt and develop to offer the best experience to their audience. In some ways they compete with theatre, and from our perspective we need to preserve the important part of theatre, which is the contents. The contents that can be brought to the stage by using contemporary technologies are where we need to focus as artists.

7. Does the fact that there are few productions or groups proposing innovative forms of theatrical narration limit the theoretical analysis of the phenomenon of so-called “intermediality”?

From our point of view, innovation is created by specific projects that differ from others, with proposals that may be a little ahead of their time and bring something qualitatively different, linked to the concept of innovation. Such projects are never many, but they are models. These models are the ones that can be analyzed and studied to differentiate them from others. They may or may not create a trend, but they are unique and can be studied.

It is true that, with the lack of funding, it is increasingly difficult to create such works, since they require longer research times; given also the lack of opportunities to show transmedial works in theatres, many artists who dedicated part of their practice to new media have opted to spend less time on this type of show. There is less production, but at the same time younger artists introduce technology into their work in a more informal manner, using readily available technologies on stage, and in our opinion this is positive.

Trevor Paglen’s Sight machine with Kronos Quartet and Obscura Digital

Trevor Paglen is an artist whose work spans image-making, sculpture, investigative journalism, writing, engineering, and numerous other disciplines. Among his chief concerns are learning how to see the historical moment we live in and developing the means to imagine alternative futures. Paglen has had one-person exhibitions at Vienna Secession, Eli & Edythe Broad Art Museum, Van Abbe Museum, Frankfurter Kunstverein, and Protocinema Istanbul, and has participated in group exhibitions at the Metropolitan Museum of Art, the San Francisco Museum of Modern Art, the Tate Modern, and numerous other venues. He has launched an artwork into distant orbit around Earth in collaboration with Creative Time and MIT, contributed research and cinematography to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture for the exclusion zone in Fukushima, Japan.

The Cantor Center for Visual Arts at Stanford University came to Obscura Digital and proposed a collaboration with artist Trevor Paglen, whose work addresses topics like government secrecy and surveillance, exposing the vast apparatus of machines, systems and algorithms that monitor virtually every aspect of our lives. Paglen’s “Sight Machine” project would demonstrate to a live audience how machines “see” the world, in this case a performance by the renowned Kronos Quartet.

Obscura Digital worked with Paglen’s team to develop the computer and video systems to take a live video feed of the string quartet’s performance, run it through actual off-the-shelf artificial intelligence surveillance algorithms (over a dozen of them in total), and project what the AIs see and how they interpret it onto a screen above the musicians.

These AIs — whether for facial recognition, object identification or threat detection — are designed to communicate with their machine counterparts, not to provide human-readable output. Making that possible in realtime required Obscura’s systems engineers to maximize throughput in a Herculean research and development effort.




KONIC THTR in Belgrade with #14 Skyline: when (video)dance explores life (and philosophy).

Barcelona’s Konic, that is Alain Baumann and Rosa Sanchez, among the protagonists of the Catalan digital scene, have over their long and distinguished international career explored every recess of technology and deposited it with loving care on the stage, giving it a form at once magical and surprising, friendly and profound. From interactive art to video mapping to telematic dance (of which they are the undisputed founders), they have experimented with virtually every kind of technology, disseminating their artistic proposals around the world. Not by chance, they are among the artists most sought after for huge EU-funded cross-border projects, such as the famous IAM project, which involved them with the Municipality of Alghero over the three-year period 2012-2015 in a complex artistic proposal of augmented reality and video mapping for cultural heritage in Lebanon, Egypt, Palestine and Tunisia. Recently their telematic project Espai No tàctil had a physical staging in Barcelona, with a creative live telematic link to Santiago de Chile and Strasbourg. For Konic, the network is understood not only as communication but as a creative extension of the show.

Each of their theatrical or digital art works is a reflection on the human, and on the transformation of the individual through technologies: a transformation that can broaden horizons immeasurably, offering free and unusual perspectives, but only if we do not merely let ourselves be “played” by technologies, that is, submit to the ongoing manipulations of science and technology. It is worth recalling Konic’s theorization, at the height of the video mapping euphoria, of the meaning of a “mediaturgy” of mapping.

Back from Bangladesh, where they had been invited to a meeting on Cultural Transformation in Digital Ecosystem in Dhaka, we met them in July at the IFTR Theatre World Congress in Belgrade where, after a lecture on their work in the crowded general panels, they presented a special evening with their most recent interactive dance and network performance, #14Skyline.

The show maintains an unusual, almost ritual relationship with the audience: not by chance, it is music, poetry and song that in the prologue draw in the spectator’s ear and gaze in an intimate, almost whispered way, gently leading them into theatrical material that unfolds through abstract trajectories, visual suggestions, fragments of words and immersive sonic atmospheres. All the technology is live: managed, manipulated, recreated and projected in real time.

There are three stations: a set that rises like a truncated, spiraling tower of perfectly video-mapped metal slats, inside which Rosa Sanchez moves, interacting with sounds and images; a second with a small table where the performer “operates” in front of a mini-projector, creating video masks for her face; and a third that is the action space of Alain Baumann, present not only as a “technician” but also as a performer, photographing, filming and manipulating images with a mobile phone and projecting them onto the backdrops, continuously changing the “skyline” of the stage.

Each moment of the performance addresses a specific sensory register, but it is the body that “geolocates” the coordinates for its creative (and interactive) immersion in the spatial whole. Konic’s show seems to embody perfectly Maurice Merleau-Ponty’s notion of a “theory of the body as a theory of perception”: our existence as a primarily and unconditionally spatial experience that lives, connected to its corporeality, the image-space surrounding us. The forms experienced in this fragment of inspired digital dance definitively unite human beings with their environment. A genuine “phenomenological approach” that allows reality to be continuously re-traced and adapted to the images of our experience, images that add up to, and even merge with, the images of our consciousness.

The world we perceive in this show is made of references to art (from the tangles of Tatlin’s cubo-futurist sculptures to abstract futurist fragments, up to glitch art), but the real theme is precisely the link between consciousness and the reality that surrounds us. The show seems to suggest what the city and its “skyline” could become if we video-mapped it with our inner eye, overlaying it with layers of colorful, inhabited, lived-in squares, scanning not only the physical surface of the buildings but also our memory, our psychological interiority.

The guiding thread, which is also the show’s central theme, is Rosa Sanchez’s magnificent prologue, an invitation to “hear” the inaudible and give shape to thoughts, a moment that is also a homage to the best avant-garde art. The dance captures the invisible, the sensors pick up yearnings for transformation, generating images and multiple identities, and our face voluntarily wears masks of what we are or would like to be, turning us, in the instagrammism of our times, into filter-images, magnificent digital zombies. The theatre is at once a fragmented sensory space and a space of shared connection, a digital skyline containing us with all our desires, dreams and passions.

#14 Skyline is a show of great expressive force and conceptual density, a concentrate of all the technologies that today, as we well know, live in some app on our phones. Yet it slips in the fundamental message that, in a world lived constantly on screens, art (and life) reside instead in the liminal space of error, latency and imperfection: exactly what the machine cannot, and does not know how to, program.

Anna Maria Monteverdi

Upcoming dates where #14Skyline can be seen:

7 SEPTEMBER : 4th ‘Jornadas Escena Digital’. Barcelona. Spain

28 SEPTEMBER : XV Int. Festival VIDEOMOVIMIENTO / Cuerpo Multimedia. Bogota. Colombia

20 OCTOBER : Festival IDN+ / Mercat de les Flors. Extended version. Barcelona.


ESPAI NO TÀCTIL (interactive and telematic performance Site Specific-INFLUX) by KONIC

A concert-dance-performance with a variable, site-specific configuration. The project explores the complementarity and relationships between the tactile and the non-tactile: body space = tactile / psychological space = non-tactile. A danced encounter between characters who can never touch each other but can communicate from a distance. An interwoven flow of movements, music and images produced live, inviting the spectator to immerse themselves in a poetics of fragile bodies and ephemeral, suggestive images.

ESPAI NO TÀCTIL (Site Specific for INFLUX)

Rosa Sánchez: choreography / stage direction
Alain Baumann: technological direction and interactive development
Rosa Sánchez: performance
Alain Baumann: live music and visuals
Rosa Sánchez and Adolf Alcañiz: image
Rosa Sánchez: set design concept
Adolf Alcañiz and Amir Gazit: set construction
Anna Candela: coordination and communication
Koniclab: production
With the support of:
Fàbrica de Creació Fabra i Coats. Barcelona
ICUB. Institut de Cultura. Ajuntament de Barcelona
Departament de Cultura. Generalitat de Catalunya
Ministeri d’Educació, Cultura i Esports

Antic Teatre
Verdaguer i Callís, 12. 08003 Barcelona

Tickets: €6
buy tickets

1,000fps Projector Combined With High-speed Camera

Ishikawa Watanabe Laboratory at the Graduate School of Information Science and Technology, The University of Tokyo, and Tokyo Electron Device Ltd co-developed a high-speed projector capable of projecting images with a frame rate of 1,000fps.


The projector has an 8-bit color scale (256 levels) and a resolution of 1,024 x 768. When combined with a high-speed camera having a frame rate of 1,000fps, it is capable of projection mapping on a fast-moving object.

It recognizes an object in real time using high-speed vision technology, and the high-speed projector projects an image onto it. The end-to-end lag from capture to projection is about three milliseconds.
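A minimal sketch (not the lab's actual code; all names are hypothetical) of why that 3 ms lag matters: at 1,000fps each frame lasts 1 ms, so the system must extrapolate a tracked object's position roughly three frames ahead for the projected image to land on the moving target.

```python
FRAMES_OF_LAG = 3  # ~3 ms capture-to-projection latency at 1,000 fps

def predict_position(pos, prev_pos, lag_frames=FRAMES_OF_LAG):
    """Linear extrapolation from the last two tracked centroids,
    assuming roughly constant velocity between frames."""
    vx = pos[0] - prev_pos[0]  # per-frame velocity in pixels
    vy = pos[1] - prev_pos[1]
    return (pos[0] + vx * lag_frames, pos[1] + vy * lag_frames)

# Object moving 1 px per frame along x: aim 3 frames ahead of the
# last measurement so the image arrives where the object will be.
print(predict_position((100.0, 50.0), (99.0, 50.0)))  # (103.0, 50.0)
```

The shorter the frame period, the smaller the prediction error, which is why the high frame rate is as important as the low latency for keeping the projection registered on a fast-moving object.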

The projector achieves the high-speed operation of 1,000fps by adding original control techniques to Texas Instruments Inc's DLP (digital light processing) system. DLP is an optical engine based on a DMD (digital micromirror device), a MEMS (micro-electro-mechanical system) chip with many micromirrors formed on silicon.

It forms images by changing the orientation of each mirror, turning the reflected light on or off for each pixel. DMDs are already widely used in home and cinema projectors, but their frame rates are usually 30-120fps.

A high-speed projection mapping system combined with a high-speed projector (circled in red)

To achieve the 1,000fps frame rate, a new driving technology was developed, mainly by Tokyo Electron Device. There were several challenges: for example, when the orientations of the DMD's micromirrors were switched frequently, the mirrors did not return to the desired orientations. The company solved such problems to make commercialization possible.

For the high-speed operation, a new FPGA-based control circuit was developed, along with an original communication interface for high-speed transmission of image data.

Digital Playgroundz – Interactive Installation

Digital Playgroundz – Interactive Installation (UPCOMING SHOW: SIGGRAPH CONFERENCES, LOS ANGELES, 30 July – 3 August, 2017)

We have developed a digital playground embedded in the real environment. It combines two very potent technologies: projection mapping and large-scale motion detection. Essentially, the system turns any flat surface into a multitouch area. Its main benefits are that there is no limit on the scale of the game stage (it could be 1 km long if you want) and no limit on how many people can play at once. And not only children will enjoy it. 😉

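A minimal sketch of the core calibration step such a system needs (hypothetical values, not the developers' code): touches are detected in camera coordinates and must be mapped into projector coordinates, typically with a 3x3 homography estimated once for each flat surface.

```python
def apply_homography(H, x, y):
    """Map a camera-space point (x, y) into projector space using a
    3x3 homography matrix H (row-major nested lists)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Hypothetical calibration: a pure scale-and-offset homography.
H = [[2.0, 0.0, 10.0],
     [0.0, 2.0, 20.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 5.0, 5.0))  # (20.0, 30.0)
```

Because a homography handles any flat surface seen from any angle, the same mapping works whether the "game stage" is a wall, a floor, or a kilometre-long facade.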

Here is a video from our test installation in Prague. We developed a kind of computer game projected onto real scenery. It is mainly aimed at kids, but adults got involved as players too.


The Demonz is a large-area virtual reality game projected into reality. The goal is similar to that of the classic Czech childhood ball game "dodgeball", but instead of hitting live opponents you try to hit virtual targets in the shape of animated figures and other moving objects. The combination of artistic and programming work produced a unique game that pushes you to move and cooperate with others.


DandyPunk Live immersive theater at Sundance Festival 2017

Trailer for a live, immersive installation at Sundance New Frontier 2017 by Dandypunk, Darin Basile and Jo Cattell

There was an amazing projection-mapped immersive theater piece at Sundance this year by Heartcorps called "Riders of the Storyboard."

Trained street performers interacted with virtual projection-mapped 2D objects and, through sleight of hand, broke these flat objects into the third dimension as glowing 3D props. Fifteen people were packed into a small room with about half a dozen performers for a 13-minute show about 2D characters who interact with performers playing Alchemy of Light gods in the third dimension. It was an awe-inspiring performance, and the projection mapping technology provided a shared augmented-reality experience. Heartcorps is proving out projection mapping techniques that should also work really well in the future of live performance and immersive theater designed for augmented-reality glasses.

Here is the interview with dandypunk, who talks about his process, ritual inspiration, and mixture of immersive theater and cutting-edge projection mapping, from the e-mag The Voices of Virtual Reality:

Instagram @dandypunk

Music – Рaдость Моя – Пей солнце

(dandypunk edit)

Master in Advanced Interaction directed by Klaus Obermaier and Luis Fraguada IAAC, Institute for Advanced Architecture of Catalonia Barcelona / Spain

The Master in Advanced Interaction (MAI) is a unique opportunity for Designers, Visual and Performing Artists, Choreographers, Dancers, Architects, Interaction Designers, VJs and DJs, Sound Artists, Scenographers, and profiles from related backgrounds to explore creative uses of technology for experimental and practical purposes.

The course is aimed at developing and exhibiting projects which define meaningful interaction through novel technological solutions, performances and installations. The ambition of these projects goes well beyond digital media; they are communicated through software and hardware development, solid theoretical foundations, and prototypes completed in IaaC's digital fabrication laboratory. The theoretical basis of the course is to question how current technology can augment the agency and impact of all kinds of interactions around us.

Our learning-by-doing research integrates methods used in design, programming and the social sciences to produce projects, prototypes and products that define the outer limits of what is imaginatively possible with technology today. Wearables, artificial intelligence, human-machine interaction and augmented environments are some of the key topics on the agenda of the Master in Advanced Interaction. Students who attend the Master in Advanced Interaction join an international group, including faculty members, researchers and lecturers, investigating critical issues facing modern society, with the aim of developing the skills necessary to implement practical solutions in diverse professional environments.

The Master in Advanced Interaction (MAI) is a 9-month program accredited by the Universitat Politècnica de Catalunya (UPC) with 75 ECTS. The MAI program is directed by Klaus Obermaier and Luis Fraguada.



The Institute for Advanced Architecture of Catalonia has evolved from an institution for questioning architecture and territory, to a place where new architectures are conceived. There is a space between the built environment, the territories we inhabit, and the technology we confront, that nowadays needs to be addressed. Therefore, after the successful pilot-program in 2008, the Institute officially launches the Master in Advanced Interaction, as a natural evolution of the domains it is looking to further explore.

Today we communicate and interact with smart devices, physical and virtual environments, the Internet of Things. User-generated content mixes with professional contributions. In our Age of Participation, mostly driven by social media and gaming but also by interactive arts and performances, passive recipients turn into active participants, becoming creative players. Interactive environments go beyond the passive reception by creating an immersive, communicative and social experience.

All fields of study and practice require the skills to make meaningful use of available and forthcoming technologies. This is mainly due to the increased adoption of technology in our daily lives. Data and Information now encompass a sort of Metadata Layer which crosses all aspects of our existence.

The Master in Advanced Interaction questions the limits of these contemporary technological phenomena and prepares candidates to be key actors capable of making connections between disciplines where none were possible, or even considered, before.

XTH Sense: the first biocreative instrument

After 5 years of development, creative workshops worldwide and published research, the world’s first biocreative instrument, the XTH Sense, is ready!

Help us raise the funds needed to manufacture the first batch. Without your help we cannot make it on our own. By pre-ordering the XTH Sense now you contribute to bringing this innovative open hardware and open source technology to the world. Make a pledge for one of our rewards on the right side of the screen. THANK YOU!!

Check the XTH Sense 3D drawing application in action, Kenji Williams uses it for his new show. This is just one of the things you can do with your XTH Sense!

Forget your average tracker or MIDI controller. Enter a world of new creative possibilities.

The applications of the XTH Sense are endless because it does one simple, yet visionary thing: transforms your unique expressive biosignature into a creative digital interface.

Play the video to see what you can do with the XTH Sense.

Thanks to its advanced sensing algorithms, the XTH Sense does much more than capturing data from your body. It learns the nuances of your body by extracting precise biophysical features, like:

  • the acoustic properties of the body
  • data patterns created by motion
  • changes in body temperature.

The combination of these features is what we call a biosignature.

Through the XTH Software Suite you can link the sensor data to the stroke of a live drawing in a virtual environment, a 3D avatar’s walk in a fantasy world or the pitch of a chord progression in a musical composition, like Susanne Eder in the video below.

The XTH Sense consists of:

  • The XTH Sense wireless wearable, in two colours: pearl white or obsidian black
  • The XTH Software Suite, professional for experts and easy to use for beginners
  • The XTH Platform, our community hub

All of this works smoothly on Mac OSX, Windows and Linux.

When you wear the XTH Sense an array of biosensors captures 7 types of signals from your body:

1) The sound of your muscles contracting when you move
2) The sound of the heart beating
3) The sound of the blood flowing
4) Temperature
5) Motion
6) Orientation
7) Rotation

The XTH Sense transmits the sound and data wirelessly over radio frequency to communicate with the XTH Software Suite on your laptop. To use the data creatively, all you need to do is launch our plug & play application, or load our plugins in your favorite music, video or creative coding software.

Our intelligent algorithms crunch the data from your body and extract the distinctive characteristics of your movement and inner bodily processes, like movement dynamics, muscular energy and temperature changes. These are the expressive features of your body.
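As a rough illustration (a sketch, not XTH's actual algorithm): one simple proxy for "muscular energy" is the windowed RMS of the bioacoustic signal, which rises when a muscle contracts.

```python
import math

def rms_envelope(samples, window=200):
    """Windowed RMS of a bioacoustic signal. At a 2,000 Hz sampling
    rate, a 200-sample window covers 100 ms of muscle activity."""
    env = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        env.append(math.sqrt(sum(s * s for s in chunk) / window))
    return env

# Synthetic signal: 100 ms of silence, then 100 ms of "contraction".
signal = [0.0] * 200 + [0.5, -0.5] * 100
print(rms_envelope(signal))  # [0.0, 0.5]
```

A slowly varying envelope like this is exactly the kind of continuous control value that can be routed to a musical parameter or a brush stroke.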

Using the graphical interface of the XTH Software Suite you can easily link your body’s expressive features and raw data to digital content. You can control musical parameters, create digital drawings, interact with game mechanics and play in virtual reality (VR) with your body.

On the XTH Platform, our community web hub, you can share your projects with other XTH Sense users across the world, and find new software applications for the XTH Sense. If you are an expert, you can also offer your skills for hire on the Platform, so others in need can benefit from your expertise, and you can get paid for your work.

It really is that simple. No need to learn how to move or code. Unless you want to.

With the XTH Sense plug & play application you can easily link sensor data to musical parameters, game mechanics or live drawings using a handy graphical interface.

Would you rather use the XTH Sense with another software, say, Ableton Live, Open Frameworks or Unity? Simply open your software, load the XTH Sense plugins and connect the sensor data to a synthesizer, a brush stroke or a virtual character.

Illay Chester tells us about live sampling the sound of her cello with the XTH Sense. She uses her muscle dynamics and motion patterns to change the sound of the cello and activate digital sounds in real time.

Pedro Lopes uses the XTH Sense with the Oculus Rift to control an avatar’s movement in VR. He uses the XTH Sense to track muscle force and arm motion for VR applications.

Gordey Chernyi creates live visuals with the XTH Sense. He uses muscle sounds to control the brush size and density, and motion data to transform the direction and position of 3D objects in space.

The secret of the XTH Sense is a highly sensitive microphone sensor. It amplifies bioacoustic sounds from your body: tiny sounds produced by your muscles contracting, by your heart beating and even by the blood flowing in your veins.

Like magic.

With the XTH Sense you can listen to these sounds, create music by live sampling them, or use them to create ambient sounds for games. It’s like giving your body a voice.

Here’s how muscle sounds look!

In red you can see the sound of Marco’s forearm muscle as he lifts a weight. In blue, at the bottom, you see the sound frequencies that compose the muscle sound (from 1 Hz up to 140 Hz). This type of visualisation is called a spectrogram.
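To illustrate what one column of that spectrogram contains (a sketch with synthetic data, not the product's code), a naive DFT can pick out the dominant frequency of a short signal window:

```python
import math

SAMPLE_RATE = 2000  # Hz, the XTH Sense's stated sampling rate

def dominant_frequency(samples, sample_rate=SAMPLE_RATE):
    """Naive DFT: return the frequency of the strongest non-DC bin,
    i.e. the brightest row of one spectrogram column."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# A synthetic 40 Hz "muscle tone": 200 samples = 100 ms at 2,000 Hz.
tone = [math.sin(2 * math.pi * 40 * t / SAMPLE_RATE) for t in range(200)]
print(dominant_frequency(tone))  # 40.0
```

Muscle sounds live in the 1-140 Hz band shown above, so even this coarse analysis (10 Hz bins over a 100 ms window) can separate, say, a slow tremor from a strong contraction.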

In addition to our unique bioacoustic sensor, the XTH Sense is embedded with 4 more sensors that track temperature, motion, rotation and direction of your body in 9 dimensions. This offers you a multitude of ways to create.

The motion sensors show how your body moves in space, the temperature sensor assesses your heat changes and the bioacoustic sensor reveals how your muscles move within the body. It’s a truly immersive experience.

We made the XTH Sense so sensitive that it captures the most subtle nuances of your body.

You can create your own micro-gestures using tiny movements of the fingers, subtle changes in your muscular tension, or simply the imperceptible beating of your heart.

A high sampling rate is crucial when creating music, drawings or experiencing games and VR, because it ensures a fluid and smooth interaction.

With a sampling rate clocking at 2000 Hz, the XTH Sense is the fastest device you’ll ever need.

To increase the immersivity of an interactive experience it is important to be engaged with the whole body. The XTH Sense works efficiently and reliably on any part of the body.

The XTH Sense comes with wearable bands that fit all ranges of bodies. In addition, the XTH Sense band can be extended so you can comfortably wear it on your arms, legs or torso.

For a fully immersive experience, using multiple XTH Sense units on different parts of the body, the XTH Sense Double Set is the best way to go.

The XTH Sense is made of innovative materials that make it lighter than anything else before.

The wearable band is made with a special eco-jersey produced in Italy. It’s gentle on the skin, washable and it stays dry. For the wearable case, we use anti-allergic, washable silicone-polyurethane, in white pearl or black obsidian. It’s nice to touch, textured and indestructible.

The XTH Sense is so light that it feels like a feather on your skin.

Place your laptop wherever you want and roam freely in the space around you, even in larger venues.

The XTH Sense uses radio frequencies to transmit sound and data wirelessly. Our original and open wireless technology works reliably up to 15 meters distance in an open space.

Pictured: [radical] signs of life, a large-scale interactive dance piece realized at EMPAC (NY) by Heidi, our CEO, using 10 XTH Sense. 

Want to visualize what’s happening inside your body or add an interactive light to your projects? The XTH Sense is embedded with an RGB LED that shines in sync with your bioacoustic sounds or motion data.

And if you have the skills, you can program the LED with any behavior you want. The LED can also be switched off to make the XTH Sense more discreet.

The XTH Sense integrates smoothly with your own setup thanks to the XTH Software Suite. The suite includes our plug & play software with MIDI and OSC compatibility, and plugins that let you use the XTH Sense inside your favorite third party software.

We also provide you with an API so you can create your own XTH Sense applications.

The XTH Software Suite is released with an open source GNU GPLV2 license.

Get the best out of your biocreative experience with:

  • Open access to raw bioacoustic sound, temperature and motion data
  • OSC and MIDI data transmission: the XTH Sense talks to any compatible software
  • Native plugins for Ableton Live, Max/MSP, PureData, Unity, Python,  & OpenFrameworks
  • Arduino compatibility: program the XTH Sense from the Arduino IDE
  • Flexible API to develop your own apps
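To give a concrete sense of what "OSC data transmission" means on the wire, here is a sketch of a minimal OSC 1.0 message encoder in pure Python (the address /xth/muscle is hypothetical, not a documented XTH address):

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message with float32 arguments:
    null-terminated, 4-byte-padded strings, then big-endian floats."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)  # pad to a 4-byte boundary
    msg = pad(address.encode("ascii"))                     # address pattern
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                        # big-endian float32
    return msg

# One muscle-energy value, ready to send over UDP to any OSC receiver.
packet = osc_message("/xth/muscle", 0.5)
print(len(packet))  # 20 bytes: 12 (address) + 4 (",f") + 4 (float)
```

Any OSC-compatible host (Max/MSP, PureData, openFrameworks and the others listed above) can decode such packets, which is what makes the protocol a lingua franca for this kind of sensor data.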

Below you can see our XTH plugin for Ableton Live in action. As you can see at the bottom of the gif, it is super easy to map sensor data from the XTH Sense (the white module bottom left) to control a synthesiser in Ableton (the grey module bottom right).

It doesn’t matter what your previous experience is or what your physical skills are, if you have a body, you can be a creator with the XTH Sense. The XTH Software Suite is designed to be easy to use, so easy that even children can play with it.

The XTH Sense can amplify inner bodily processes so you can listen to them in the form of music or watch them as live visuals. This is called biofeedback, and it helps you attune to your body.

These are some inspiring projects created by members of our community.


  • Augment your own musical instrument with biocreative interaction
  • Create music through body sounds, physiology and movements
  • Control your favorite software with your body


  • Sonify or visualize your movement by amplifying your bodily sounds
  • Move lights and visuals according to your physiology
  • Augment your voice with accurate gesture control


  • Create personalized sounds, controls or game mechanics for VR and games
  • Make music or visuals with your heartbeat and share it across a network
  • Design biocreative interaction with web or live animations


  • Amplify sounds of cardiovascular and muscular processes
  • Analyze stress and force levels through bioacoustic muscle sounds
  • Create biofeedback systems with your colleagues or students

We care about education and we want to empower the next generation of creators. We have taught over 40 workshops worldwide in the past 3 years and we have seen first-hand the potential of the XTH Sense for teaching creative media.

If you are an educator or you hold workshops, the XTH Sense EDU Pack is just perfect for you. It includes 4 XTH Sense wearables, the XTH Software Suite and detailed tutorials and teaching materials to help you design your own biocreative course.

“We have incorporated the XTH Sense into our courses to initiate our students into this new creative realm. Whether incorporating sound, choreography and new media into exploratory live performance or movement into interactive installation, the open-architecture of the XTH Sense system give a stable foundation and point-of-departure for biosensing, creative coding and a new sensorial imagination.

Thanks to the XTH Team for this astounding and exciting bioperformance research and development platform.”

Our schematics, wearable designs and source code will be available on our website as soon as we are ready to ship. We have chosen a GNU GPLV2 license for the Software Suite, and a Creative Commons Attribution Share-Alike license for the hardware and design files.

You are free to hack your own XTH Sense and create new projects!

We are inspired by the awesome work being done by folks at Arduino, RaspberryPI and LittleBits, who, like us, want to help creative technologists learn and create.

Join us in defining how our bodies will interact with computers in the years to come.

In combination with the XTH Sense, we have ideated the XTH Platform: an online community where creators, like you, can gather to discuss projects with the XTH Sense, share apps and code, and even get paid for on-demand coding.

It is a platform to get your personal projects done while innovating technology, together.


Here’s the bare-bones electronic soul of the XTH Sense.

We have been developing this project so far because we love it. It is a piece of our life and we are sharing it with you in the most open way we can. We chose Kickstarter because it is the home of people like you, like us, who want to actively be part of the future of technology.

The XTH Sense has been thoroughly tested and is ready for large-scale production. We have visited manufacturing companies in Europe and the U.S. to conduct the preparatory work to ensure all is in place to deliver a great product.

We have developed a precise business plan to successfully deliver the Kickstarter rewards and to support long-term sustainability.

We want XTH to be a rich ecosystem combining instruments, software, and ideas from a broad community of creators worldwide. Our long-term vision is to continue inventing new instruments for creators to enable new modes of expression.

Help us bring biocreative interaction to creators all over the world.

The more XTH Sense we manufacture, the less they will cost. This means that if we receive a high enough volume of requests on Kickstarter, we will be able to cut the production costs of the XTH Sense by about a half and use the remaining funds to deliver the XTH Platform faster.

MISSION: We want to establish a new and open field for sensory experiences, so we have created the XTH Sense as a new medium for expression. We believe technology does not only do things for us, it does things to and with us.

EXPERTISE: We are a passionate team of recognized innovators and authorities in creative technology. We have decades of experience in creative media production, academic research and human-computer interaction design.

ETHOS: We believe in openness and sharing of knowledge as a means of people empowerment. We will not change the world by ourselves, we will do it together with you, our community.

In late 2011, Marco, our CTO, had the idea to use sounds from the body to create music, so he started experimenting with a clumsy wired microphone on a prototype board.

With the help of Dorkbot Edinburgh, the Sound Lab at Edinburgh University and his loving parents, Marco created the earliest version of the XTH Sense. He released it with open source licenses and soon other creators started using the XTH Sense. Our core community was born.

In 2012, Marco met Heidi, our CEO. Heidi brought in great new ideas on how to expand the breadth of use of the XTH Sense, and together with MJ Caselden, they created the first wireless prototype.

Then, something clicked, a bold vision of creating not only a tool, but a radical way for everybody to interact with technology. In 2014, Marco and Heidi founded XTH, Inc. and gathered a group of top engineers, designers, developers and artists.

Together, we have completely redesigned the original XTH Sense, extending its functionalities and applications to provide a whole new experience. Quite a long way, no?

We count on a fantastic network of institutions, art centers and research labs which enables us to test ideas and organize events that are meaningful to creators. These are our main partners and supporters:

Building on ten years of research, development and experimentation, 4DSOUND has developed an innovative spatial sound technology that has significantly improved and expanded the possibilities to create, perform and experience sound spatially. 4DSOUND is a fully omnidirectional sound environment. Sound can move infinitely distant or intimately close to the listener: it moves around, as well as above, beneath, in between or right through them.

HARVESTWORKS is a non-profit contemporary art center that presents experimental art in collaboration with their Technology, Engineering, Art and Music (TEAM) Lab. Since 1977 they have supported the creation of work that explores sound and other new and evolving technologies. In line with the historical E.A.T. (Experiments in Art and Technology) they provide an environment for experimentation with technicians, instructors and innovative practitioners in the electronic arts.



AΦE (Esteban Fourmi and Aoi Nakamura) – Dance & Choreography
Susanne Eder – Dance
Maura White – PR & communication
Martina Scala – Campaign editor and motion graphics
That Thing Production – Additional videography
Alessandra Leone – Logo animation
Margherita Pevere – Photography, production, consultancy

Follow us on Twitter
Like us on Facebook
Tweet about the project

Risks and challenges

With any hardware innovation, there are going to be risks and challenges associated with manufacturing and supply chain logistics. To mitigate this risk, we’ve spent the past two years carefully working through iterative development, testing and experience design. We’ve conducted extensive market research across various sectors through hands-on workshops and meeting one-on-one with artists to refine our algorithms for sensitivity and low latency and to ensure comfort and usability.

The XTH Sense comes from 5 years of rigorous scientific research and artistic practice. Marco Donnarumma and Heidi Boisvert, XTH co-founders, have over 20 years combined experience in managing creative teams, directing artistic collaborations and overseeing hardware/software development.

We have also carefully selected our global team, and have assembled a strong advisory committee who understand both the unique engineering and software-design challenges of sensor-based technology for creative expression and the ins and outs of running a sustainable business.

In order to produce a large volume of units to fulfill our Kickstarter campaign with high quality components, while maintaining the lowest possible price for our customers, we’ve identified a streamlined manufacturing approach.

We will print the PCB, then assemble the components, produce the injection mold, and finally insert the PCB into the mold. It’s a fairly straightforward process, but we plan to hire a logistics person to handle the full production pipeline and ensure the XTH Sense is delivered on time to our Backers.

We believe in open design, development, and knowledge-sharing. This ethos informs how we run our company and communicate with the public. We will keep backers abreast of our milestones and any setbacks as we progress. We are not simply promoting a product, but building a community.

We can’t wait to see what you’ll create with the XTH Sense, so we’ll work diligently to ensure that we meet our delivery estimates. Please don’t hesitate to email us directly if you have further questions at

Systorgy, a platform for interactive environments, by Antunez Roca

Systorgy is a platform that offers free tools for authoring and then controlling interactive environments. It is based on software and hardware tools developed since 1992 by the artist Marcel·lí Antúnez Roca and his team of collaborators. Since 2002 these tools have taken the form of the application POL, the core of Marcel·lí's interactive work and of the Systorgy project.

POL lets users connect different interfaces with various programs and actuator devices, script their interactivity, organize interactions into sections and content, and reuse them in new projects. In this first version the Systorgy platform also offers PIXMAP, a library based on openFrameworks that provides extensive video control.

Systorgy also provides the manual and information needed to build two types of control rigs, based on joystick and Raspberry Pi technology.

Robert Lepage and the New York MET Opera 2011-2013 (Ring Cycle)

Lepage’s Ring Cycle proved to be the most technically advanced production the Met has ever embarked upon. Lepage and Met general manager Peter Gelb are on record as having tried to imagine the Ring the way Wagner would have staged it had he had access to twenty-first-century technology. Video projection and a very unusual set required true theatrical innovation and were anything but traditional.

Das Rheingold introduced a high-tech set that rotates, bends and transforms into different shapes — such as a river or a spiral staircase. “It is also a projection screen”, says Lepage. “Whatever configuration it takes, no matter how complicated, it can receive projection and transform itself into all sorts of things. And, of course, the story of the Ring is all about transformation.”


The set of 24 swivelling beams formed a myriad of shapes which, with the aid of complex large-scale video projections, created stunning scenic images. The set and Réalisations' projections were used throughout the cycle, transforming from walls into a ceiling, a forest, cliffs and mountain ranges, and even the surface or the bottom of a river, through digital animation and visual effects that generated simulated 3D imagery.

Though unusual for a Wagner opera, the audience was so enthusiastic about the use of the set and projections during the ride of the Valkyries that it could not restrain itself and started applauding at every performance.

Credits: video: Réalisations, Joël Proulx Bouffard / music: Tab and Anitek (CC License)

Tools Used

Director Robert Lepage chose Realisations to create and integrate 3D interactive effects with the scenography of his production for the Wagner tetralogy.

Realisations combined its video projections with its partner Maginaire's virtual cameras to project computer-generated images on the stage and décor, creating the illusion of 3D holograms.

These illusions, triggered by the performers’ movements and voices, exploit the human eye’s limited ability to judge true volume. Opera patrons did not have to wear special glasses: we developed a new technology that allows projected 3D images on stage to be seen without special eyewear.


Digital Theatre: Teatime with me myself and I (Taiwan)

Very Theatre from Taiwan performs a piece built on video and mobile-phone interaction.
From Click Festival, 16 May 2015

Teatime with Me, Myself and I is a theatrical performance directed by Chou Tung-Yen that rethinks and reframes the shift in the mode of spectating introduced by contemporary artists.

The work inherits the original themes – “mobile phone”, “life full of screens” and “media world” – and features a quasi-improvisational[1] performance jointly executed by Hong Chian-Sang and Inred Liang. It speaks to the modern-day reality of people becoming inseparable from smartphones, tablets and other mobile devices.

  • Very Theatre

Founded by interdisciplinary artist Chou Tung-Yen with sponsorship from Very Mainstream Multimedia Ltd, its members combine expertise in theater and multimedia design and are committed to developing cross-domain multimedia productions that emphasize context. Its aim is to create a new conceptual experience of listening and viewing.
Past works include Emptied Memories, winner of the Interactive & New Media Design category at World Stage Design 2013, as well as entries in the Digital Performing Art Festivals from 2011 to 2014. Debut group production: the multimedia puppetry performance Teatime with me, myself and I, as well as Lights Flowing Out of Frame – Chou Tung-Yen Solo Exhibition.
  • Director/Concept Chou Tung-Yen

Chou Tung-Yen holds an MA in Scenography with distinction from Central Saint Martins College of Art and Design in London and a BFA in Theatre Directing from TNUA. He is the founder and director of Very Mainstream Studio and now a lecturer in the School of Theatre Arts, TNUA. Besides working on film and theatre pieces that are performed and screened internationally, he also dedicates himself to the realm of technology and the performing arts.

Interactive systems workshop: computer vision applied to the performing arts and installation.


Performance with interactive systems describes a hybrid art form that combines the performing arts, video art and installation with the characteristics of interactive technologies, from which new philosophical questions emerge about the virtual body, the augmented stage and the transformation of the spectator’s role.
This course aims to give participants the technical and conceptual tools to design and build an interactive system based on computer vision, able to perceive the movements of the body and use them to control audiovisual events in real time.
We will analyze the evolution of thinking about the relationship between body, machines and performativity, from phenomenology and the idea of the total work of art to the current paradigms of immersive interactive systems. We will study technical aspects of interactive audiovisual systems and their specific applications in the scenographic space. We will learn to program in Processing (Java), a free and extensible programming environment, with a view to designing and building interactive installations that invite us to use the whole body as a creative instrument to generate and control audiovisual events in real time.

– Analyze the technical and aesthetic context of applying interactive systems to performance with the body.
– Develop programming skills, with a specialization in computer vision, in the Processing environment.
– Design a meaningful performative action and build a prototype interactive system controlled live with gestures and body movements.

Areas of application:
Performing arts, audiovisual arts, set and costume design, advertising, architecture, graphic design, education.

Course contents:
– Study of the evolution of the relationship between body, machines and performativity, covering key historical cases from phenomenological thought and the idea of the total work of art to the current paradigms of immersive interactive systems and theories of the virtual body. Analysis of texts by Johannes Birringer, Paul Virilio, Gilles Deleuze, Brenda Laurel, Donna Haraway and others.

– Technical analysis of an interactive installation: sizing; placement of lights, cameras and audiovisual systems. Overview of software and hardware for capturing images and for controlling multiple synchronized projectors.

– Introduction to programming in Processing (Java) and to the structure of a project. Data types and operations, control structures, arrays, and the fundamental operations of programming.

– Fundamentals of digital audio and video. Generation of 2D and 3D graphics and control of movie files. Sound synthesis and control of sound files.

– Studies in computer vision: methods for acquiring and processing images from video cameras using the OpenCV library: detection of contours, faces, colors and motion, and object recognition.
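The camera-based methods above can be illustrated with a minimal frame-differencing sketch (my own illustrative example, not course material; it uses NumPy arrays in place of live camera frames and skips OpenCV entirely):

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Boolean mask of pixels whose grayscale value changed by more
    than `threshold` between two consecutive frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_amount(prev_frame, frame, threshold=25):
    """Fraction of pixels in motion: a single control value that an
    installation could map to sound volume, playback speed, etc."""
    return motion_mask(prev_frame, frame, threshold).mean()

# Two synthetic 8-bit "frames": a bright square shifts by two pixels.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[10:20, 10:20] = 255
frame = np.zeros((64, 64), dtype=np.uint8)
frame[12:22, 12:22] = 255

activity = motion_amount(prev_frame, frame)  # 0.0 = still, 1.0 = everything moved
```

In a real installation the frames would come from a camera via OpenCV or Processing’s video library; the threshold-and-count logic stays the same.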

– Methods for acquiring and processing data from depth cameras such as the Kinect or Leap Motion. Detection of users, movements and contours. Skeleton analysis and interpretation of body data.
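Skeleton data of this kind is just lists of joint coordinates, so basic vector math already yields expressive control values. A sketch of computing the angle at a joint from three (x, y, z) positions (coordinates invented for illustration; no specific Kinect or Leap Motion SDK is assumed):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by segments b->a and b->c.
    a, b, c are (x, y, z) joint positions, e.g. shoulder, elbow, wrist."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# A fully extended arm: shoulder, elbow, wrist in a straight line.
extended = joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))  # ~180 degrees
# A right-angle bend at the elbow.
bent = joint_angle((0, 0, 0), (0.3, 0, 0), (0.3, 0.3, 0))    # ~90 degrees
```

An angle like this can be mapped directly to a sound or video parameter, which is the kind of body-to-media coupling the course builds toward.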

– Studies in interactivity. Methods for designing body-based interactive experiences. Criteria for assessing the qualities of an interactive experience.

– Construction of prototype interactive systems controlled live with gestures and body movements.

All participants will have the opportunity to show the results of their work at the end of the course.

No previous experience is required.

Aimed at: people interested in building interactive systems for the whole body – dancers, performers, actors, choreographers, set designers, architects, advertisers, visual artists, educators and those in related disciplines. Participants are invited to bring ideas for related projects to be developed during the course.

Duration: 24 hours
Dates: 2, 4, 9, 12, 16, 19, 23 and 26 November
Fee: €180

Workshop led by

Belén Agurto

Artist and researcher in the philosophy of technology and digital aesthetics. She holds an MA in Comparative Studies of Literature, Art and Thought from Universitat Pompeu Fabra and a degree in Communication from the Universidad de Lima (Peru), specializing in audiovisual media and interactive systems. Winner of the 2014 Ibermúsicas prize for Sonomapa, a sound art and augmented reality project. She is currently a member of an educational laboratory and of the research group 010, and taught in a training programme in technology and aesthetics between 2015 and 2016.

Álvaro Pastor

Electronic artist and researcher in virtual reality and multimodal interactive systems. He holds an MSc in Cognitive Systems and Interactive Media from UPF (Barcelona) and a degree in Architecture with a specialization in computer graphics from UPC (Lima). Winner of the 2014 Ibermúsicas prize for Sonomapa, a sound art and augmented reality project. Resident at eTOPIA Centro de arte y tecnología (Zaragoza) in 2013, and winner of an Iberescena prize in 2011 in collaboration with the companies Generarte and Raumkay. Director of a medialab in Lima (Peru) from 2006 to 2014, and R&D director from 2011 to 2014.

Emilia Coranty 16
Can Ricart
E-08018 Barcelona

T. +34 93 308 4041
F. +34 93 307 1211

Performance and Media: Taxonomies for a Changing Field

Performance and Media: Taxonomies for a Changing Field, by Sarah Bay-Cheng, Jennifer Parker-Starbuck, and David Saltz

An innovative approach for explicating and mapping work at the media and performance nexus


This timely collaboration by three prominent scholars of media-based performance presents a new model for understanding and analyzing theater and performance created and experienced where time-based live events and mediated technologies converge – particularly those works conceived and performed explicitly within the context of contemporary digital culture.
Performance and Media introduces readers to the complexity of new media-based performances and how best to understand and contextualize the work. Each author presents a different model for how best to approach this work, while inviting readers to develop their own critical frameworks, i.e., taxonomies, to analyze both past and emerging performances. Performance and Media capitalizes on the advantages of digital media and online collaborations, while simultaneously creating a responsive and integrated resource for research, scholarship, and teaching. Unlike other monographs or edited collections, this book presents the concept of multiple taxonomies as a model for criticism in a dynamic and rapidly changing field.
“By drawing distinctions, differences, limits, and oppositions, by naming them with terms that already have a context, history, set of cultural associations, and meanings, the authors ‘create’ the board on which others can play. Bay-Cheng, Parker-Starbuck, and Saltz offer maps for the field (understood as a metaphorical territory) that will allow others to perform operations—creative and/or analytical—that may not have been possible otherwise.”
— Lance Gharavi, Arizona State University
Photo: scene from The Builders Association’s multimedia theater project Continuous City, summer 2008. Photo by James Gibbs.
Sarah Bay-Cheng is Professor of Theatre and Dance at Bowdoin College
Jennifer Parker-Starbuck is Professor of Theatre and Performance Studies at the University of Roehampton, London.
David Z. Saltz is Associate Professor of Theatre and Film Studies at the University of Georgia.

Theatre performance and Technology by C. Baugh

CHRISTOPHER BAUGH is Emeritus Professor of Performance and Technology at the University of Leeds. He is himself a professional stage designer, is Chair of the Society for Theatre Research’s Research Committee, was on the planning committee for The Globe Theatre, Bankside, and is editor of the journal Scenography International (with Christine White).

Throughout history, scenography has played a significant role in theatre, always drawing upon the latest technologies of manufacture and control. In the twenty-first century, it is fast becoming an artistic practice in its own right, engaging with audiences in varied ways. Christopher Baugh considers how change in scenographic identity has impacted upon the place and meaning of performance over the past 300 years.
Thoroughly revised and updated, the second edition, published by Palgrave, discusses:
• moving light technologies
• the Internet as a platform of performance
• urban scenography
• scenography’s role in the creation of memory
• the development of scenography as a collaborative practice.

The poetry (and philosophy) of face projection mapping. My interview to NOBUMICHI ASAI, author of the OMOTE project

Nobumichi Asai is one of the best-known media artists and projection-mapping producers. He graduated from the Department of Science, Tohoku University, and currently belongs to WOW Inc. as creative/technical director and media artist. He is globally acclaimed for the face mapping for Lady Gaga at the 2016 Grammys, “Connected Colors” for Intel’s global campaign, the laser hologram “Light of Birth” exhibited at La Triennale di Milano, and “Ghost in the Shell Virtual Reality Diver”. He has led the field of projection mapping both in Japan and internationally with “OMOTE” and SMAP’s “FACE-HACKING”, and is invited to media art festivals around the world as a keynote speaker and judge. His works pursue innovative visual arts by bringing together advertising, design, and the ideas of both art and programming. He has received a Jury Selection at the Japan Media Arts Festival, the Grand Prize at the VFX AWARD 2015, an Honorary Mention in the Computer Animation/Film/VFX category at Ars Electronica, and many other awards.
WWW.W0W.CO.JP
Anna Monteverdi: We were all impressed by your project OMOTE. Can you tell me what the original idea behind this real-time projection mapping was, and what you think the potential of its applications is?
NOBUMICHI ASAI: First I saw the potential of makeup arts and traditional makeup. There are diverse patterns of makeup and tribal face makeup across countries and eras. They reflect the lives and values of their place and time, which is very interesting. And I realized a face could work as a communication language; it seems almost inevitable. People have developed a sort of ‘face language’ over our long history. My idea originated from how I could bring that content into visual arts. Maybe the OMOTE project can be innovative in that way.

Anna Monteverdi: The face is the place of the largest number of muscles that move expressions and is connected with the inner self. Do you consider this work a sort of “anthropological research” about the religious and sacral theme of the ancient masks?

NOBUMICHI ASAI: Exactly: the face is connected with the inner self. It reflects inner feeling and it is a communication medium, both conscious and unconscious. No two people share the same personality; there are as many identities as there are people.

It was necessary for human evolution to develop complex civilization and to evolve ourselves, the opposite of a uniform unicellular organism. In the process of evolution, we needed to distinguish individuals through their varieties and characters.

Faces differ as much as individuals do, like identification tags.
The shapes and colors of faces vary with nationality, gender, age, personality, feeling and physical condition. No one has the same face as anyone else.

At the same time, face language developed out of the need for communication, because we have to convey feelings and thoughts to each other in order to coexist. I took inspiration from this radical function of faces to pursue the possibilities of face mapping. I don’t know whether it is an “anthropological research” into the religious and sacral theme of the ancient masks; if it shares common points with my projects, that would be interesting.

Anna Monteverdi: Can you describe it technically, and explain the major difficulty in creating it?

NOBUMICHI ASAI: First I receive the texture and the shape of the face in digital format; then it is processed, animated, and projected with a projector. The meaning of the face is transformed by this. In principle, this is an extension of the makeup culture of a long human history. It was technically difficult to project the animation onto a moving face: we faced problems with positional accuracy and latency. If we had not solved those problems, we could not have achieved convincing communicative and visual effects. We made it possible by using the latest technologies.
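Asai does not detail his solution here, but a standard, generic way to mask tracking-to-projector latency is to extrapolate the tracked position slightly into the future. A deliberately simplified constant-velocity sketch (all numbers hypothetical; this is not WOW’s actual pipeline):

```python
def predict_position(p_prev, p_curr, dt, latency):
    """Extrapolate a tracked 2D point `latency` seconds ahead,
    assuming constant velocity between the last two samples."""
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    return (p_curr[0] + vx * latency, p_curr[1] + vy * latency)

# A face landmark moved 2 px right over one ~60 fps frame (16.6 ms);
# render it where it will be in 50 ms, when the projected light lands.
predicted = predict_position((100.0, 50.0), (102.0, 50.0), 0.0166, 0.050)
```

Real systems refine this with filtering (e.g. Kalman-style predictors) so that fast head movements do not make the projection overshoot.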

Anna Monteverdi: Holograms and laser in Light of Birth: in the introduction to the project you propose a “philosophic explanation”: “The notion that an object of light exists in space conjures up an image of a ‘substance’ being born”, and you quote Plato’s Theory of Forms. Can you explain this statement, and how much these thoughts have inspired your works in general?

NOBUMICHI ASAI: There are questions related to ideas at the base of my works. The questions lie in ‘idea’ and ‘reality’. In programming, we can see them as ‘class’ and ‘instance’. It can also be seen in the relationship between ‘protein’ and ‘organism’ generated by ‘DNA’. In my perception, ‘information’ is essentially equivalent to ‘existence’.

But the information of ‘existence’ may be a matrix of 0s and 1s; that does not mean materialism or atheism. A ‘meaningful matrix’ has ‘life’ and essential ‘consciousness’ inside. For example, if you convert beautiful Beethoven music into a 16-bit WAV file and digitize it completely, the essence of the music is not lost. Dismantling it into 0s and 1s does not mean you have unraveled its essence. If we dismantle human beings into the four DNA bases, ATGC, it does not mean we have unraveled human beings; it just points to a deeper ‘consciousness’ within. As science develops, we have to recognize ‘something great’ that is invisible from scientific viewpoints. I always look at that ‘something’, and at the boundaries between human beings and machines, beyond technological developments.

One of my works that came from this idea is Kagami.

Anna Monteverdi: When we speak about digital performance, we speak of a live format in which there is an ever-growing interweaving of languages, with an impressive presence of interactive media. What is your definition of technological theater, and what do you believe is the right direction for it?

NOBUMICHI ASAI: In short, it can be described as ‘the reproduction of the world’. Not just theatre: films, literature and the arts draw visions of how people should be out of real difficulty, and tell those stories. That could be called philosophy and religion. It is an important process for human beings’ mental evolution, alongside biological evolution; the arts are important for our development. I would say interactive media let people virtually experience that kind of art, the way physical equations in code execute a simulation of a natural phenomenon and reproduce it. Media artists have that kind of desire in their spirits. Eventually, I wonder whether people admire an ideal person, a kind of omnipotent god, and would like to create one. That is linked to today’s robotics, the development of AI and the mythology of the singularity.

ANNA MONTEVERDI: Lev Manovich speaks about a dominant software culture: do you think this definition fits your work? And, linked to this, can you explain your statement “At the beginning there was the code”?

NOBUMICHI ASAI: The answer is clear; it was already implied in my previous answers. Previous art works can be converted into data of 1s and 0s, as Lev Manovich said. But we have to remember there is ‘something essential’ in the code. For example, imagine a human life as a big swirl on the sea. It somehow shows an existence with energy and force, but scientific analysis would find just sea water in a test tube. It would be a fault of digital materialism to hide that ‘something’. In Buddha’s enlightenment and in psychedelic awakening, we experience universal truth; we cannot say that is unscientific. In my perception, an instinctive methodology inductively draws answers from a vast number of aspects of life and nature. It is similar to the process of reaching literary truth, and it is a process of learning through experience, just like the process by which a computer gains ideas through deep learning. Eventually we could find the original code (the ‘beginning code’) after massive big data and infinite deep learning.

Anna Monteverdi: In your works you use open source codes like OpenCV. Can you tell us the reasons for this “open attitude”?

NOBUMICHI ASAI: I am driven by a feeling similar to the desire for universal truth. Open source code, and the physical and mathematical truth within it, includes a ‘secret of existence’. Maybe coders are attracted to that and would like to partially experience being a creator.


PERHAPS ALL THE DRAGONS – Horror Vacui [#3] – 2014 […in our lives are princesses who are only waiting to see us act, just once, with beauty and courage]

One round table with thirty seated screens at one side of the table and thirty audience seats at the other side. Thirty one-on-one narratives. The spectator can choose five he would like to see.


A famous pianist realizes on stage that she studied the wrong concerto – a neurosurgeon swaps head and body of two monkeys, they stay alive – in Japan there are 700 000 people living as hikikomori, withdrawn in their sleeping rooms for at least a year, away from social life – Six degrees of separation, a theory that everyone is six or fewer steps away from any other person in the world. A large array of dispersed stories will be offered to the audience. Berlin will encounter the people behind the little or great stories in international magazines, newspapers, specialized internet sites, youtube. The themes brought up in these stories will be eclectic: from a philosophic proposition, over a scientific detail, to an anecdote…

Thirty stories, transformed to thirty filmed monologues with a dramaturgy that gives them a certain coherence. The duration of each narrative will be exactly the same and multiple interactions will intervene at different moments. A one–on–one performance for 30 spectators, around one round table.

Carl Fillion et le travail avec Robert Lepage pour la scénographie

Carl Fillion, born in 1966 in the province of Québec, Canada, graduated from the scenography programme of the Conservatoire d’Art Dramatique de Québec in 1991. Drawing on his technical experience, he was very quickly in demand to design sets with the most prominent professionals in Québec theatre. He soon distinguished himself with original theatrical scenography built on spaces in motion and the use of technological means. He now counts more than 40 scenographic creations for theatre, opera, circus, multimedia shows, musicals and museums.

In 1993 the director Robert Lepage called on Carl Fillion to design the set for Les 7 branches de la rivière Ota, which was presented in several cities in Europe and in Japan. Following this first collaboration, Fillion became one of Ex Machina’s favoured designers and has signed more than fifteen productions alongside Robert Lepage, including Elseneur (1995), Le songe d’une nuit d’été (1995), La géométrie des miracles (1997), La Celestina (1998), Jean-Sans-Nom (1999), the opera La Damnation de Faust (1999, in Japan, later revived at the Opéra Bastille in Paris and then at the Metropolitan Opera in New York), La casa azul (2001), La Celestina (2004), the opera 1984 (2005), the opera The Rake’s Progress (2005), the opera Le Rossignol et autres petites fables (2009), and Totem (2010) for Cirque du Soleil. This collaboration, across theatre, opera and musical productions, explores and renews the movement of stage space with cutting-edge technology in every production, and has helped to distinguish Fillion’s original work, allowing him to acquire a strong expertise that is unique of its kind. Fillion and Lepage are currently working on new opera projects, including Wagner’s tetralogy Der Ring des Nibelungen at the Metropolitan Opera in New York in 2010-12.

Beyond his work with Robert Lepage, Carl Fillion has designed other theatre and opera productions with various directors in Québec and Europe, including The Burial at Thebes (Antigone) at the Abbey Theatre in Dublin in 2004, and Simon Boccanegra at the Liceu in Barcelona in 2008.

Alongside his design work, Carl Fillion has contributed to training new scenographers, teaching for ten years at the Conservatoire d’art dramatique de Québec from 1992 to 2002, and at the National Theatre School in Montréal from 2000 to 2002.

Lecture by scenographer Carl Fillion at the École d’architecture on 18 January 2011. In collaboration with the Conservatoire d’art dramatique de Québec, the LANTISS of Université Laval, and the Centre d’études collégiales de Montmagny.

The Tempest & Intel-Royal Shakespeare Company

The Tempest runs November 8 to Jan. 21 at the Royal Shakespeare Theatre in Stratford-upon-Avon.


If you think computer-generated special effects are only for blockbuster movies, then the 2016 stage production of The Tempest by the Royal Shakespeare Company (RSC) might surprise you.

Working with Intel and The Imaginarium Studios – the performance capture studio co-founded by actor Andy Serkis and Jonathan Cavendish – the RSC wanted to reinvent Shakespeare’s epic tale in a way that could excite and amaze 21st-century audiences. The challenge was to use the latest technology to create something live theater audiences had never seen before.


For the first time ever, a digital reinvention of Ariel the sprite, one of The Tempest’s key characters, will take to the stage. And unlike movies and video games, which rely on post-production rendering and integration, in this production, Ariel’s avatar will be performing on stage alongside human actors in real time.

“Because Ariel is not of this world, we can be really imaginative,” said Sarah Ellis, Head of Digital Development at the RSC. “We can do things we haven’t been able to do before in terms of how we show the character live on stage. We can make him very small or we can make him the width of the stage with the technology we are using.”

Played by an actor wearing a motion-capture suit, Ariel’s movements are captured by inertia-detecting sensors, rendered using two Xeon-powered servers, and then projected onto the stage as a computer-generated avatar.



Performance Fuma-Kai (ENRA – motion graphics performing arts)

The Japanese dance group ENRA performs a choreographic piece entitled “FUMA-KAI” in front of a projection wall, where the dancers interact with the projected animations that serve as the set. Under the direction of Nobuyuki Hanabusa, the ENRA troupe consists of the dancers Maki Yokoyama, Saya Watatani, Tachun, Yusaku Mochizuki and Tsuyoshi Kaseda. This multimedia dance show was created to celebrate Tokyo’s selection to host the 2020 Olympic Games.
Music: Yuko Sonoda (Hanabusa Remix Version)

Nobumichi ASAI: Connected Colors, real-time projection on the face

A new face-projection-mapping work using real-time face tracking, “Connected Colors” is included in Intel Corporation’s “Experience Amazing” campaign.
“Connected Colors”, planned and produced under the technical direction of media artist Nobumichi Asai (WOW), conceptualizes the coexistence of life. The motif is centered on the colors of nature: an expression of electric make-up in the form of various colors intermingling and harmonizing. Expressed from beyond the viewpoint of humanity, “Connected Colors” aims to convey the human desire for harmony and the coexistence of life on Earth.

Official Release Site


Creative Director: NOBUMICHI ASAI (WOW)
International Liaison: YUI TANAKA (ROBOT)
Director of Photography: HIDEYUKI HASHIMOTO
Stylist: RYO KURODA (Vivid)
Production Manager: TAKESHI DAIMON (ROBOT)


Polymedia, a technical partner of the Moscow Musical Theatre for the play All about Cinderella, helped to prepare a new production. In the theater’s fifth season, the company worked on staging a rock-opera version of Fyodor Dostoevsky’s classic novel Crime and Punishment in Moscow. To realize the director’s original idea, Polymedia prepared an innovative solution: 6D video mapping.

This production is the first in Russia to use the innovative BlackTrax real-time tracking system.

The system allows video mapping onto moving objects in real time. Until now, the technology had been used only by foreign filmmakers and directors, and it brought international fame to shows such as Madonna’s concerts and the performances of Cirque du Soleil.

The project integrates two complex systems: a tracking system and a video mapping system.

The tracking system follows movement on stage and the rotating scenery using special cameras and sensors; it reads data about the shape and configuration of the scenery and transmits it to the video mapping system, which in turn adjusts the content live so that the projection is displayed correctly on the scenery, based on the location and dynamic rotation of the objects on stage.
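In essence, the mapping side applies the tracked pose of each set piece to its content before projection. A toy 2D stand-in for that step (illustrative only; BlackTrax and professional media servers work in 3D with calibrated projectors):

```python
import math

def follow_target(points, position, angle_deg):
    """Rotate 2D content points by `angle_deg` around the origin and
    translate them to `position`, so projected content stays glued
    to a tracked, rotating piece of scenery."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    px, py = position
    return [(x * cos_a - y * sin_a + px, x * sin_a + y * cos_a + py)
            for x, y in points]

# A unit square of content, re-mapped onto scenery that has rotated
# 90 degrees and moved to stage position (5, 3).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
mapped = follow_target(square, (5.0, 3.0), 90.0)
```

Running this update on every frame of tracking data is what keeps the projection aligned while the scenery moves.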

Eight projectors serve the stage; during the performance the scenery changes position 36 times, and the mapping content changes with it each time. The integration of such an intelligent system with audiovisual equipment creates a dynamic 3D effect, but, unlike a 3D movie, it all happens live, with no possibility of a cut.

“We all became spectators of a unique action, and it is not only the merit of the theater team – a brilliant creative team – but also of the Polymedia team. Thanks to the Polymedia engineers’ ideas, the Moscow Musical Theatre’s rock opera Crime and Punishment is one of the most technologically advanced performances in Russia,” said Alexander Novikov, director of the theater.

Andrei Konchalovsky began work on the production of the rock opera in September 2014, calling its format a “poly-genre fusion performance”.

The rock opera written by composer Eduard Artemyev in 2009 will serve as the basis for the music, and the action will be set in a modern-day city. The opera will be “harsh” and “without the sentimentality of the Dostoevsky novel, of which Vladimir Nabokov was so disdainful,” said Mikhail Shvydkoi, art director of the Moscow Musical Theater, where the musical will be staged.

The rock opera debuted on 17 March 2016.