Category archive: augmented reality

False Positives / Esther Hovers (NL)

Beautiful project by ESTHER HOVERS, seen at Ars Electronica 2018

The project False Positives is about intelligent surveillance systems: cameras that are able to detect deviant behavior within public space. False Positives revolves around the question of normal behavior. It aims to raise this question by basing the project on eight different ‘anomalies’. These so-called anomalies are signs in body language and movement that could indicate criminal intent. It is through these anomalies that the algorithms are built and the cameras are able to detect deviant behavior. The eight different anomalies were pointed out to me by several intelligent-surveillance experts with whom I collaborated for this project. The work consists of several approaches, photographs and pattern drawings. Altogether, these form an analysis of different settings in and around the business district of the de facto European capital: Brussels.

Prolonged pausing, groups of people suddenly splitting up, a woman coming to a stop exactly at the corner of the street, a man running through a slow-moving crowd. All of these can be classified as deviant behavior within the context of public space. To find out what constitutes deviance, we first need to ask ourselves the question: what is considered normal?
Public security is a growing concern throughout Europe. To the eye of the camera every person is a possible suspect, every person a possible perpetrator.
Will intelligent surveillance help us to safeguard our need for security?

This video mimics the interface of an intelligent surveillance system. It was made in the business district of Brussels in 2015.

© 2015 Esther Hovers


[help me know the truth] Mary Flanagan (US) – a software-driven participatory artwork for Ars Electronica

[help me know the truth] is a software-driven participatory artwork in which visitors first snap a digital self-portrait (or “selfie”) at the gallery. The image is then sent around the gallery’s network and appears on digital stations located around the gallery. Using the tools of cognitive neuroscience, the faces are manipulated with noise patterns to literally, through time and user input, ‘construct’ the perfect stereotype.

On digital stations in the gallery, visitors are asked to choose between two slightly altered portraits to match the text label shown. As visitors select slight variations of the images over time, differing facial features emerge from what are otherwise random patterns, revealing unconscious beliefs about facial features and tendencies related to culture and identity.

http://maryflanagan.com/work/help-me-know-the-truth/


[help me know the truth] utilizes Reverse Correlation to investigate how psychological responses to people’s faces might uncover both positive and negative reactions among those who visit the gallery. The viewer/participant chooses between two versions of the same selfie to which different computational noise has been applied. The faces appear somewhat blurry, so the viewer/participant chooses the blurry image that better matches the given criterion. The list of prompts for visitors ranges from the politically charged to the taboo: “Choose the victim” falls after “Indicate the leader” but might lead to the timely “Select the terrorist.” Other judgements passed by visitors include identifying which face is the most angelic, kind, criminal, etc. Through choosing faces manipulated by particular noise patterns, facial features emerge that reveal larger thoughts and beliefs about how we fundamentally see each other.
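The reverse-correlation procedure itself can be sketched in a few lines of Python. This is a simplified illustration with invented names and parameters (`noisy_pair`, `reverse_correlation`, a flat 64×64 grey “face”); it is not the installation’s code, nor Dr. Dotsch’s open-source RC software. Opposite noise is added to the same base face, the participant picks one image per trial, and averaging the noise of the chosen images gradually makes a “stereotype” emerge:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def noisy_pair(base_face, noise_strength=0.35):
    """Return two versions of the same face, distorted by opposite noise."""
    noise = rng.normal(0.0, noise_strength, size=base_face.shape)
    return base_face + noise, base_face - noise, noise

def reverse_correlation(base_face, choose, n_trials=500):
    """Average the noise of every image the participant chooses; systematic
    preferences accumulate while genuinely random choices cancel out."""
    chosen_noise = np.zeros_like(base_face)
    for _ in range(n_trials):
        img_a, img_b, noise = noisy_pair(base_face)
        # `choose` plays the participant: True means img_a matches the prompt
        chosen_noise += noise if choose(img_a, img_b) else -noise
    # The "classification image": the base face plus the averaged chosen noise
    return base_face + chosen_noise / n_trials

# Toy run: a flat grey "face" and a participant who prefers brighter top halves
base = np.full((64, 64), 0.5)
prefers_bright_top = lambda a, b: a[:32].mean() > b[:32].mean()
stereotype = reverse_correlation(base, prefers_bright_top)
```

Because the noise is random, any feature that survives the averaging can only come from the participant’s systematic choices: that is what makes the emerging image a portrait of the chooser’s bias rather than of the base face.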


Why do people—even internationally—tend to gravitate towards similar stereotypes? Bias against ‘the other’ is a dangerous impediment to a just twenty-first-century society, in part encouraged by our own neurological structures, which have not caught up with our lived realities. Hyper-scale image-based categorization is being deployed in government and surveillance programs worldwide. These processes demand our critical attention. Where do we find the “truth” about each other this way?

[help me know the truth] raises awareness about the unconscious stereotypes we all carry in our minds, and how these beliefs become embedded in myriad software systems including computer vision programs. My intent is to both utilize and question how computational techniques can uncover the categorizing systems of the mind, and how software itself is therefore subject to socially constructed fears and values. [help me know the truth] provokes discussion about the types of biases that surround us: that we are under global technological surveillance is troubling; that the humans involved in crafting these systems, the systems themselves, and the people brought in to make final calls on various warnings, alerts, and arrests are all products of unconscious biases, is troubling. Perhaps software systems do not help us know the truth at all.

Biography:

Mary Flanagan (US) plays with the anxious and profound relationship between technological systems and human experience. Her artwork ranges from game-based installations to computer viruses, embodied interfaces to interactive texts. In her experimental interactive writing, she’s interested in how chance operations bring new texts into being. Flanagan’s work has been exhibited internationally at venues including The Whitney Museum of American Art, The Guggenheim, Tate Britain, Postmasters, Steirischer Herbst, Ars Electronica, Artist’s Space, LABoral, the Telfair Museum, ZKM Medienmuseum, and museums in New Zealand, South Korea, and Australia. She was awarded an honoris causa in design in 2016, was a fellow in 2017 at the Getty Museum, and in 2018 she was a cultural leader at the World Economic Forum in Davos, Switzerland.

Credits:
Thanks to Jared Segal, Kristin Walker, Danielle Taylor; open source RC software by Dr. Ron Dotsch.
Supported by: The Leslie Center for the Humanities, Dartmouth College

VFRAME by Adam Harvey: machine learning for human rights researchers and investigative journalists

VFRAME is a computer vision toolkit designed for human rights researchers and investigative journalists

VFrame

This project by Adam Harvey creates machine-learning tools for human rights researchers and investigative journalists – the video above demonstrates visual recognition of munitions and shells:

Accelerating Human Rights Investigations with Computer Vision

People caught in conflict zones such as the Syrian conflict frequently post videos online that contain critical information for human rights investigations. But manually reviewing these videos is expensive, does not scale, and can cause vicarious trauma. As an increasing number of videos are posted, a new approach is needed.

VFRAME is a computer vision framework being developed to address these demands. It provides researchers and technologists with access to state-of-the-art tools to locate objects of interest, extract visual metadata, query related media, organize and annotate custom datasets, and provide filtering for traumatic content.

VFRAME is currently working directly with the Syrian Archive to establish the most effective and relevant tools for improving their workflow. Phase 1 of VFRAME will research and develop the following capabilities:

Visual Taxonomy
To understand how objects can be annotated for training datasets, they are first structured into a visual hierarchy that defines and links signature visual characteristics
Custom Datasets
Using the visual taxonomy, datasets can be constructed using a new web-based annotation platform developed specifically for VFRAME
Object Detection
Using the custom datasets, state-of-the-art object detection algorithms are trained to locate and quantify objects of interest in large video datasets
Scene Metadata
Additional visual metadata such as scene attributes will provide researchers with more keywords to search and link videos
Graphic Filtering
Manually reviewing videos from conflict zones can cause vicarious trauma. VFRAME is creating graphic filtering capabilities to provide relief from reviewing traumatic content
Visual Metadata API
VFRAME converts videos into RESTful API endpoints that can be integrated into other workflows and custom search engines
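To make the object-detection capability above concrete, here is a minimal Python sketch of the general technique: run a detector over sampled video frames and log timestamped hits, so a researcher can jump straight to them instead of watching the whole video. The names (`scan_video`, the `detect` callable) are invented for illustration; this is not VFRAME’s actual code or API.

```python
import cv2

def scan_video(video_path, detect, every_n=10, conf_threshold=0.5):
    """Run `detect(frame) -> [(label, confidence), ...]` on sampled frames
    and return timestamped hits for review."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS is missing
    hits, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:          # sample frames to keep it fast
            for label, conf in detect(frame):
                if conf >= conf_threshold:
                    hits.append({"time_s": round(frame_idx / fps, 2),
                                 "label": label, "confidence": conf})
        frame_idx += 1
    cap.release()
    return hits

# `detect` would wrap whatever trained detector is in use (for instance a
# model trained on a munitions dataset); it is deliberately left a parameter.
```

Logging only timestamps and labels, rather than forcing a human to watch everything, is also what makes the graphic-filtering goal above possible: traumatic segments can be flagged and skipped.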

Sebastian Errazuriz – A PAUSE IN THE CITY THAT NEVER SLEEPS

Times Square Electronic Billboards | January 1, 2015 – January 31, 2015

On January 17, 2015, Performance Space 122’s COIL 2015 Performance Festival concluded with a special gathering on Duffy Square with artist Sebastian Errazuriz and Times Square Arts. Chilean artist Errazuriz’s Midnight Moment for January 2015 was an experiment with the contagious phenomenon of yawning: a black-and-white moving image of a repeating yawn, positioned at the city’s touristic core, contrasted with the striking, colorful Times Square advertisements and acted as a disruptive comment on contemporary society, its needs and frustrations.

The participants in the January 17 “yawn-in” assembled to test their resistance to the power of the yawning epidemic, which spread across the screens of Times Square every night in January 2015. Midnight Moment is a monthly presentation by The Times Square Advertising Coalition (TSAC) and Times Square Arts. That month’s presentation was in partnership with Performance Space 122’s COIL 2015 Festival.

Snapchat and augmented art: JEFF KOONS cuts the ribbon, and the work is immediately “vandalized” to reclaim public digital space

Jeff Koons inaugurates Snapchat’s augmented-art gallery

In practice, these works can be viewed in AR through the app’s cameras and the World Lenses feature, provided, of course, that you are in specific locations.

In essence, the works will be “placed” in various spots around the world, and users of the little-ghost app will be able to discover and capture them when they are nearby. A special “lens” will appear, and an indicator will guide them to the designated spot where they can view the augmented-reality piece by framing it with their smartphone. They will learn of a work’s presence when they receive a Snapchat Art notification, but apparently they will also be able to hunt for works on their own thanks to SnapMap, which will also include the artists’ pieces. Koons’s strange animals are placed, for example, in London, New York, Chicago, Las Vegas, Rio de Janeiro, Washington D.C., Toronto and Sydney.
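The unlock-when-nearby mechanic can be illustrated with a few lines of Python. This is a hypothetical sketch of the general idea, not Snapchat’s implementation; the coordinates, radius and artwork names are invented for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Hypothetical placements (coordinates are illustrative, not the real ones)
ARTWORKS = {
    "Balloon Dog, Central Park": (40.7829, -73.9654),
    "Koons piece, Hyde Park": (51.5073, -0.1657),
}

def unlockable(user_lat, user_lon, radius_m=300):
    """Return the AR pieces close enough for the lens to unlock."""
    return [name for name, (lat, lon) in ARTWORKS.items()
            if haversine_m(user_lat, user_lon, lat, lon) <= radius_m]

print(unlockable(40.7825, -73.9650))   # near Central Park -> ['Balloon Dog, Central Park']
```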

SOURCE http://www.repubblica.it/tecnologia/social-network/2017/10/03/news/snapchat_si_da_all_arte_jeff_koons_in_realta_aumentata-177277315/

The augmented-reality version of Jeff Koons’s Balloon Dog has been vandalized. The installation, created by the American artist for the launch of Snapchat Art, has already been covered with tags, as if it were located not virtually in Central Park but in one of New York’s worst neighborhoods. The Chilean artist Sebastian Errazuriz claimed authorship of the gesture, publishing shots of the work on Instagram accompanied by a concise message that nonetheless turned the spotlight on an issue of growing relevance: the protection of public digital space.

SOURCE: REPUBBLICA


The first Museum of Augmented Urban Art, MAUA, is in Milan

MAUA – Museo di Arte Urbana Aumentata (Museum of Augmented Urban Art) – is an open-air gallery outside Milan’s city centre, comprising more than 50 street-art works, each animated with virtual content that can be experienced through augmented reality.

MAUA came about thanks to the “Bando alle Periferie” call for proposals funded by the City of Milan, through which 14 cultural projects were selected and financed, out of more than 150 proposals received, for the areas of Giambellino-Lorenteggio, Adriano-Padova-Rizzoli, Corvetto-Chiaravalle-Porto di Mare, Niguarda-Bovisa and Qt8-Gallaratese.

The MAUA works were selected by the residents of the neighbourhoods in an advanced experiment in distributed curatorship, involving the collective, participatory identification of the works and a shared discussion of their perceived meaning and their value for the streets of the city. The works were documented by students and neighbourhood associations, together with teachers from the CFP Bauer school, and at the end of the photography workshops the 10 most representative works were selected from each of the 5 neighbourhoods.

A DIFFUSED, OPEN-AIR MUSEUM, OFF THE USUAL ROUTES

50 young animation designers then processed the images during an augmented-reality workshop and produced 50 original pieces of digital content that now animate the selected works.

Today the MAUA works are freely accessible via the complete map below, via the Bepart app, or through the printed catalogue distributed free of charge at BASE Milano from 17 December 2017.

The project was led and managed by a broad partnership comprising the social cooperative Bepart (lead partner), the centre for culture and creativity BASE Milano, the publisher Terre di Mezzo, the Bauer school of photography, the design laboratory PUSH. and the Fondazione Arrigo e Pia Pini.

Augmented culture by GOOGLE ARTS

https://artsandculture.google.com/project/cyark

CyArk and Google Arts & Culture are launching “Open Heritage,” the largest 3D collection of heritage data, now available to browse and download at g.co/openheritage.

Thanks to Google’s new Arts & Culture project, developed in collaboration with a non-profit called CyArk, you can tour some of the most remote and historically significant locations in the world. The Open Heritage project provides virtual access to 26 world heritage sites in 18 different countries, complete with data about each location, 3D structural models, and laser scanning technology.


#MarcoPucci’s tutorials: UNITY TUTORIAL #1 – AUGMENTED REALITY

To see all the tutorials: www.marcopucci.it

IF YOU HAVE NOT INSTALLED UNITY, CLICK HERE (FOR PC) OR HERE (FOR iOS)

In this tutorial we build our first augmented-reality application for Android and iOS using Unity and Vuforia.
Unity is a piece of software for creating games, virtual reality and AR, while Vuforia is a library that must be integrated with Unity and handles recognition of the visual marker.

Continues at:

http://www.marcopucci.it/unity-tutorial-1-realta-aumentata/


Augmented reality for interior design

3D Catalog for furniture

https://catalog.sayduck.com/

According to Centric Digital’s study of augmented reality in retail among US customers, furniture is the top product people want to shop for with AR (60%). Other products include clothing (55%), groceries (39%), shoes (35%) and jewelry (25%). So even though the technology is not fully here yet, AR furniture shopping is already something customers know about.

How do AR apps work

Augmented reality is a way of overlaying virtual animated objects onto real-world surroundings using mobile devices: a live environment augmented by digital data – sounds, images, videos.

There are 2 groups of AR apps:

  • Marker-based apps
  • Location-based apps

Marker-based apps work with image recognition: the camera scans an image (the marker) and the app then overlays a virtual image on the phone screen. For example, many apps read QR codes and present additional information. (A minimal marker-detection sketch follows below.)

Location-based AR apps use GPS to locate places nearby and/or to offer directions, etc.
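As a concrete illustration of the marker-based approach described above, the Python sketch below uses OpenCV’s ArUco module (it assumes OpenCV with the aruco module available, i.e. opencv-contrib-python or opencv-python 4.7+) to detect markers in a live camera feed. It is a generic example, unrelated to any specific app mentioned here; a real AR app would render a virtual object onto the detected corners instead of merely outlining them.

```python
import cv2

# ArUco markers are a standard printed "marker" for marker-based AR.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)                     # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # A real AR app would warp/render virtual content (say, a 3D sofa)
        # onto the marker's corners; here we only outline what was found.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker-based AR", frame)
    if cv2.waitKey(1) == 27:                  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```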

The idea for using augmented reality apps is simple:

  • Download the app
  • Get the marker image/code/flyer
  • Place the marker in your environment
  • Point the device at the marker and interact with augmented reality

AR apps are mostly used in commerce, entertainment, games, retail, medicine.

Benefits of augmented reality in furniture

Overall, using augmented reality applications businesses can:

  1. Personalize shopping experiences
  2. Entertain and amaze customers
  3. Engage and retain customers
  4. Be ahead of competition

While customers can enjoy AR tools in ways such as:

  1. Find and try products remotely
  2. Explore brand new ways to shop
  3. Make more informed purchasing decisions
  4. Combine buying with entertainment

For furniture retailers, the ‘try before you buy’ approach is crucial, and offering this option to customers is a winning ticket. Offering a still-futuristic technology like AR has news value in itself, to begin with: the mere fact that a retailer has an AR app instantly gets attention and can potentially increase sales. A good example is the GAP chain of stores, which opened augmented dressing rooms, got big publicity and, as a result, saw higher traffic to its website.

AR is also a fine tool for personalization, especially when it comes to furniture. Shoppers really want the ability to see how items will look in their homes or offices. IKEA was the first furniture company to break the AR ice, introducing a catalog app for exactly these needs: a user takes a photo of a room with the app, browses the catalog and tries selected items in that room.

How can AR in furniture help expand your business

Being at the forefront of modern technology, you can benefit from augmented reality in the following ways:

Showcasing. Retailers can showcase their products visually and interactively with AR tools. A furniture store may offer customers a way to see how a table or a bed would look in their house.

Beat competitors. With technology as advanced as AR you can compete and easily get ahead of the competition. Offering innovative shopping experiences leads customers your way, no doubt about it.

Younger customers and swag (in a good sense). Augmented reality apps are trending among younger audiences, so why not use them as a new advertising option? And unlike virtual reality apps, which require additional equipment, AR is far more widely available – there are literally millions of smartphone owners.

Risk-free trials. With an AR app, people can place a bookshelf in their room with a few taps on a phone, or see how a sofa would look in brown or red. The risk of product returns, and the associated logistical expenses, is minimized.

New marketing opportunities. AR offers new ways to promote your brand, provide product information, present new products and offer helpful 3D experiences to customers. And, most importantly, to attract more customers.

SOURCE https://thinkmobiles.com/


Augmented technology for museums: ETT for the Gallerie dell’Accademia in Venice


LINK

http://www.gallerieaccademia.org/

 CHALLENGE

  • Offer visitors the opportunity to deepen their engagement with the guided route, enabling a true multimedia journey through Venetian art of the 17th and 18th centuries
  • Make the museum experience more intuitive and, above all, suitable for everyone
  • Promote digital learning, guaranteeing different levels of depth and interaction

 SOLUTION

Innovative technologies (multimedia totems, video walls, an app with augmented reality, beacon technology) that give rise to a new way of getting to know art

 RESULTS

Technological innovation that allows visitors to explore and enjoy the Galleries’ new wing according to the best standards of today’s museology


An innovative project, promoted by the Ministry of Cultural Heritage and Activities and Tourism, involving Venetian Heritage as coordinator and Samsung as co-financer and technology provider, in partnership with ETT S.p.A. for the multimedia installations.

CHALLENGE

Guided and themed routes, characterised by different languages and levels of depth, make the visit suitable for every kind of audience: from adults and students who want an easy, concise approach to art, to children and their parents, who can enjoy a more intuitive and interactive museum experience, with the added possibility of following the routes laid out in the rooms directly and independently. A system based on beacon technology (Bluetooth 4.0 transmitters placed along the route) interacts directly with the smartphones and tablets on which the app is installed, locating visitors inside the museum and offering specific experiences and content dedicated to each individual work.
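A rough sketch in Python of how beacon-driven content selection of this kind works: the app scans for nearby Bluetooth beacons, identifies the strongest known one, and serves the content mapped to it. The beacon IDs, content strings and function names are invented for illustration; the internals of the actual ETT system are not public.

```python
# Each Bluetooth beacon broadcasts a (major, minor) ID; the app serves the
# content mapped to the strongest (i.e. nearest) known beacon it can hear.
CONTENT_BY_BEACON = {
    (1, 101): "Room 1 intro: Venetian art of the 17th century",
    (1, 102): "Work 12: what the restoration revealed, in AR",
    (2, 201): "Room 2 intro: the 18th-century collection",
}

def pick_content(sightings):
    """`sightings`: list of ((major, minor), rssi) pairs from a BLE scan.
    RSSI is in negative dBm, so a value closer to zero means a nearer beacon."""
    known = [s for s in sightings if s[0] in CONTENT_BY_BEACON]
    if not known:
        return None
    nearest_id, _rssi = max(known, key=lambda s: s[1])
    return CONTENT_BY_BEACON[nearest_id]

# The visitor stands nearest to beacon (1, 102):
print(pick_content([((1, 101), -78), ((1, 102), -55), ((2, 201), -90)]))
```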

SOLUTION

At the entrance, six 32-inch touch monitors welcome visitors and give access to information about the layout of the five new rooms. On touching a monitor, the visitor is greeted by a welcome message from the film director James Ivory, who introduces the visit. The visitor can then view in-depth information about the works displayed in the 5 new rooms (the architectural restoration project was designed by the architect Tobia Scarpa), consult the themed routes defined within the museum, and see the works’ previous locations in the city of Venice. Favourite routes can also be downloaded to one’s own mobile device, and for each work visitors can express a preference by sharing it on the major social networks.

Each of the three exhibition rooms of the Grandi Gallerie is equipped with two touch monitors that help the visitor understand the theme of the room and the individual works it contains. The first is a static display describing what is in the room; its graphics are unobtrusive, so as not to distract visitors, who can concentrate on the works on show while following the common thread of the visit within the room. Its content is in Italian, with the option of changing language directly on the monitor with a touch. The second is an interactive display that gives the visitor more information about the works on show: each painting can be viewed with its technical data sheet, further in-depth material can be accessed, and the image of the work can be enlarged to examine it in the finest detail. Here too, the descriptive content is in Italian with the option of changing language.

To accompany visitors through the rooms, a mobile app is available, installed on rentable tablets or on users’ own devices, which serve as video guides. The app, designed for different types of user (children, teenagers, adults), makes it possible to explore aspects and details of the works that cannot naturally be perceived. All this makes the visit to the Gallerie dell’Accademia interesting even for children and teenagers, who can learn the history of the building and of the works it contains while having fun.
The mobile application designed for children (ages 5-10) and teenagers (ages 10-16) features a series of games that accompany the visitor along a route through all 5 rooms (such as a treasure hunt, memory games and puzzles).

The tablet app for adults has additional features giving access to greater levels of depth. These include themed routes, among which any chosen on the welcome monitors are highlighted; augmented reality, through which it is possible to see what emerged during a work’s restoration or to view the work in three dimensions; and, for some works, in-depth videos.

Room 4 also contains a video wall composed of 9 modules, on which a virtual tour describes the evolution of the Accademia building and of the Insula over the centuries.

RESULTS

A project that has combined a major sponsorship initiative with technological innovation, allowing visitors to explore and enjoy the Galleries’ new wing according to the best standards of today’s museology.

Augmented ART

Damjanski, Bees and herring in red

Alongside the printing press, the internet represents the greatest democratization of knowledge and information in our history. While the work of the artists represented in the exhibit “Hello, we’re from the internet” is not restricted to web art, the artists are all of a generation inspired by the access, tools and freedom it has brought them.

Participating Artists:
Sarah Rothberg sarahrothberg.com
Gabriel Barcia-Colombo gabebc.com
Tara Sinn tara-sinn.com
Louise Foo louifoo.com
David Lobser dlobser.com
Scott Garner scottmadethis.net
Harald Haraldsson haraldharaldsson.com
Damjanski damjanski.com

More info @ momar.gallery/

The Danger Tree / A Groundbreaking Visual Arts Experience
1st – 31st July 2016
Riverside Unit, New Capital Quay, Greenwich, London SE10 9FR

Further UK exhibitions throughout Oct-Nov 2016 in Birmingham, Manchester and Liverpool.

The world’s first “augmentist”: 30-year-old artist Scarlett Raven unveils her ambitious, groundbreaking visual arts experience, The Danger Tree, this summer. Scarlett is the first oil painter to work in the exciting world of augmented reality, revealing the deeply personal process of creating her multi-layered, experiential art.
Working exclusively with Blippar, world leaders in the field of augmented art, she lets viewers use a smartphone app to unlock her poignant work, stripping away the countless layers of paint to reveal the creative journey beneath each painting.

SPOTIFY


ABOUT THE PROJECT

Campaign for Spotify and Síminn (Iceland Telecom). Created & directed by Harald Haraldsson. Produced by Wonwei.


Augmented exhibition/Augmented museum, animated paintings

ARART is an application that breathes life into objects.
ARART presents a new platform of expression connecting artistic creation with reality.

ARART is a new augmented-reality application created in Japan by Shiratori, Takeshi Mukai and Younghyo Bak; it transforms paintings or drawings into virtual animations in real time.

MoMAR:

A collective of eight internet artists transformed the Jackson Pollock room in the New York City Museum of Modern Art into their own augmented reality gallery—without the museum’s permission.


A transmedial story experience: the augmented book Sherwood Rise

The first of DAVE MILLER’s ‘future of the book’ projects: a transmedia story told over four days, through emails, newspapers and mobile phones running augmented reality.

It’s an update to the Robin Hood story, commenting on the state of things in the UK.

To read the story, please start here: http://davemiller.org/ar/truth/

http://davemiller.org/davemiller/sherwood_rise/


This is a research collaboration between Dave Miller, Dave Moorhead and Professor Alexis Weedon, at the University of Bedfordshire, UK.

The project is part of the UNESCO project ‘Crossing Media Boundaries: Adaptations and New Media Forms of the Book’.

For further information, please visit my project blog: http://augmentedwonder.blogspot.co.uk


Augmented educational and children’s books and augmented games

Animated Sandbox

Selfie Wall

Pic Me On

Digital Ball Wall

Virtual Tag

Dynamic Floor

Quantum Space

Street Run

The Augmented Book is the best way to tell any story, animating the pages as the viewer turns them. Keeping your visitors physically and visually engaged, this digital interactive blurs the lines between learning and playing. Perfect for children and adults alike, the content will quite literally jump off the page.

Call for Papers: Performance and VR Practice; New Work for New Environments

The International Journal of Performance Arts and Digital Media is seeking contributions for a special issue on Performance and VR Practice.

International Journal of Performance Arts and Digital Media

http://www.tandfonline.com/toc/rpdm20/current

DEADLINE: 31st January 2018
Full manuscripts should be submitted online: http://www.edmgr.com/rpdm/default.aspx
Publication: Autumn 2018 in Volume 14, Issue 2

Virtual Reality technologies have a long and established history. As Oliver Grau recognizes in his seminal text Virtual Art: From Illusion to Immersion (2003), “the idea of installing an observer in a hermetically closed-off image space of illusion did not make its first appearance with the technical invention of computerised virtual realities. On the contrary, VR forms part of the core of the relationship of humans to images” (Grau, 2003: 4-5). Such is our fascination with “creating illusionary spaces” (ibid.) that it is understandable that artists and technologists have spent the last few decades exploring how technologies such as VR can enable us to extend beyond our own reality towards immersive and illusionary theatrical experiences. Since the 1980s, when VR was first used in a performative context beyond its application in industry, artists and scholars have continued to challenge notions of what is ‘real’ and what is ‘virtual’; they have challenged concepts of transcendence, simulation, immersion, materiality, alternate realities, and hybrid or mixed realities, to name but a few. The use of VR has therefore been important for opening up perspectives and for developing new performance paradigms. Yet, whilst the use of VR over the last three decades has been focused and rigorous, it has not been as widely adopted as other technological tools (such as gesture/motion-sensing systems or live video and projection mapping systems) have been. This is largely due to practical concerns and the availability of such a complex technology. However, over the last few years, VR technologies have made a re-emergence, not only in terms of affordability, but because continued advances in design and usability are making it increasingly possible for artists to access and explore their potential.

In 2016, Sony released the PlayStation VR headset, enabling high-quality VR technologies to be accessed at home. Google Cardboard and other VR goggles enable users to access VR content through their smartphones, and 360 streaming is available on YouTube. In response to this, a greater number of performance practitioners have begun to explore how such VR technologies can be used. For instance, 2017 has seen the premiere of a number of new examples of VR performance work, some made by independent artists and others by established organisations, including AΦE’s Whist, Boleslavský and Júdová’s Dust (supported by Rambert/V&A), Makropol & Bombina’s The Shared Individual and a new VR film by the English National Ballet, inspired by Akram Khan’s Giselle. As its use continues to increase, this special issue wishes to examine how, and in what ways, VR is continuing to have an impact on current performance works. For example, some artists are using VR technologies to reimagine existing performance work, others to offer new perspectives on performance making, and still others to explore new relationships with their audiences.

This raises a number of interesting and timely questions relating to the impact and influence of VR technologies on creative processes and the nature of the work made. In what ways are current VR technologies helping artists to re-imagine their practice? What new work is being created, and is it having an impact on professional performance practice? In what ways have current VR technologies and practices extended concepts such as transcendence, simulation, immersion, materiality and alternate realities? How might the use of VR technologies open up new models and/or possibilities for collaboration between artists and technologists? What new performance environments are being created within VR, and how might this change how audiences access and engage with professional performance? How can VR enable audiences to engage with performance work in new ways, both collectively and individually? What can VR offer professional performance practice that a traditional ‘live’ experience cannot? What can we learn from emerging VR practice across other sectors to inform and extend professional performance practice as a whole?

We invite full essays of between 5,000 and 8,000 words or artistic position papers of between 2,000 and 3,000 words. We would particularly welcome practice-as-research contributions that experiment with content and form, while maintaining a rigorous enquiry into their disciplinary frameworks. Contributions might consider (but are not limited to) the following topics:

• New paradigms of performance offered by VR

• Live 360 streaming

• Choreographing/directing for VR

• VR and the collective experience of performance

• Role of the audience/participant in VR performance

• Participant experience

• Notions of performance

• Constructions of narrative

• Ethics of VR Performance

• VR and theatre design

Essays should be formatted according to the Routledge journal style.

Please contact Sophy Smith at <ssmith05@dmu.ac.uk> and/or Kerry Francksen at <kerryfk1@hotmail.co.uk> if you have any queries. 

Guest Editors: Prof. Sophy Smith and Dr Kerry Francksen, Directors of DAPPER (Digital Arts Performance Practice – Emerging Research), De Montfort University. DAPPER is a space where people working in all areas of digital performance can come together – practitioners, technologists, academics, organisations and all those in between – to capture, share, discuss, experiment and develop work and ideas relating to digital art and performance. It is our contention that whilst many individuals work within their own specialist area or sector, innovation occurs when we have the opportunity to collaborate and cooperate with others. Digital art performance practices are emerging in response to a fast-moving technological landscape, and as artists adapt to these new paradigms it is clear that digital practices are having a profound effect on the ways in which we make and understand our work. DAPPER aims to provide a space to focus on and interrogate the range of inter/transdisciplinary approaches, specifically from the perspective of artistic process and practice. DAPPER runs Knowledge Exchange and Professional Practice events. In 2017-18 these have included practice-based digital performance residencies at Watermans Arts Centre (as part of the Digital Weekender) and at De Montfort University, offering spaces for experimentation and dialogue for professional practitioners in open creative space, as well as two cross-sector development events exploring the practices of narrative development in virtual environments and the sharing of current practices.

Augmented reality and Virtual garments

Magic mirrors superimpose virtual clothing over viewers’ mirror image to let them evaluate fashion items without actually wearing them.

We contribute the Mirror Mirror system, which not only supports mixing and matching existing fashion items but also lets users design new items in front of the mirror and export the designs to fabrication devices.

Mirror Mirror makes use of spatial augmented reality and a mirror. Virtual garments are visible both on the body, for precise manipulation, and in the reflection, to obtain a third-person perspective.


Designing with Mirror Mirror is easy: select brushes and artwork on the mirror surface and apply them directly to the body. A background image is projected behind the user to support evaluating designs in simulated environments such as an office, a forest or a beach.

This novel but seemingly complicated optical setup, which combines a projector and a mirror-TV, results in a system that is easy to use and versatile thanks to its multiple “display” layers: on the body, on the mirror surface that shows the UI, on the reflected body and on the background behind the user, thereby allowing users to experience and evaluate designs in context.

Mirror Mirror reacts to the contemporary fast-fashion trend of disposable clothing and anticipates the rise of personal fabrication technologies, possible futures of flexible, color-changing e-ink garments and the sharing of designs over the Internet.


Demos

  • Maker Festival (Creative Korea Expo 2015), 26 Nov 2015 – 29 Nov 2015. Seoul, South Korea.
  • SIGGRAPH 2015 Studio, 9 Aug 2015 – 13 Aug 2015. Los Angeles, California, USA.
  • Dongdaemun Exhibition, 18 July 2015 – 25 July 2015. Seoul, South Korea.
  • CHI 2015 KAIST Visit, 24 Apr 2015. Daejeon, South Korea.