D.TV

360º Video and Interactive Storytelling

During the Open Fields conference Adnan presented “after.video – displaying video as theory and reference system”:

360º Video and Interactive Storytelling

Aigars CEPLITIS / Luis BRACAMONTES / Adnan HADZI / Arnas ANSKAITIS / Oksana CHEPELYK
Venue: The Art Academy of Latvia

Moderator: Chris HALES

Aigars CEPLITIS. The Tension of Temporal Focalization and Immersivity in 360 Degree 3D Virtual Space

The fundamental raison d’être for the production of immersive technologies is the attainment of an absolute psychosomatic and physical embodiment. The impasse, however, for audiovisual works shot in 360° space is that their current schemata, as well as their visual configuration, oppose the very type of experience they strive to deploy. To crack the code of a narrative design that would allow 360° films to offer a truly immersive experience, a number of 360° video prototypes have been created and tested against the backdrop of Seymour Chatman’s narrative theory as well as Marco Caracciolo’s theories of embodied engagement, in order to assess the extent of immersion in a variety of 360° narrative settings, zooming in on summary, scene, omission, pause, and stretch. This prototype simulation is further followed by testing audiovisual plates whose micronarratives are structured in a rhizomatic pattern. Classical films are edited elliptically, with cut and omission demarcated in cinema (the cut being an elliptical derivative) and with freeze frames favoured to pause for pure description. In 360° cinema, in turn, omission, cut, and pause do not operate properly; its cinematic preference for the here and now creates an inherent resistance to montage. Singulative narrative representation of an event (describing once what happened once) remains the principal core of spherical cinema, with repetitive representations deployed rarely, merely as special effects, or as a patterning device in flashbacks or thought-forming sequences through the post-digital editing style. Repetitive sequences in 360° become particularly disturbing when their digital content is viewed using VR optical glasses instead of desktop computers, and such contrasts answer more fundamental questions: whether montage is detrimental in 360° film, what types of story material and genre are more suitable for 360° cinema, and how we gauge the level of embodiment. Finally, the residual analysis of the aforementioned prototype simulation brings to the fore a rhizomatic narrative kinetics (the fusion of the six Deleuzoguattarian principles with the classic narrative canons) that should become, de facto, the language of 360°, if embodiment is to be the key.

Biography. Aigars Ceplītis is the Creative Director of the Audiovisual Media Arts Department at RISEBA University, where he teaches Advanced Film Editing Techniques and Film Narratology. He is also a PhD candidate at the New Media MPLab, Liepaja University, where he is investigating novel storytelling techniques for 360-degree cinema. Aigars has worked as a film editor on the feature films “The Aunts”, “The Runners”, “A Bit Longer” and “Horizont”, and on the 20-episode TV miniseries “The Secrets of Friday Hotel”. He formerly served as an office manager and film editor for Randal Kleiser, an established Hollywood director best known for hits such as “Grease” and “The Blue Lagoon”. While in Los Angeles, Aigars headed a film and video programme for disadvantaged children of Los Angeles under the auspices of the Stenbeck family, the owners of MTG. Aigars holds an M.F.A. in Film Directing from the California Institute of the Arts and a B.A. in Art History from Lawrence University in Wisconsin.

Luis BRACAMONTES. Teleacting the story: User-centered narratives through navigaze in 360º video

This research explores the possibilities of a user-centered narrative strategy for 360º video through a new feature called “navigaze”. Navigaze is a feature introduced by the Swedish startup SceneThere, and it allows a controlled level of agency in the storytelling that sits on the borderline between gaming and film. “Teleaction” here is understood from Manovich’s perspective of “acting over distance in real time”, as opposed to “telepresence”, which implies only “seeing at a distance”. Immersive storytelling for Virtual Reality and 360º video presents a new challenge for creators: the death of the director, at least in the traditional sense seen in other mediums such as theatre or film, as the medium’s frameless quality and highly active nature demand a fluid and flexible narrative.
Navigaze allows the user to inhabit a story instead of just witnessing it. By including a space-warp feature reminiscent of Google Street View, users can explore the “virtual worlds” and unravel pieces of the story on their own by gazing at the blue hotspots that transport them to the next location within the same world. Thus, the story constitutes a series of puzzle pieces that each person can put together as they want, creating a unique narrative experience, similar to Julio Cortázar’s game-changing novel “Hopscotch” (1963).
Focusing on two pieces by SceneThere, “Voices of the Favela” (2016) and “The Borderland” (2017), this research delves into the evolution of storytelling in VR and 360º video and the early stages of the creation of their own narrative language.
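As a loose illustration of the structure this implies (a sketch only, not SceneThere’s actual implementation), a navigaze-style story can be modelled as a graph of 360º scenes whose gaze-activated hotspots lead to other scenes; the scene and hotspot names below are invented:

```python
# A minimal sketch of a navigaze-style branching story: each 360-degree scene
# lists the hotspots a viewer can gaze at, and each hotspot leads to another
# scene. Scene and hotspot names are invented for illustration.

STORY = {
    "rooftop": {"stairwell": "alley", "antenna": "market"},
    "alley":   {"doorway": "market", "ladder": "rooftop"},
    "market":  {"stall": "alley", "exit": "rooftop"},
}

def walk(start, gazes):
    """Follow a sequence of gazed-at hotspots and return the visited scenes."""
    path, scene = [start], start
    for hotspot in gazes:
        scene = STORY[scene].get(hotspot, scene)  # unknown hotspot: stay put
        path.append(scene)
    return path

# Two viewers gazing at different hotspots assemble different narratives
# from the same pieces.
print(walk("rooftop", ["stairwell", "doorway", "exit"]))
print(walk("rooftop", ["antenna", "stall", "ladder"]))
```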

Biography. Luis Bracamontes is a narrative designer and writer specializing in storytelling for VR and AR. He is an intern at the Virtual Reality start-up VRish in Vienna, and he is currently pursuing an Erasmus Mundus Joint Master’s Degree in “Media Arts Cultures” between Danube University Krems, Aalborg University and the University of Łódź. He has a B.A. in Communication Sciences with a specialization in Marketing and has worked for over six years in performing arts and literature. In 2014, he was awarded the “Youth Achievement Award for Art & Culture”, an honorary award given by the City Hall of Morelia to the most promising and active young people with an outstanding trajectory in art and culture, for the work of his production company “Ala Norte”. Since 2015, he has been a member of the Society of Writers of Michoacán (SEMICH). In 2016, he worked as an innovation and marketing consultant in the VIP Fellowship by Scope Group and the Ministry of Finance of Malaysia, in Kuala Lumpur. His recent research includes a paper on hybrid VR narratives supervised by Oliver Grau, and a research project on post-digital archive experiences supervised by Morten Sondergaard, presented at the NIME (New Interfaces for Musical Expression) International Conference 2017 in Copenhagen.

Adnan HADZI. after.video – displaying video as theory and reference system

After video culture rose during the 1960s and 70s with portable devices like the Sony Portapak and other consumer-grade video recorders, it subsequently underwent the digital shift. With this evolution the moving image inserted itself into broader, everyday use, but also extended its patterns of effect and its aesthetic language. Film and television alike have transformed into what is now understood as media culture. Video has become pervasive, importing the principles of “tele-” and “cine-” into the human and social realm, thereby also propelling “image culture” to new heights and intensities. YouTube, emblematic of network and online video, marks a second transformational step in this medium’s short evolutionary history. The question remains: what comes after YouTube?
This paper discusses the use of video as theory in the after.video project (http://www.metamute.org/shop/openmute-press/after.video), reflecting the structural and qualitative re-evaluation it aims at on the level of design and organisation. In accordance with the qualitatively new situation video is set in, the paper discusses a multi-dimensional matrix which constitutes the virtual logical grid of the after.video project: a matrix of nine conceptual atoms is rendered into a multi-referential video-book that breaks with the idea of linear text, read from left to right, top to bottom, diagonally and in ‘steps’. Unlike previous experiments with hypertext and interactive databases, after.video attempts to translate online modes into physical matter (a micro computer), thereby reflecting logics of new formats otherwise unnoticed. These nine conceptual atoms are then re-combined differently throughout the video-book, rendering a dynamic, open structure and allowing access to the after.video book over an ‘after_video’ WiFi SSID.
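To make the grid described above a little more concrete, here is a loose sketch (not the project’s own code) of nine placeholder atoms arranged in a 3×3 matrix and read along several of the paths mentioned:

```python
# A sketch of the video-book's logical grid: nine placeholder "atoms" arranged
# in a 3x3 matrix, read along several of the paths mentioned above.

ATOMS = [["a1", "a2", "a3"],
         ["a4", "a5", "a6"],
         ["a7", "a8", "a9"]]

def left_to_right(m):
    return [atom for row in m for atom in row]

def top_to_bottom(m):
    return [m[r][c] for c in range(3) for r in range(3)]

def diagonal(m):
    return [m[i][i] for i in range(3)]

def steps(m):
    # a staircase reading: right, down, right, down
    coords = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]
    return [m[r][c] for r, c in coords]

for reading in (left_to_right, top_to_bottom, diagonal, steps):
    print(reading.__name__, "->", reading(ATOMS))
```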

Biography. Dr. Adnan Hadzi has been a regular at Deckspace Media Lab for the last decade, a period over which he has developed his Goldsmiths PhD, based on his work with Deptford.TV. Deptford.TV is a collaborative video editing service hosted in Deckspace’s racks, based on free and open source software and compiled into a unique suite of blog, CVS, film database and compositing tools. Through Deptford TV and Deckspace TV he maintains a strong profile as a practice-led researcher. Directing the Deptford TV project requires an advanced knowledge of current developments in new media art practices and the moving image across different platforms. Adnan runs regular workshops at Deckspace. Deptford.TV / Deckspace.TV is less TV and more film production, but it has tracked the evolution of media toolkits and editing systems such as those included in the excellent PureDyne Linux project.
Adnan is co-editing and producing the after.video video book, exploring video as theory and reflecting upon networked video as it profoundly re-shapes medial patterns (YouTube, citizen journalism, video surveillance etc.). This volume revolves around a society whose re-assembled image sphere evokes new patterns and politics of visibility, in which networked and digital video produces novel forms of perception, publicity and even (co-)presence. A thorough, multi-faceted critique of media images that takes up perspectives from practitioners, theoreticians, sociologists, programmers, artists and political activists seems essential, and the result is a unique publication which reflects upon video theoretically while attempting to fuse form and content. http://orcid.org/0000-0001-6862-6745

Arnas ANSKAITIS. The Rhetoric of the Alphabet

Through practice and research I aim to reflect on the connections between language, perception, writing and non-writing.
Jacques Derrida wrote in his seminal book Of Grammatology: “Before being its object, writing is the condition of the episteme”. I am curious: to what extent is a written text still (or should it be) the condition of knowledge in artistic research? Does artistic research in general belong to and depend on this understanding of science? Would it be possible to do research without writing? How then could one share the findings and outcomes of such research with the public and other researchers?
Writing interests me not only in the context of language, but also from the position of handwriting. How did the letters of the alphabet emerge? It seems they were shaped by a human hand. What would letters look like if they were written not on a flat sheet of paper, but in simulated three-dimensional space? In an attempt to answer this self-imposed question, I have created 3D models of cursive letters and exhibit them as video projections. In each visualization an imaginary writing implement produces an uninterrupted trace – a stroke on the writing plane. On this digitally simulated stroke – the projection plane – a stream of texts and images is projected.
I will attempt to combine two sides of artistic research (practice and theory) through writing – a writing system as an art project. Part of the doctoral thesis could be written and presented using this system.
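Purely as an illustration of the digitally simulated stroke described above (not the artist’s actual pipeline), such an uninterrupted trace can be pictured as a sampled parametric curve in three dimensions:

```python
# A toy sketch: sample an uninterrupted looping stroke in 3D space, of the kind
# an imaginary writing implement might trace for a cursive letter. The curve
# chosen here is arbitrary; a real letterform would use digitised pen data.
import math

def stroke(samples=200):
    points = []
    for i in range(samples):
        t = 4 * math.pi * i / (samples - 1)   # two loops of the curve
        x = t / (4 * math.pi)                 # steady left-to-right motion
        y = 0.25 * math.sin(t)                # up-and-down of the hand
        z = 0.10 * math.cos(2 * t)            # depth only 3D space allows
        points.append((x, y, z))
    return points

path = stroke()
print(len(path), "points, first:", path[0], "last:", path[-1])
```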

Biography. Arnas Anskaitis (1988) is a visual artist, a lecturer and a PhD student at the Vilnius Academy of Arts. He employs a variety of media in his work, but always starts from a direct dialogue with the site and context in which he is working. His work has been shown at the Riga Photography Biennial (2016); National Art Museum of Ukraine, Kiev (2016); 10th Kaunas Biennale (2015); Contemporary Art Centre, Vilnius (2014); 16th Tallinn Print Triennial (2014); National Gallery of Art, Vilnius (2012); Gallery Vartai, Vilnius (2012), and in other projects and exhibitions.

Oksana CHEPELYK. Virtual Reality and 360-degree Video Interactive Narratology: Ukrainian Case Study.

The aim of this thesis is to present some Ukrainian initiatives developing VR and 360-degree interactive video filmmaking: SENSORAMA in Kyiv and MMOne in Odesa. SENSORAMA, an immersive media lab for VR, grows the VR/AR ecosystem in Ukraine by supporting talent with infrastructure, education, mentorship and investments.
An interactive documentary, «Chornobyl 360», created by the founders of Sensorama Lab and filmed in a spherical 360-degree view at the Chernobyl Nuclear Power Plant, the site of the 1986 Chernobyl disaster, has proven to be in demand on the global market. Immersive technologies are used to change human experience in fields that matter to millions: VR therapy research, healthcare etc. SENSORAMA is based in UNIT.city, a brand new tech park in Kyiv.
The company MMOne from Odesa has created the world’s first three-axis virtual reality simulator, in the form of a chair attached to an industrial robot-like arm that moves in response to the action in a video game called Matilda. MMOne hopes the invention will take the global gaming industry in some entirely new directions. The startup debuted Matilda in October 2015 at Paris Games Week in France, presenting its device in cooperation with the multinational video game developer Ubisoft, which created a racing game especially for Matilda called “Trackmania.” Since the Paris games exhibition, MMOne has had several big companies from the U.S. IT community ask to try out the chair, including YouTube, Opera Mediaworks (the world’s leading mobile advertising platform), Facebook’s Instagram, and Oculus LLC.
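As a much-simplified illustration of how a simulator of this kind might map game telemetry onto chair motion (this is not MMOne’s actual control software, and the gains and limits are invented), accelerations can be turned into clamped tilt commands:

```python
# A toy motion-cueing sketch: turn in-game accelerations into pitch/roll tilt
# angles for a three-axis chair, clamped to a safe range. The scaling factor
# and limits are invented for illustration.

MAX_TILT_DEG = 25.0   # assumed mechanical limit
GAIN = 2.5            # degrees of tilt per m/s^2, chosen arbitrarily

def clamp(value, limit):
    return max(-limit, min(limit, value))

def chair_command(longitudinal_acc, lateral_acc, yaw_rate):
    """Map game telemetry (m/s^2, deg/s) to pitch, roll and yaw commands."""
    pitch = clamp(-GAIN * longitudinal_acc, MAX_TILT_DEG)  # braking tips the chair forward
    roll = clamp(GAIN * lateral_acc, MAX_TILT_DEG)         # cornering tips it sideways
    yaw = clamp(yaw_rate, 90.0)                            # pass yaw through, limited
    return pitch, roll, yaw

print(chair_command(longitudinal_acc=-6.0, lateral_acc=3.0, yaw_rate=20.0))
```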

Biography. Dr. Oksana Chepelyk is a leading researcher in the New Technologies Department at the Modern Art Research Institute of Ukraine, author of the book “The Interaction of Architectural Spaces, Contemporary Art and New Technologies” (2009) and curator of the IFSS, Kiev. Oksana Chepelyk studied at the Art Institute in Kiev, followed by a PhD course in Moscow, Amsterdam University, the New Media Study Program at the Banff Centre, Canada, Bauhaus Dessau, Germany, and a Fulbright Research Program at UCLA, USA. She has exhibited widely internationally and has received the ArtsLink 1997 Award (USA), FilmVideo99 (Italy), the EMAF 2003 Werkleitz Award (Germany), the ArtsLink 2007 Award (USA) and the Artraker Award 2013 (UK). Residencies: CIES, CREDAC and the Cité Internationale des Arts in Paris (France); MAP, Baltimore (USA); ARTELEKU, San Sebastian (Spain); FACT, Liverpool (UK); Bauhaus Weimar (Germany); SFAI, Santa Fe, NM (USA); DEAC, Budva (Montenegro). She has been awarded grants in France, Germany, Spain, the USA, Canada, England, Sweden and Montenegro. Her work has been shown at MoMA, New York; MMA, Zagreb, Croatia; the German Historical Museum, Berlin and Munich, Germany; the Museum of Art History, Vienna, Austria; MCA, Skopje, Macedonia; MJT, LA, USA; the Art Arsenal Museum, Kyiv, Ukraine; “DIGITAL MEDIA Valencia”, Spain; MACZUL, Maracaibo, Venezuela; “The File” Electronic Language International Festival, São Paulo, Brazil; and XVII LPM 2016, Amsterdam, Netherlands.

Virtualities and Realities

Adnan presented the after.video project at the Open Fields conference; the abstract and his biography are reproduced in the session listing above.

Valletta 2018 discusses Cultural Mapping in local and international contexts

Deptford.TV moved to Malta and became Dorothea.TV, still D.TV. We took part in the Cultural Mapping Conference.

The second Valletta 2018 international conference, Cultural Mapping: Debating Spaces & Places, opened at the Mediterranean Conference Centre in Valletta this morning. The conference focuses on cultural mapping, the practice of collecting and analysing information about cultural spaces and resources within a European and Mediterranean context.

Delivering the opening address, Valletta 2018 Foundation Chairman Jason Micallef spoke in light of the Syrian conflict, which is resulting in the widespread destruction of cultural resources, such as heritage sites, in the Mediterranean region.

“Against this background cultural mapping takes on a renewed importance, not only in preserving the existing heritage of communities, but particularly in disseminating this knowledge through new, global channels and technology, forging new relationships between people across the world,” Jason Micallef said. “The examples of cultural mapping presented during this conference will allow us to dream of new ways in which the knowledge and understanding of our shared histories and our shared futures can be spread across the world”.

Bringing together a number of international academics, researchers, cultural practitioners and artists, the conference will explore various exercises of cultural mapping taking place across the world. With the subject being relatively new to Malta, speakers will be discussing the role of cultural mapping and how it can influence local cultural policy, artistic practice, heritage and cultural identity, amongst others.

The conference is being organised following last April’s launch of www.culturemapmalta.com – the online map exhibiting the data collected during the first phase of the Cultural Mapping project, led by the Valletta 2018 Foundation. Speakers include experts, academics, researchers and activists within the fields of tangible and intangible heritage, sustainable development, and cultural policy, from across Europe, the Mediterranean and beyond. Keynote speeches will be delivered by Prof. Pier Luigi Sacco, a cultural economist who will be presenting examples of cultural mapping taking place in Italy and Sweden, and Dr Aadel Essaadani, the Chairperson of the Arterial Network, a Morocco-based organisation that brings together art and culture practitioners across the African continent.

The conference is being organised by the Valletta 2018 Foundation in collaboration with the Centre for Social Studies (CES), University of Coimbra. The Creative Europe Desk, the European Commission Representation Office, the EU-Japan Fest Committee, the French Embassy, Fondation de Malte and Spazju Kreattiv are also supporting the event.

https://www.culturemapmalta.com/#/
https://valletta2018.org/cultural-mapping-publication/
https://valletta2018.org/news/cultural-mapping-conference-registration-now-open/
https://valletta2018.org/events/subjective-maps-hamrun-workshop/
https://valletta2018.org/events/subjective-maps-birzebbuga-workshop/
https://valletta2018.org/events/subjective-maps-valletta-workshop/
https://valletta2018.org/news/e1-5m-awarded-to-valletta-2018-european-capital-of-culture/
https://valletta2018.org/objectives-themes/
https://valletta2018.org/cultural-programme/naqsam-il-muza/
https://valletta2018.org/events/naqsam-il-muza-gzira/
https://valletta2018.org/news/naqsam-il-muza-art-on-the-streets-of-marsa-and-kalkara/
http://heritagemalta.org/
https://muza.heritagemalta.org/
https://valletta2018.org/events/naqsam-il-muza-marsa/
https://valletta2018.org/events/naqsam-il-muza-kalkara/
https://valletta2018.org/events/naqsam-il-muza-the-art-of-sharing-stories/
https://valletta2018.org/news/naqsam-il-muza-the-art-of-sharing-stories/
https://valletta2018.org/organised_events/muza-making-art-accessible-to-all/
https://valletta2018.org/news/nationwide-participation-of-the-valletta-2018-cultural-programme/
https://valletta2018.org/events/psychoarcheology-fragmenta-event-with-erik-smith/
https://valletta2018.org/events/fragmenta-imhabba-bl-addocc/
https://valletta2018.org/events/fragmenta-from-purity-to-perversion/
https://valletta2018.org/events/fragmenta-untitled-ix-xemx/
https://valletta2018.org/events/fragmenta-outside-development-zone-odz/
https://fragmentamalta.com/
https://valletta2018.org/events/fragmenta-hortus-conclusus/
https://valletta2018.org/events/film-screening-blind-ambition-and-qa-with-hassan-khan/
https://valletta2018.org/events/get-your-act-together-science-in-the-city/
https://valletta2018.org/events/notte-bianca/
https://valletta2018.org/events/malta-book-festival/
https://valletta2018.org/events/wrestling-queens/
https://valletta2018.org/events/rima-digital-storytelling-workshop/
https://valletta2018.org/cultural-programme/recycled-percussion/
http://latitude36.org/
https://valletta2018.org/latitude-36/
https://valletta2018.org/news/latitude-36-call-for-maltese-living-abroad/
https://valletta2018.org/cultural-programme/latitude-36/
https://valletta2018.org/bar-europa-is-good-for-the-spirit/

Alexa, Who is Joybubbles?

Prix des Beaux-Arts Genève

Salle Crosnier opening hours (public holidays included)
Tuesday–Friday   15:00 – 19:00
Saturday              14:00 – 18:00

Thursday 2 November: open until 20:30.

Alexa, Who is Joybubbles is the result of a collaboration between !Mediengruppe Bitnik and the electronic music composer Philippe Hallais. It is a song that revives the memory of Joybubbles, the first phone phreak, and imagines his intervention in today’s network of connected household devices and his encounter with personal assistant applications.

Phone phreaks, whose activity dates back to the sixties, were avid and mischievous explorers of the telephone network. The network fascinated them because it was the first network, in fact the first computer, and because it connected the whole world. One of the pioneers, and one of the most gifted among them, was Joybubbles (25 May 1949 – 8 August 2007), born Josef Carl Engressia Jr. in Richmond, Virginia, USA. Blind from birth, he began to take an interest in the telephone at the age of four. While still very young, he had already discovered how to make calls for free. He had perfect pitch and was able to whistle 2600 hertz, the frequency the operators used to route calls and to make connections and disconnections. Joybubbles was thus one of the first to explore this network and learn its codes, with nothing but his breath. To produce this tone, other phreaks used devices they built themselves. Joybubbles acted as a catalyst, uniting phreaks with diverse activities into one of the first virtual social networks. Although he was blind, the telephone gave him access to a network of people around the world who shared his interests. After the announcement of his expulsion from university in 1968 and his conviction in 1971 for telephone offences, he became the nerve centre of the movement. The phreaks discovered that they could use certain telephone switches, such as those used for conference calls, so that the geographically dispersed group could discuss and exchange ideas and knowledge by calling the same number, thereby creating a social network well before the internet.
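For the curious, the 2600 Hz tone itself is trivial to synthesise today. The short sketch below writes a one-second sine tone to a WAV file using only Python’s standard library; the file name is an arbitrary choice.

```python
# Write a one-second 2600 Hz sine tone -- the in-band supervisory frequency
# Joybubbles could whistle -- to a WAV file, using only the standard library.
import math
import struct
import wave

RATE = 44100      # samples per second
FREQ = 2600.0     # hertz
SECONDS = 1.0

frames = bytearray()
for n in range(int(RATE * SECONDS)):
    # 16-bit signed sample at half of full scale
    sample = int(0.5 * 32767 * math.sin(2 * math.pi * FREQ * n / RATE))
    frames += struct.pack("<h", sample)

with wave.open("tone_2600hz.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(bytes(frames))
```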

!Mediengruppe Bitnik and Philippe Hallais have looked into the methods of these early hackers in order to open a dialogue with Alexa and her fellow intelligent personal assistants. These devices, semi-autonomous entities, are just beginning to colonise our homes. They are part of a new ecosystem of appliances connecting physical space to virtual space and controlled by voice. These devices have a certain capacity for action; they act according to a set of rules and algorithms. Those algorithms and rules are not disclosed to the user, and neither are the data the devices collect. The user therefore has no grip on how these devices work and cannot assess their bias. They cannot know which data about them is collected by the device, or which information is extracted from it and then shared with other devices and other companies.

What will Joybubbles do with these voice-controlled devices? Who is acting when these devices act? Is it really me ordering food when my fridge decides to stock up? And what would happen if it were hacked and sent spam instead? When I surround myself with these semi-autonomous devices, is my capacity for action extended or, on the contrary, diminished? What happens when one of these devices is triggered by the sound of a song playing on the radio?

The music here refers to the great influence that mobile phones have had on contemporary popular music such as dancehall and ragga. Since the 1990s, when these phones began to become part of our lives, they have influenced the way music is produced. From the start, the mobile phone acted as a portable sound system, first by using popular songs as ringtones, then through the internet access provided by smartphones. With electronic instrumentation gaining ground since the 1980s, the sound of dancehall has changed considerably, becoming increasingly characterised by instrumental sequences (or “riddims”). The typical sounds of mobile phones have become a genuine source of samples. Alexa, Who is Joybubbles is a homage to the use of the telephone in dancehall, and to Joybubbles. ♥‿♥

Philippe Hallais is an electronic music composer born in 1985 in Tegucigalpa, Honduras, who lives in Paris. His music plays with the reappropriation of sonic clichés, media folklore and the multiplicity of musical languages associated with dance subcultures. He has released three albums to date under the pseudonym Low Jack (Garifuna Variations, L.I.E.S, 2014; Sewing Machine, In Paradisum, 2015; Lighthouse Stories, Modern Love, 2016) and one under his own name, An American Hero, on the Modern Love label in 2017. In concert he has collaborated with the musicians Ghedalia Tazartès and Dominick Fernow / Vatican Shadow, and he has created performances for the musée du Quai Branly, the Centre culturel suisse and the Fondation d’entreprise Ricard in Paris.

The duo !Mediengruppe Bitnik (read: not mediengruppe bitnik) lives and works in Berlin and Zurich. The two contemporary artists make the internet both their subject and their working material. Their practice starts from the digital in order to transform physical spaces, and it regularly uses an intentional loss of control to challenge established structures and mechanisms. The works of !Mediengruppe Bitnik pose fundamental questions about contemporary issues.

!Mediengruppe Bitnik consists of the artists Carmen Weisskopf and Domagoj Smoljo. Their accomplices are the London-based filmmaker and researcher Adnan Hadzi and the reporter Daniel Ryser. They have received, among other awards, the Swiss Art Award, the Migros New Media Jubilee Award, the Golden Cube of the Kassel Dokfest and an Honorary Mention at Ars Electronica.


The Prix de la Société des Arts • Visual Arts • Geneva 2017
(Calame • Diday • Harvey • Neumann • Spengler • Stoutz)
is awarded to !Mediengruppe Bitnik.

!Mediengruppe Bitnik is a duo made up of Carmen Weisskopf (*1976, Switzerland) and Domagoj Smoljo (*1979, Croatia). The two artists live and work in Zurich but are currently based in Berlin. !Mediengruppe Bitnik uses the internet as both the subject and the material of its artistic work, starting from the digital in order to transform physical space. The duo tackles current issues and often employs strategies of deliberate loss of control that challenge existing structures and devices.

This prize is awarded on the basis of research carried out entirely independently, without a competition, by the members of the jury, convened this year by Felicity Lunn and composed of: Ines Goldbach (director, Kunsthaus Baselland), Valerie Knoll (director, Kunsthalle Bern), Boris Magrini (art historian and independent curator; independent expert for Pro Helvetia, visual arts section), Laurent Schmid (artist and professor at HEAD, Geneva; head of the Master Arts visuels – Work.Master) and Séverine Fromaigeat (art historian and critic; member of the Exhibitions Committee of the Classe des Beaux-Arts of the Société des Arts, Geneva).

48 Hours MIND LESS

STWST48x3
48 Hours MIND LESS
8 – 10 September 2017

Under the motto MIND LESS, STWST48x3, the third edition of STWST48, offers a 48-hour showcase art extravaganza of the expanding kind. Meaning-free information, open states of mind, an infolab after new media, quasi-coordinates of expanded contexts, funky fungi, digital physics and total meltdown: STWST48x3 MIND LESS brings new art contexts developed in recent years in and around the Linz Stadtwerkstatt. Watch out: in 2017 the MIND LESS Stadtwerkstatt once again operates under the directive of New Art Contexts and autonomous structures.

Start: Friday, 8 September, 2 pm
End: Sunday, 10 September, 2 pm

REVIEW – ALL VIDEO LINKS TO THE PROJECTS

MAZI: CAPS community workshop in VOLOS

The CAPS Community workshop is taking place on 12th July.

We’re still working on the agenda. Below you’ll find a first overview of the activities for each day:

10/7/2017

MAZI Workshop (all day). Hands-on experience tutorial: learn how to use and set up the MAZI toolkit:

09:30 – 09:45
Welcome and introduction to MAZI

By Thanasis Korakis

09:45 – 10:30
Keynote talk: Digital Commons, Urban Struggles and the Right to the City?

Andreas Unteidig and Elizabeth Calderon Luning

10:30 – 10:45    Coffee break

10:45 – 11:30
MAZI stories
  • 10h45-11h00 Creeknet ‘Bridging the DIY networks of Deptford Creek’ (Mark Gaved and James Stevens)
  • 11h00-11h15 Living together: realistic utopias in Zurich (Ileana Apostol and Philipp Klaus)
  • 11h15-11h30 Unmonastery: a 200 year plan (Michael Smyth and Katalin Hausel)
11:30 – 13:00
The MAZI toolkit and its applications

Harris Niavis and Panayotis Antoniadis

13:00 – 14:00    Lunch break

14:00-17:00
Hands-on experience with the MAZI toolkit and participatory design

The audience will be split into 4 (or more) groups. Each group will have a MAZI leader, one of the partners, guiding the whole process of the MAZI toolkit deployment. MAZI leaders will describe to each group the context in which they are going to configure their MAZI Zone. Some possible scenarios/contexts in the area around the event will be defined, where groups could deploy MAZI Zones and also support the CAPS event throughout the week.
*Please bring your laptop with you, or any other equipment (Raspberry Pi 3, microSD cards etc.), so you can actively participate in the workshop.
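A tiny helper of the kind participants might use to check that a freshly configured zone is serving pages; the address below is a placeholder, not the MAZI toolkit’s documented URL:

```python
# Check whether a locally deployed MAZI zone (or any local web service) is
# reachable. The URL is a placeholder -- substitute the address your own
# deployment actually uses.
import urllib.error
import urllib.request

ZONE_URL = "http://10.0.0.1/"   # placeholder address for the local zone

def zone_is_up(url=ZONE_URL, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Zone reachable:", zone_is_up())
```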

17:00 – 18:00
Wrap-up of the workshop

11/7/2017

  • MAZI Review (closed meeting – all day). Download the agenda here (PDF)
  • HACKAIR – Project Review Meeting (closed meeting – all day)
  • Greek CAPS & H2020 cluster workshop  (15:00 – 18:00)

CHAIN REACT Workshop – Hands on experiences

12/7/2017

2nd CAPS Community workshop

13/7/2017

EMPAVILLE Role Play (run by EMPATIA Project)

11:00 – 12:30

Empaville is a role-playing game that simulates a gamified Participatory Budgeting process in the imaginary city of Empaville, integrating in person deliberation with digital voting. For more details visit EMPAVILLE ROLE PLAY (https://empaville.org)
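As a rough sketch of what a digital participatory-budgeting tally can look like (not the EMPATIA platform’s actual rules), approval votes can be counted and projects funded greedily until the budget runs out; the projects, costs and ballots below are invented:

```python
# A toy participatory-budgeting tally: count approval votes per project, then
# fund the most-supported projects greedily while the budget lasts.
# Projects, costs and ballots are invented for illustration.

PROJECTS = {"bike lanes": 40000, "playground": 25000, "library wifi": 10000}
BALLOTS = [
    {"bike lanes", "library wifi"},
    {"playground"},
    {"playground", "library wifi"},
    {"bike lanes", "playground"},
]
BUDGET = 60000

votes = {name: sum(name in ballot for ballot in BALLOTS) for name in PROJECTS}

funded, remaining = [], BUDGET
for name in sorted(PROJECTS, key=lambda n: votes[n], reverse=True):
    if PROJECTS[name] <= remaining:      # skip anything the budget cannot cover
        funded.append(name)
        remaining -= PROJECTS[name]

print("votes:", votes)
print("funded:", funded, "| budget left:", remaining)
```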

PROFIT Workshop (Open meeting – half day)

13:00 – 17:00. Download the agenda here (PDF)

  • Project introduction (M.Konecny – EEA)
  • Financial literacy and economic behaviour for financial stability and open democracy (G.Panos – UoGlasgow)
  • Promoting financial awareness and stability (Artem Revenko – Semantic web company) Presentation available here (PDF)
  • Textual analysis in economics and finance (I.Pragidis – DUTH) Presentation available here (PDF)
  • What’s ethical finance (Febea) Presentation available here (PDF)
  • Walkthrough of the PROFIT platform
    • Discussion in small groups focused on different aspects of the project & platform
  • Conclusions and wrap-up

Note: As this is an interactive event please bring a laptop so you can contribute to the research effort.

14/7/2017

CROWD4ROAD hands on experiences (open meeting – all day)

09:00 – 09:30
Crowd4roads: crowdsensing and trip sharing for road sustainability

Presentation by the Crowd4roads consortium

9:30 – 10:00
Collaborative monitoring of road surface quality

Presentation by University of Urbino

10:00 – 10:30
Car pooling and trip sharing

Presentation by Coventry University

10:30 – 11:00    Coffee break

11:00 – 11:30
Hands on the first release of the Crowd4roads app
11:30 – 12:15
Hands on Crowd4roads open data
12:15 – 13:00
Gamification strategies for engagement

Presentation by Coventry University

  • Closing Plenary – Wrap-up and Greek cocktail (5-7 pm)

The Next Generation Internet (NGI) initiative, launched by the European Commission in autumn 2016, aims to shape the future internet as an interoperable platform ecosystem that embodies the values that Europe holds dear: openness, inclusivity, transparency, privacy, cooperation, and protection of data. The NGI should ensure that the increased connectivity and the progressive adoption of advanced concepts and methodologies (spanning across several domains such as artificial intelligence, Internet of Things, interactive technologies, etc.) drive this technology revolution, while contributing to making the future internet more human-centric.

This ambitious vision requires the involvement of the best Internet researchers and innovators to address technological opportunities arising from cross-links and advances in various research fields ranging from network infrastructures to platforms, from application domains to social innovation.

Live: Algorave @ Archspace in London

Our friend Mathr performed at Archspace in London. Jack Chutter wrote the following review in ATTN:Magazine:

When I initially heard about live-coding, I was quick to presume that it was beyond my technical grasp. After all, surely this music was the reserve of those who have spent their lives immersed in programming, hidden behind a wall of education and natural computer aptitude, forbidden to the layman – me, for example – who should probably stick to more tangible forms of instrumental causality (hitting a drum, pressing a key). Yet coupled with my recent interview with Belisha Beacon (who went from code novice to Algorave performer within a matter of months), my experience tonight has convinced me otherwise. That’s not to say that Algorave doesn’t regularly slip beyond my technical comprehension, folding code over itself to produce spasmodic, biomechanical bursts of light and sound, ruptured by compounded multiplications and tangled up in polymetric criss-cross. Yet with the code projected upon a large screen in front of me, I see that these transparent mechanics are often painfully easy to understand. A line of code is activated: a sample starts. An empty space is deleted: a rhythmic pattern shifts one step. I witness the preparation and execution of “sudden” bass drops; I am exposed to the application of effects and shifts in pitch. At some points at least, I totally get this.
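The transparency described above can be made concrete in a few lines of Python (a toy re-implementation of the idea, not the software used on the night): a rhythm is just a string of steps, and deleting a character visibly shifts the pattern.

```python
# A toy illustration of the live-coding idea described above: a rhythm is a
# string in which "x" triggers a sample and "-" is silence. Editing the string
# -- e.g. deleting one character -- immediately changes the pattern.

def schedule(pattern, bars=2):
    """Expand a step pattern into the beats on which a sample would fire."""
    hits = []
    for bar in range(bars):
        for step, char in enumerate(pattern):
            if char == "x":
                hits.append(bar * len(pattern) + step)
    return hits

print(schedule("x--x--x-"))   # the running pattern
print(schedule("x-x--x-"))    # one character deleted: the rhythm shifts
```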

Each set is accompanied by digital projections, most of which are live-coded by the evening’s two visual artists (Hellocatfood and Rumblesan). As I walk into Archspace, the screen is brimming with these vibrant, hyperventilating spheres, all spinning at incredible speed, expanding and contracting as though set to burst. Meanwhile, the electronic pulses of Mathr feel way overcharged, bloating beyond their own mathematical confines, bleating like the alarms that herald the opening and closing of space shuttle doors or slurred bursts of laser beam. A rhythm is present but it never walks in a straight line. It’s constantly correcting itself incorrectly, sliding and slanting between shapes that never properly fit, modulating between various flavours of imbalance.

The sounds remain very much askew for Calum Gunn, although now it’s like someone playing a breakbeat remix of a Slayer track on a scratched disc, choking on the same split-second of powerchord as the beat rolls and glitches beneath, quickly abandoning all sense of rightful rhythmic orientation. Later it’s all synth hand-claps and digital squelches, exploded into tiny fragments that whoosh dangerously close to my ears. On screen, a square cascades like a spread deck of cards, fanning outward across future and past iterations, losing outline and angle in the overlap of shifted self. Together, sound and image shed all time-space integrity, knotted and crushed by the layers of multiplication and if-then function, complicating their own evolution until they can’t possibly find their way back again.

It’s almost as though Martin Klang has witnessed this chaos and taken heed. His music is carefully and precariously built, stacked in a brittle tower of drones and ticks and pops. Hi-hats spill out like coins on a kitchen floor (whoops – not quite careful enough), as the beat tip-toes between them in a nervous waltz, accompanied by what sounds like the glugging, croaking proclamations of an emptying kitchen sink. All adjustments are patiently negotiated. The soundscape switches from a duet between drone sweeps and popping fuses, to an ensemble of water-drenched bouncy balls of various sizes. I feel tense as I watch this music unfold, as though Klang’s synth might explode with just one heavy-handed application of change.

MARTIN KLANG + RUMBLESAN

This threat doesn’t lift as we move into Miri Kat’s thick, disaster prone dub, although her sound is fearless and indifferent to it: rhythms forward-roll into mists of radiation, smacking into hydraulic doors and tangling themselves in the zaps of crossed electronic wires. The rhythm comes and goes in huge obelisks of volume and visceral bass frequency, announcing themselves with thundering severity and then dropping out, allowing myth and ambient chimes to pool in the stretches of absence. At its loudest her performance is incredibly visceral, the beats wracked with the noises of ripping open, or tectonic electronic rumbling, or up-ended boxes of micro-sampled ticks and trinkets. Meanwhile, the visuals come in a gush of glitch-ruptured Playstation animation and over-zealous zoom lens, plunging into pixelated colours and flicker and burst into the far corners.

Archspace is packed out by now. Contrary to my silly assumptions that Algoraves would be an elitist, ultimately cerebral affair, a vast majority of the crowd are dancing (in fact, it is me – pinned against the side wall with my head in my notebook – who could be most readily accused of forgoing visceral enjoyment in favour of lofty pontification). This interface between code and human rhythm is further explored by Canute, tasking a live drummer to find a foothold within an ever-modulating algorithmic output, with snare drums snatching at synthesisers splayed in scattershot and krautrock 4/4s ploughing forward through a hail of ping-pong delays, while micro-samples bounce off the windscreen of those thoroughly human hits. The coded output slots into a new pacing and the drummer realigns accordingly, swerving in and around those blocks of binary exactitude, tumbling across those flatulent bursts of morse code and synthetic mandolin.

It’s a dizzy experience, and Heavy Lifting only nurtures the nausea further. Her set is like having a surreal dream while travel sick, head slumped out of the window of a gigantic cruise-liner, with the excitable voices of in-boat entertainment in one ear and the churn of the sea in the other. Spoken samples and revved motors whirl over beats that throb like an insistent headache, as samples eat themselves and fold over one another, blurred by the waves of sickness or sliced up into phonetic digital chirps. The beat throbs at ever-louder volumes. My heartbeat lodges itself in my head. Somewhere in the mixture of slur and abrasion – the precise combination of ambient wave and brash attack – Heavy Lifting strikes upon a strange form of ecstasy, as a dense rhythm rises from beneath and pushes the quease and wooze aside. It’s wonderful.

Due to a route closure and lengthy diversion affecting my journey home to Cheltenham, my own Algorave concludes tonight with a set from Belisha Beacon (my apologies to tonight’s final act, Luuma, who I had to miss as a result). Her improvisation builds itself, deconstructs itself, reshapes itself. There are no foundations, there is no final form. Raw samples (digital woodwind, dry synthetic percussion) enter one at a time and slide into place, methodically adding new angles and asymmetries to the overall shape, compounding individual decisions into a network of intersecting pulses and chimes. The 4/4 clicks into place as the rhythm shunts into a continuous stomp, finding momentary alignment before Belisha Beacon starts to pick apart the shape all over again. This methodical transparency is what sold me on the accessibility and open possibilities of live-coding. While some of tonight’s performances explore the potential of enacting numerous ideas simultaneously, splaying the code like projected firework embers, others explode the software mechanics into a series of singular steps. It’s like witnessing a film and its “making of” simultaneously, with my enjoyment of each beat enriched by the ability to share in the very spell that brought it to be.

Realignment

The app is now available! Please download and install for Android phones and tablets.

During the recent dash to pretotype the Anchorholds app for Creeknet, we have been chopping up HTML and processing images to retrofit our fork of the Open University project Salsa.

This requires rewriting the templates to build the first 16 sets of pages with matching images etc. I can’t say it’s been an easy collaborative process, even with the great services at hand from Sandstorm and Google, so thanks to all concerned for their tireless support and patience.
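At its core the template work is a loop of the kind sketched below; this is illustrative only, with invented paths and a made-up template, not the actual Salsa-derived build:

```python
# A minimal sketch of batch-building app pages from one HTML template, pairing
# each page with a matching image. Paths and the template are invented; the
# real Anchorholds build uses the project's own Salsa-derived templates.
from pathlib import Path
from string import Template

PAGE_TEMPLATE = Template(
    "<html><body><h1>$title</h1>"
    '<img src="images/$image" alt="$title"></body></html>'
)

out_dir = Path("build")
out_dir.mkdir(exist_ok=True)

for number in range(1, 17):   # the first 16 sets of pages
    title = f"Anchorhold {number}"
    html = PAGE_TEMPLATE.substitute(title=title, image=f"anchorhold_{number}.jpg")
    (out_dir / f"page_{number:02d}.html").write_text(html, encoding="utf-8")

print("wrote", len(list(out_dir.glob("page_*.html"))), "pages")
```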

Overall we were getting on fine with Sandstorm until some gremlins in the Davros share made the files read-only! With time lapping at our heels we made a switch to Google Drive to complete the task, but got into a synchronisation battle with one another. In the end we have resolved to build a staging server from where future versions of the app HTML will be tested. This could all have been handled better, so lessons learned!

During the last 12 months SPC has been working with individuals and groups based along Deptford Creek who are invested in local, social and technological networks. Some are resolutely off grid, harvesting energy and resisting normalisation pressures. Others take their time to make changes and take on new ideas but almost all have an investment in networks of one sort or another that they strive to build, maintain and protect.

This fantastic pictogram map of Creeknet was drawn by the Minesweeper Collective interns, working out of Deckspace media lab over the summer. It’s their impression of the people and spaces of the area as they have experienced them – where did all those bats come from?

As a whole it has been a very interesting, complex and sometimes confusing process of exchange that we have been careful to nurture rather than project onto with insensitive and inappropriate energy. Instead we have attempted a participatory engagement in local activities, supporting initiatives and sharing trust, listening to the needs of those we meet and working to understand the changing conditions.

It’s also been an opportunity to revisit some of the great relationships established in earlier network projects, the most recent of which, OWN, had fallen out of use in recent years. New mesh network equipment has been installed along the length of the Creek in support of the mazizones, which some have already begun customising to meet their specific network needs.

The current version of the resulting Anchorholds app will be available for download in time for the first day of the Creeknet Symposium on 20th June. For now it works only with Android smartphones and tablets. Once installed and running, it will push location-specific information to your screen when in proximity to a trail of Bluetooth beacon responders along Deptford Creek.

Today there is just a sprinkling of information preloaded in the app, but it is intended as one mechanism to promote public awareness of the DIY networks of Deptford Creek. We hope to extend its scope to list local resources and report on collected data that may be critical to the future well-being of all those who live and work in the area, cross its bridges and moor on its shores.
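The mechanism is simple enough to sketch (illustratively only, with invented beacon IDs and notes standing in for a real Bluetooth scan): map beacon identifiers to location-specific notes and show the note for the strongest, i.e. closest, beacon.

```python
# A sketch of the Anchorholds idea: map Bluetooth beacon identifiers to
# location-specific notes and show the note for the strongest (closest) beacon.
# The beacon IDs, notes and the scan results below are invented stand-ins for
# a real Bluetooth scan.

ANCHORHOLDS = {
    "beacon-01": "Brookmill Park: where the Ravensbourne becomes the Creek.",
    "beacon-02": "Birdsnest pub: Undercurrents gallery and mazizone prototype.",
    "beacon-03": "Hoy Steps: access point onto the river, cleared June 2017.",
}

def nearest_note(scan_results):
    """scan_results: list of (beacon_id, rssi_dbm); higher RSSI means closer."""
    known = [(rssi, bid) for bid, rssi in scan_results if bid in ANCHORHOLDS]
    if not known:
        return "No anchorhold nearby."
    _, best = max(known)
    return ANCHORHOLDS[best]

# Pretend a scan saw two beacons; the one at -58 dBm is closer.
print(nearest_note([("beacon-03", -58), ("beacon-01", -81)]))
```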

How much of this is fiction?

We took our students to FACT Liverpool to see the How much of this is fiction exhibition, where !Mediengruppe Bitnik has been exhibiting Delivery for Mr. Assange (2013).

Julian Assange has been living at the Ecuadorian embassy in London since June 2012. In early 2013, !Mediengruppe Bitnik sent a parcel to the WikiLeaks founder, in a work entitled Delivery for Mr. Assange. The parcel contained a camera which broadcast its entire journey through the postal system live on the Internet. Delivery for Mr. Assange is presented here in three parts, including an X-ray of the original package sent to the Embassy during the mail-art performance, and a text written by Daniel Ryser in 2014, which captures the extraordinary delivery and the uproar that followed on the Internet.
The largest element is Assange’s Room: a striking, sculptural 1:1 reproduction of Assange’s office at the embassy. The room is meticulously constructed entirely from memory (photography is not allowed in embassy rooms) after several visits made by the artists to the office. Assange’s lack of freedom is emphasised by the visitors’ freedom to walk in and out of the uncannily normal space. The physical restrictions placed on a seeker of political asylum stand in stark contrast to the reach offered by WikiLeaks, and the Internet, as a platform designed for free speech.

Accompanying the installation is the digital work Skylift (v0.2) by Adam Harvey, a geolocation-spoofing device that virtually relocates visitors to Assange’s residence at the Ecuadorian Embassy.

How much of this is fiction. is a touring exhibition, programme of events, and media campaign exploring the art and activist movement, Tactical Media, which emerged in the late 90s. Specifically, the project investigates the ways in which one of the key legacies of Tactical Media (namely, the politically inspired media hoax) exploits the boundary between fiction and reality. How much of this is fiction. examines the role, and social purpose, of the artist as Trickster.

This exhibition looks at the legacy of the initial projects, moments and acts within Tactical Media’s ‘history’, as well as how these approaches have altered in today’s era of mass self-mediation through the widespread availability of social media and other decentralised communication platforms. It also presents new works by contemporary artists who are working within the areas of politically engaged (media disseminated) art, using approaches which resonate with the ethos of Tactical Media, shifting public perception and awareness of issues through artistic experimentation and a call to the imagination.

Creeknet XF Symposium

It’s been a very hectic few weeks at SPC as we bring focus onto the DIY networks of Deptford Creek at the first Creeknet Symposium on 20th and 21st June.

The poster here, for you to print and put up in your window, outlines the event details, which can be found in full on the SPC event listings and at http://deptfordcreek.net

The Creeknet friends have been meeting regularly at venues up and down the creek. We have been exploring the fast-changing environment, revisiting access points onto the river, crossing bridges and improving our understanding of local concerns and ambitions. The last of these meetings before summer takes hold is on Monday 12th June at noon, in the Undercurrents gallery inside the Birdsnest pub on Deptford Church Street. We will be collecting images and stories to publish on the local network Anchorholds, a trail of information points along the creek, so please do come along to contribute your experiences!

Rapid progress was made by the very energetic Hoy Steps clear-up group on Monday 5th June. The huge overgrowth of Buddleia clogging views was cut down and disposed of in a flurry of action and enthusiasm. The vigorous roots of this plant have got deep into, and have damaged, the sea wall, and will continue to regrow unless more drastic measures to remove the remnants are adopted soon; even then they are likely to return!

Wooden pallets stored at street level have been sorted and stacked ready for re-use or removal and the rubbish sheet materials, plastic wrappers and polystyrene are bagged ready for disposal. We return early on Tuesday 13th to complete the clean-up process in preparation for a public viewing during Creeknet Symposium the following week.

Friends of Deptford Creek is a community group set up to support, represent and protect the human, natural and built environment of Deptford Creek, London. How do these two different groups work together? How does the changing landscape affect them? What technologies can help?

Find out by joining us over an exciting two days of public meet-ups and workshops to exchange ideas and explore the DIY networks of Deptford Creek (http://deptfordcreek.net/).

Meet MAZI (http://mazizone.eu/) partners from around Europe, chat to local community groups, play with the technologies that support local networks, and discuss what’s next for Deptford. You can attend all or part of these events over the two days by registering with Eventbrite here:

Tuesday 20th June 2017
Wednesday 21st June 2017

For further information please visit this website.

This week, starting Monday 12 June, we have a busy schedule to install equipment, complete work and do last-minute promotion (really!). Today we are meeting at the Undercurrents Gallery in the Birdsnest pub to update the mazizone prototype there and to meet local mariners and artists to discuss their network systems. On Tuesday it’s an early low tide and a 10 AM return to the Hoy Steps to complete the clear-up work and prepare for a visit the following week; refreshments provided. After lunch we will be installing Bluetooth beacons along the creek to mark out the Anchorhold locations.

Wireless Wednesday at http://bit.spc.org this week is dedicated to the preparation of print materials for distribution during the Creeknet Symposium, so please come along and help out, but please hold off on the broken PCs for a couple of weeks! On Thursday and Friday we will be testing the Creeknet Anchorholds app, a guide to the DIY networks of Deptford Creek. If you would like to help out, please call for more details, as we will be working along the length of the tidal creek from Brookmill Park to the Swing Bridge.

http://friends.deptfordcreek.net

Don’t forget to tell us you are attending Creeknet Symposium, not least so we can arrange catering! Please register.

Creeknet meet-up @ Hoy

The MAZI Project is working on an alternative technology, Do-It-Yourself networking, a combination of wireless technology, low-cost hardware, and free/libre/open source software (FLOSS) applications, for building local networks, known as community wireless networks.
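To make the idea tangible, here is a minimal sketch (not the MAZI toolkit itself) of the kind of local-only web service such a network hosts, using only Python’s standard library; the port is an arbitrary choice.

```python
# A minimal local-only web service of the kind a DIY community network might
# host: it serves the files in the current directory to anyone who joins the
# local wireless network. The port is an arbitrary choice.
from http.server import HTTPServer, SimpleHTTPRequestHandler

ADDRESS = ("0.0.0.0", 8080)   # listen on all local interfaces, port 8080

if __name__ == "__main__":
    server = HTTPServer(ADDRESS, SimpleHTTPRequestHandler)
    print(f"Serving local content at http://{ADDRESS[0]}:{ADDRESS[1]}/")
    server.serve_forever()
```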