
March 25/26, 2021
Theme 2021:
Bridging Distance
Inter/sections is the almost annual event organised by the PhD students of the Media and Arts Technology CDT at Queen Mary University of London. Due to the COVID-19 pandemic we skipped 2020, but we are back this year with a fully online two-day event of talks and workshops.
Technology is what keeps us connected throughout lockdowns, as we find ourselves physically separated from family, friends and colleagues. Wanting to connect with our community and expand our network during these isolated times, we are very happy to welcome practitioners from various fields of media and arts technology to work and talk with us about, and with, emerging technologies. The schedule for Inter/sections 2021 reflects the interdisciplinary nature of the PhD programme, touching on topics from making and e-textiles to sensory devices and 3D audio. While most of us currently spend hours and hours staring at screens, this showcase of ideas and hands-on workshops will stimulate conversations about the multisensory, embodied experiences technology can offer, and invite us to dream about being together again in a physical space.
The event is split into three blocks. On day 1 we will kick off with an e-textile workshop where you can learn how to craft a pompom musical instrument. This will be followed by an interactive panel discussion about technologically mediated bodies. Day 2 will be a full day focused on producing and discussing spatial audio.
Schedule (all times GMT)
Thursday, March 25
- 11:00 - 14:30h DIY e-Textiles Workshop: Craft a Pompom Musical Instrument
- 15:30 - 17:00h Bodies/technologies: a panel discussion about technologically mediated bodies
Friday, March 26
- 10:00 - 15:00h Soundstack: Audiovisual Pointclouds workshop
- 15:00 - 17:15h Soundstack: panel discussion about spatial sound through the browser
DIY e-Textiles Workshop
DIY e-Textiles Workshop: Craft a Pompom Musical Instrument
The workshop is facilitated by Sam Topley, a sound artist from Leicester, UK, who works with textiles to create handmade electronic musical instruments and interactive sound artwork. It is a hands-on activity during which we will build pompom synths and perform them in a remote, collaborative fashion. The focus is on crafting the interface, as a playful introduction to working with e-textile materials and DIY instrument building. The activity runs online for 3.5 hours (with a short break), using a pompom musical instrument kit designed by Sam. Participants do not need any particular tools or prior knowledge of textile or musical instrument making: the kits provide everything necessary to build a novel and fun textile musical instrument, which everyone will be able to keep.
Workshop Leader
Sam Topley is a sound artist and educator from Leicester (England, UK). She works with textiles to create handmade electronic musical instruments and interactive sound artwork, including giant pompom musical instruments, knitted or 'yarnbombed' loudspeakers, and electronic musical instruments with e-textile interfaces.
Topley shares her work internationally through performances, exhibitions, workshops, and presentations. Her work has received recognition and awards, including the AHRC Cultural Engagement Award 2019, BBC micro:bit Featured Artist 2019, the Dubai Maker Faire Featured Project Award 2019, and Best Paper and Best Workshop prizes at New Interfaces for Musical Expression 2016 and 2020; it also features in Nicolas Collins' Handmade Electronic Music (3rd edition).
Sam is a doctoral candidate at the Music, Technology and Innovation - Institute for Sonic Creativity (MTI2), De Montfort University, where she also lectures in experimental music, creative music technology, and community arts practice. Her PhD is co-supervised by Nottingham Trent University and funded by the Arts and Humanities Research Council (Midlands4Cities Doctoral Training Partnership).
samantha-topley.co.uk
Time March 25, 11:00 - 14:30h
Organised by Antonella Nonnis & Giacomo Lepri


Bodies/technologies
Technologies affect the embodied experience of being human in this world. Technologies read, interpret and mediate bodies. Technological enhancement, care, surveillance and control all play a role in the works of the artists Nicola Woodham, Sophie Hoyle and Katie Tindle. We are happy to welcome these three artists to each present an aspect of their work and join a panel discussion, where we will explore how technology can be both empowering and oppressive.
Panelists
Nicola Woodham composes experimental music, bringing in free improvisation with treated voice and noise. A year ago she began an intensive journey into creative technology, and she now hand-makes wearable e-textile sensors and codes for embodied audio performances. Pre-Covid, she performed in music venues and galleries, where she aimed to scale up her audibility and visibility as a disabled womxn. In real-space/online hybrids she is enjoying cracking open ways to create presence through haptic feedback, and sensory ways to make improvised music during her performances. Maximalist in approach, Nicola's work weaves together disparate threads, including the governance of disabled bodies, neutralising trauma through ritual, and the slippery source of the voice. She documents her making on www.nicolawoodham.com, and her recent 'Buffer' EP is available via Bandcamp.
nicolawoodham.com
Sophie Hoyle is an artist and writer whose practice takes an intersectional approach to post-colonial, queer, feminist, critical psychiatry and disability issues. Their work looks at the relation of the personal to (and as) the political, at individual and collective anxieties, and at how alliances can be formed where different kinds of inequality and marginalisation intersect. They relate personal experiences of being queer, non-binary and part of the MENA (Middle East and North Africa) diaspora to wider forms of structural violence. From lived experience of psychiatric conditions and trauma, including PTSD, they began to explore the history of biomedical technologies rooted in state and military surveillance and control.
sophiehoyle.com
Katie Tindle is an artist, organiser and educator living in London, originally from the North East of England. Her practice is based in writing, installation using moving image and sound, and web technologies. This work centres on wellness/illness, the body, poetics, feminist thought and data justice. Tindle's curatorial practice is based around the democratisation and demystification of art spaces, both online and in person. Tindle studied at Central Saint Martins (CSM) and Goldsmiths, University of London; she currently works at CSM and the Society for Research into Higher Education, and is a member of the artist collective in-grid.
katietindle.co.uk
Time March 25, 15:30 - 17:00h
Organised by Anna Nagele


Soundstack
Soundstack Introduction
Soundstack is a free event focused on the art and technologies of spatial sound. This year, on Friday 26th March, Soundstack brings you an online workshop about audio-visual point clouds in Unity using photogrammetry, as well as a session dedicated to spatial sound in the browser (something we all now need to do, and that is unlikely to change).
This is an intermediate-level event, and requires some understanding of spatial sound. The sessions will introduce you to artist-engineers working at the cutting edge of spatial sound for VR, AR, installations and performance.
You will hear about specific software and techniques, as well as the aesthetic potential of working with immersive sound in fixed and real-time settings. The workshop offers hands-on instruction, as well as demonstrations of work and discussions.
Soundstack will give you a better understanding of how to approach sound as space. Due to online delivery, the usual limitations on numbers don't apply - so spread the word, and join in.
Schedule on March 26 (all times GMT)
Session A: Audiovisual point-clouds workshop
- 10:00 - 10:15h Introductions
- 10:15 - 11:15h 3D scanning using your phone
- 11:15 - 11:45h Rendering your scan data / video presentations
- 11:45 - 12:45h Preparing your data for Unity
- 12:45 - 13:15h Break / video presentations of spatial sound hubs
- 13:15 - 14:45h Using your data in Unity + Q&A
Session B: Spatial sound through the browser
- 15:00 - 15:15h Introduction to the session
- 15:15 - 16:00h Part 1 'In practice': case studies (Assembly 2020 from Call & Response, with Tommie Introna from Black Shuck, using Google's Omnitone + Acoustic Atlas from Cobi van Tonder, using the Web Audio API)
- 16:00 - 16:30h Part 2 'Ambisonics through the browser': IEM's HOAST with Thomas Deppisch and Nils Meyer-Kahlen + Leslie Gaston-Bird + Envelop's Earshot with Chris Willits & Roddy Lindsay
- 16:30 - 17:10h Part 3 'Web Audio API': Queen Mary University of London's Josh Reiss with Nemisindo + Imperial College London's Lorenzo Picinali with Pluggy Project + High Fidelity's Philip Rosedale with Spatial Audio API + Leslie Gaston-Bird
- 17:10 - 17:15h Wrap up
Session A: Audiovisual Pointclouds
The workshop Audiovisual Pointclouds will allow participants to make 3D scans of any objects to which they have access, using free software and their smartphones. They will then bring these objects into the game engine Unity as photogrammetry data. Finally, they will be able to manipulate this data to create fully three-dimensional digital artwork, which can be taken in any number of creative directions.
You can either watch the session (passive participation) with an opportunity to ask questions, or get feedback from Kathrin as you go (active participation) by applying for a place. Places are limited, and applying requires you to submit a Unity project demonstrating your basic Unity skills.
Workshop Leader
Kathrin Hunze is a media artist and artistic researcher. She studied Sound Design and Communication Design at the Hamburg University of Applied Sciences (2016), and is a graduate (2019) and distinguished graduate (2020) of the Art and Media programme at the Berlin University of the Arts. She has been a resident at the Academy of Applied Arts Vienna (2020) and the Institute for Electronic Music and Acoustics (IEM) (2019). She lectures in Art and Media, Fashion Design and Computation & Design at the Berlin University of the Arts, and in Computing and the Arts at the Berlin School of Popular Arts. She lives and works in Berlin.
raumperspektive.com
Workshop requirements
Knowledge
- Good base knowledge of Unity
- Basic C# programming skills
Equipment and software
- Smartphone
- Second screen for a better workflow (a recommendation, not a must!)
- Computer with access to the internet, and sufficient CPU to run Unity, Zoom, and third-party software in real time
- Unity (version will be announced soon)
- Meshlab
- CloudCompare
- Regard3D
Time March 26, 10:00 - 15:00h
Organised by Angela McArthur


Session B: Spatial sound through the browser
The past year has necessitated new ways of working spatially, specifically through the browser. Yet this adds a layer of practice and learning that can be an obstacle for the creative practitioner. What tools are available to overcome the limitations of defaulting to static binaural renders of spatial sound works? What ecosystems do these tools sit within, or integrate with? Where should we start, and what do we need to know (how much additional time do we need to invest in learning the technologies, can we integrate existing workflows, and what outputs are possible if we work through the browser)? These questions will be tackled during Soundstack's afternoon session.
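For a first taste before the session, here is a minimal sketch of what spatial sound through the browser can mean at its simplest, using nothing but the Web Audio API's built-in HRTF panner (the file name is a placeholder). Unlike a static binaural render, the source here can be repositioned in real time:

```ts
// A minimal sketch, assuming a mono file at a placeholder URL: the Web
// Audio API's HRTF panner renders the source binaurally and can be
// repositioned in real time, unlike a fixed, pre-rendered binaural file.
const ctx = new AudioContext();

async function playSpatialised(url: string): Promise<void> {
  // Fetch and decode the audio file into a buffer.
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = new AudioBufferSourceNode(ctx, { buffer });
  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF',
    positionX: 2,  // two metres to the listener's right
    positionZ: -1, // slightly in front
  });

  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Browsers require a user gesture before audio can start.
document.addEventListener('click', () => {
  ctx.resume();
  playSpatialised('example-sound.mp3'); // hypothetical file
}, { once: true });
```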
Panelists
Part 1 'In practice': case studies
Tommie Introna collaborates with artists, working predominantly with sound and programming. He is a member of Black Shuck, a co-operative that produces moving image, audio and digital projects. He also works with young people, facilitating peer-led artistic projects.
blackshuck.co
callandresponse.org.uk
Dr. Cobi van Tonder is a creator, composer, and Marie Skłodowska-Curie Research Fellow at the University of York.
Acoustic Atlas is a browser-based platform for virtual acoustic simulations of natural and cultural heritage sites. Many heritage sites are documented in great detail from a visual perspective, but little sonic data exists. Acoustic Atlas is a collaborative archive in progress for acousticians, archaeologists, and sound artists to share sound data as immersive, real-time auralisations.
Listen with headphones (ideally non-Bluetooth, to avoid feedback) here:
acousticatlas.de
More info:
acousticatlas.info
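As background, browser auralisation is commonly built on convolution with a measured impulse response; the sketch below shows that generic technique with the Web Audio API's ConvolverNode (file names are placeholders, and this is not Acoustic Atlas's actual code):

```ts
// Generic auralisation sketch: a dry signal is convolved with a measured
// impulse response (IR) of a site, so the dry sound takes on the site's
// acoustics. File names are placeholders.
const ctx = new AudioContext();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const res = await fetch(url);
  return ctx.decodeAudioData(await res.arrayBuffer());
}

async function auralise(): Promise<void> {
  const [dry, ir] = await Promise.all([
    loadBuffer('dry-voice.wav'),            // hypothetical dry recording
    loadBuffer('site-impulse-response.wav'), // hypothetical measured IR
  ]);

  const source = new AudioBufferSourceNode(ctx, { buffer: dry });
  // The ConvolverNode applies the site's acoustics to the dry signal.
  const convolver = new ConvolverNode(ctx, { buffer: ir });

  source.connect(convolver).connect(ctx.destination);
  source.start();
}

auralise();
```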
Part 2 'Ambisonics through the browser'
Thomas Deppisch is a PhD student at the Applied Acoustics division of Chalmers University of Technology,
working in the realm of spatial audio.
GitHub
University Profile
Nils Meyer-Kahlen is a PhD student at the Aalto Acoustics Lab. Since his master's, his main interest has been spatial audio processing and perception. His current aim is to faithfully reproduce the acoustics of different spaces for Mixed Realities.
Leslie Gaston-Bird (AMPS, MPSE) is a Dante Level-3 Certified audio engineer specializing in
5.1 re-recording mixing (dubbing) and sound editing.
She is a former Governor-at-Large for the Audio Engineering Society,
and author of the book Women in Audio.
She is a member of the Recording Academy (The Grammys®),
a member and councilperson of the Association of Motion Picture Sound (AMPS), and member of Motion Picture Sound Editors (MPSE). She has worked for National Public Radio (Washington, D.C.), Colorado Public Radio, the Colorado Symphony Orchestra, Post Modern Company, and was a tenured Associate Professor at the University of Colorado Denver.
Mixmessiahproductions
Christopher Willits is a pioneering electronic musician, producer, educator, and co-founder & director of Envelop, a nonprofit with the mission to unite people through immersive listening experiences. As one of the core artists on the Ghostly International label, Willits' immersive ambient music has reached millions of listeners and includes collaborations with Ryuichi Sakamoto and Tycho.
envelop.us
christopherwillits.com
Roddy Lindsay - entrepreneur and software engineer. Co-founder and board member of Envelop, and co-founder of Hustle. Performs live immersive electronic music as The Ride.
envelop.us
Part 3 'Web Audio API'
Josh Reiss - Professor, Queen Mary University of London / Co-founder, Nemisindo
Josh Reiss is a Professor with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers, and co-authored the book Intelligent Music Production and the textbook Audio Effects: Theory, Implementation and Application. He is the President-Elect and a Fellow of the Audio Engineering Society (AES). He co-founded the highly successful spin-out company LandR; his second start-up, Tonz, has received investment; and he recently launched a third, Nemisindo.
Academic Profile
Lorenzo Picinali - Reader in Audio Experience Design, Imperial College London
Lorenzo Picinali leads the Audio Experience Design (AXP) research theme within the Dyson School of Design Engineering. In past years he has worked in Italy (Università degli Studi di Milano), France (LIMSI-CNRS and IRCAM) and the UK (De Montfort University and Imperial College London) on projects related to 3D binaural sound rendering, interactive applications for visually and hearing impaired individuals, audiology and hearing aid technology, audio and haptic interaction and, more generally, acoustical virtual and augmented reality. His recent research has focused mainly on the implementation of a binaural spatialisation tool, which also integrates a hearing loss simulation and virtual hearing aids, and on the selection, evaluation and adaptation of Head Related Transfer Functions.
Academic Profile
Dyson School of Design Engineering
Philip Rosedale is the co-founder and CEO of High Fidelity. The company's API allows developers to integrate its patented real-time spatial audio - originally developed for immersive VR experiences - into their apps, games, and websites. In 1995, Rosedale created FreeVue, one of the first internet videoconferencing apps, which was acquired by RealNetworks. He founded Linden Lab in 1999, the creator of Second Life, which has become a home for millions of people and has a multi-billion dollar virtual economy. Philip holds a B.S. in Physics from the University of California, San Diego.
Software Information
Ambisonics
Organisation: IEM
App: HOAST
HOAST360 is an open-source, higher-order Ambisonics 360° video player with acoustic zoom. It dynamically outputs a binaural audio stream from up to fourth-order Ambisonics audio content. Technical details are explained in an AES eBrief.
hoast.iem.at
GitHub
aes.org
Organisation: Envelop
App: Earshot
A free and open-source transcoder for live streaming Higher-Order Ambisonics. It is based on nginx, MPEG-DASH, and the Opus codec, which supports up to 255 audio channels (or 14th-order Ambisonics). Earshot comes with an intuitive web application that allows developers to debug and monitor their multichannel audio DASH streams, and to easily test different dash.js client settings to optimize their end-user experience.
envelop.us
GitHub
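On the listening side, a client could subscribe to such a stream with dash.js; a minimal sketch (with a hypothetical manifest URL) is below. Binaurally decoding the received Ambisonics channels would be a further step, e.g. with Omnitone, described next:

```ts
// Client-side sketch: playing a multichannel Ambisonics DASH stream with
// dash.js. The manifest URL is hypothetical, and this only covers stream
// playback, not the Ambisonics-to-binaural decode.
import * as dashjs from 'dashjs';

const audioElement = document.querySelector('audio') as HTMLAudioElement;
const player = dashjs.MediaPlayer().create();

// Attach the player to the element, point it at the live manifest, autoplay.
player.initialize(audioElement, 'https://example.com/live/stream.mpd', true);
```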
Organisation: Google
App: Omnitone
Omnitone is a JavaScript implementation of an ambisonic decoder that allows you to binaurally render an ambisonic recording directly in the browser.
GitHub
Google Blog
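A first-order sketch following the pattern in Omnitone's documentation (the media file is a placeholder; check the project's README for the current API):

```ts
// Sketch of binaural decoding of a first-order Ambisonics (FOA) recording
// with Omnitone's FOARenderer. The file name is a placeholder.
import Omnitone from 'omnitone/build/omnitone.min.esm.js';

const ctx = new AudioContext();
const element = new Audio('foa-recording.ogg'); // 4-channel B-format file
const source = ctx.createMediaElementSource(element);

const renderer = Omnitone.createFOARenderer(ctx);

renderer.initialize().then(() => {
  // Route the 4-channel stream through the renderer for binaural output.
  source.connect(renderer.input);
  renderer.output.connect(ctx.destination);
  element.play();
});
```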
Web Audio API
Organisation: Queen Mary University of London
App: Nemisindo
Nemisindo Ltd is a high-tech start-up, spun out from academic research, offering sound design services based on innovations in procedural audio. They recently secured an Epic MegaGrant to provide procedural audio for the Unreal game engine, and their online system offers real-time sound effect synthesis in the browser. The system comprises a multitude of synthesis models, with post-processing tools (audio effects, temporal and spatial placement, etc.), for users to create scenes from scratch. Each model can generate sound in real time, allowing the user to manipulate multiple parameters and shape the sound in different ways.
nemisindo.com
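Nemisindo's own models are not public code, but the underlying idea of procedural audio - synthesising an effect from scratch and exposing its parameters - can be sketched in plain Web Audio, for example a crude wind model:

```ts
// Generic procedural-audio sketch (not Nemisindo's code): a wind-like
// sound synthesised from scratch, with one user-controllable parameter,
// as in parametric sound-effect models.
const ctx = new AudioContext();

function startWind(intensity: number): void {
  // White-noise source: two seconds of random samples, looped.
  const noiseBuffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
  const data = noiseBuffer.getChannelData(0);
  for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
  const noise = new AudioBufferSourceNode(ctx, { buffer: noiseBuffer, loop: true });

  // A band-pass filter shapes the noise; a slow LFO sweeps the centre
  // frequency so the "wind" gusts rather than hissing statically.
  const filter = new BiquadFilterNode(ctx, { type: 'bandpass', frequency: 400, Q: 1 });
  const lfo = new OscillatorNode(ctx, { frequency: 0.2 }); // 0.2 Hz gusts
  const lfoGain = new GainNode(ctx, { gain: 200 * intensity });
  lfo.connect(lfoGain).connect(filter.frequency);

  const level = new GainNode(ctx, { gain: 0.3 * intensity });
  noise.connect(filter).connect(level).connect(ctx.destination);

  noise.start();
  lfo.start();
}

startWind(0.8); // intensity is this model's single exposed parameter
```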
Organisation: Imperial College London
App: Pluggy / PlugSonic
Pluggy is a web app, hosted on Heroku, that allows users to import their own audio files (only MP3 is supported), create soundscapes and interact with them. PlugSonic is a suite of web- and mobile-based applications for the curation and experience of 3D interactive soundscapes and sonic narratives in (and beyond) the cultural heritage context.
Project page
PlugSonic Soundscape Web
PlugSonic Sample
See two short demos here
and here
Two brief conference papers describe these functionalities, together with those of a web-based audio editor created with the Web Audio API: AES and MDPI
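As a rough illustration of the soundscape idea (a generic sketch, not PlugSonic's code): sources sit at fixed positions and the listener is moved among them with the Web Audio API's listener parameters:

```ts
// Generic soundscape-navigation sketch: sources are placed at fixed
// positions and the listener moves among them, so every source is
// re-rendered binaurally as the user "walks" through the scene.
const ctx = new AudioContext();

function placeSource(buffer: AudioBuffer, x: number, z: number): void {
  const src = new AudioBufferSourceNode(ctx, { buffer, loop: true });
  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF',
    positionX: x,
    positionZ: z,
  });
  src.connect(panner).connect(ctx.destination);
  src.start();
}

// Moving the listener updates the whole scene relative to the new spot.
// (Older browsers expose only listener.setPosition(x, y, z) instead.)
function moveListener(x: number, z: number): void {
  ctx.listener.positionX.value = x;
  ctx.listener.positionZ.value = z;
}
```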
Organisation: High Fidelity
App: Spatial Audio API
Real-time spatial audio API for websites, apps, games, etc.
highfidelity.com/api
highfidelity.com/zaru (demo)
“You may be surprised - our API is super simple. We have had people get a simple web app up and running within 15 min! Here's a link to a number of sample Guides.”
Time March 26, 15:00 - 17:15h
Organised by Angela McArthur

