Acoustic Telepresence
Current remote monitoring interfaces position operators as observers looking at screens that represent vessels at a distance. Information is abstracted into numbers, graphs, and status indicators. The operator receives data but remains separate from the vessel's physical reality.
This project investigates presence through sound.
When an operator connects to a vessel, they enter an acoustic environment. Telemetry streams (GPS, engine, motion, weather) and captured audio (deck, engine room, hydrophone) arrive as raw material. The operator shapes this material by adjusting tempo, filtering frequencies, spatializing sources, and mixing levels. They craft a soundscape that expresses the vessel's state in terms they can inhabit.
This crafting is attunement: the operator calibrating themselves to the vessel until its rhythms feel legible, until they can sense its state without active scrutiny, until they notice immediately when something shifts because the soundscape they've grown accustomed to has changed.
Multi-Vessel Presence
Operators typically manage 3-5 vessels simultaneously. In the acoustic telepresence paradigm, this means maintaining 3-5 distinct soundscapes, each one a crafted expression of a vessel's state, each one a place the operator can step into.
Context switching becomes spatial. Moving attention from Vessel 1 to Vessel 2 means transitioning between acoustic environments, from one vessel's characteristic drone and rhythm to another's. The operator moves between vessels, present with each in turn while maintaining peripheral awareness of others through ambient sound.
AI as Participant
Alongside alerts, notifications, and visual overlays, the AI also communicates through the soundscape itself.
When the AI detects something worth attention, it inflects the soundscape. Perhaps the relevant stream brightens slightly, or shifts forward in the mix, or develops a subtle textural emphasis. The operator may not consciously register the intervention, but their attention moves.
This creates a feedback loop. The operator shapes the soundscape; the AI responds to what it observes in the data; the operator notices shifts and investigates or ignores them; the AI learns which inflections catch attention. Over time, operator and AI develop a shared vocabulary of acoustic emphasis, a mode of communication that operates below explicit instruction.
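This loop can be sketched as a toy model: the AI applies a small gain inflection to one stream and updates a per-stream estimate of how well that inflection draws attention. The class name, boost size, and update rule are illustrative assumptions, not the project's implementation.

```python
class InflectionModel:
    """Toy sketch of the operator-AI feedback loop.

    The AI nudges one stream's gain; if the operator then attends to that
    stream, the inflection is reinforced. All names and the update rule
    are illustrative assumptions.
    """

    def __init__(self, streams, boost_db=2.0, lr=0.2):
        self.boost_db = boost_db
        self.lr = lr
        # Per-stream estimate of how reliably a gain inflection draws attention.
        self.efficacy = {s: 0.5 for s in streams}

    def inflect(self, stream, gains_db):
        """Return a new gain map with a subtle boost on one stream."""
        out = dict(gains_db)
        out[stream] = out.get(stream, 0.0) + self.boost_db
        return out

    def feedback(self, stream, operator_attended):
        """Reinforce or decay the inflection's estimated efficacy."""
        target = 1.0 if operator_attended else 0.0
        self.efficacy[stream] += self.lr * (target - self.efficacy[stream])
```

Over many cycles, streams whose inflections the operator consistently notices accumulate higher efficacy, which is one simple way a "shared vocabulary of acoustic emphasis" could be represented.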
The Research Topic
The project investigates whether this mode of relation to the vessel—presence through acoustic attunement in combination with observation through visual dashboards—supports different capacities: sustained awareness without fatigue, pattern recognition through embodied familiarity, sensitivity to change through dwelling in a soundscape over time.
Visual Noise as Resource
Current remote vessel interfaces actively cancel forms of "visual noise" to prevent operator discomfort. The suppressed data contains valuable information that OpenAmbient proposes to sonify:
| Visual "Noise" | Currently Suppressed | Sonification Mapping |
|---|---|---|
| Boat Motion (pitch/roll/yaw) | Stabilized to prevent dizziness | Spatial audio panning |
| Engine Vibrations | Dampened in control rooms | Rhythmic texture/drone |
| Wave Impact Patterns | Smoothed visually | Percussive feedback |
| Radar Sea Clutter | Algorithmically removed | Ambient texture layer |
| Spray & Wake | Repetitive patterns ignored | White noise textures |
| Hull Stress/Flex | Invisible to cameras | Low-frequency drone |
| Weather Interference | Rain/glare filtered | Environmental atmosphere |
| Horizon Drift | Artificially stabilized | Subtle pitch shifts |
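As a minimal sketch of the first table row (boat motion mapped to spatial audio panning), the snippet below reduces the mapping to a single axis: roll angle becomes a stereo pan position, rendered with constant-power panning. The function names and the 30° roll range are illustrative assumptions.

```python
import math

def motion_to_pan(roll_deg, max_roll=30.0):
    """Map vessel roll angle (degrees) to a pan position in [-1.0, 1.0].

    Illustrative sketch of the 'Boat Motion -> Spatial audio panning' row,
    reduced to the roll axis; the max_roll range is an assumption.
    """
    clamped = max(-max_roll, min(max_roll, roll_deg))
    return clamped / max_roll

def pan_to_gains(pan):
    """Constant-power stereo panning: pan in [-1, 1] -> (left, right) gains."""
    angle = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)
```

Constant-power panning keeps perceived loudness steady as a source moves across the stereo field, which matters if the panned motion stream is meant to stay in the attentional periphery rather than jump forward whenever the vessel rolls.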
Technical Architecture
Version Evolution: OAv1 → OAv2 → OAv3
The project evolves through three hardware iterations, each adding capability while maintaining the open-source ethos:
| Version | Hardware | Software | Cost | Status |
|---|---|---|---|---|
| OAv1 | Logitech MX Keys + MX Master (owned) | Ableton Live Suite (licensed) | €0 | Active — Jan 2026 |
| OAv2 | Intech Studio Grid: VSN1 ($233), TEK2 ($206), EF44 ($161), KNOT ($99), PBF4 ($143) | Open-source Lua + Grid Editor | ~$842 | Planned — Feb-Mar 2026 |
| OAv3 | Monome ecosystem: Norns Shield, Arc, Crow, Teletype | Open-source SuperCollider + Lua | ~€1,500-2,000 | Planned — 2026 |
Each iteration increases openness: OAv1 uses commercial software with proprietary protocols; OAv2 moves to open-source firmware (Lua) with commercial hardware; OAv3 achieves fully open hardware and software through the Monome ecosystem's transparent designs.
Data Streams
The vessel generates multiple data streams that can be shaped into sound:
Telemetry: Navigation (position, heading, speed, course changes), propulsion (engine RPM, temperature, fuel flow, load), motion (pitch, roll, yaw, acceleration), environment (wind, pressure, sea state).
Captured Audio: Deck ambience (wind, spray, operational activity), engine room (mechanical signature), hydrophone (underwater acoustic environment).
Supplementary Sensors: Hull stress, vibration, raw radar feed.
How these streams become sound—what maps to pitch, to rhythm, to texture, to spatial position—is not prescribed. The system provides mappings as starting points; operators adjust until the soundscape feels legible to them.
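One way such starting-point mappings could be represented is a simple table from telemetry stream to sound parameter, with a linear rescaling into the parameter's range. The stream names, parameter choices, and ranges below are illustrative assumptions, not the system's actual defaults.

```python
# Illustrative default mapping table (assumed names and ranges).
DEFAULT_MAPPINGS = {
    "nav.speed":     {"parameter": "tempo",   "range": (60.0, 140.0)},   # knots -> BPM
    "engine.rpm":    {"parameter": "pitch",   "range": (55.0, 220.0)},   # drone Hz
    "motion.roll":   {"parameter": "pan",     "range": (-1.0, 1.0)},
    "env.sea_state": {"parameter": "texture", "range": (0.0, 1.0)},
}

def map_value(stream, value, in_range):
    """Linearly rescale a telemetry value into the stream's sound-parameter range."""
    lo_in, hi_in = in_range
    lo_out, hi_out = DEFAULT_MAPPINGS[stream]["range"]
    t = (value - lo_in) / (hi_in - lo_in)
    t = max(0.0, min(1.0, t))  # clamp out-of-range readings
    return lo_out + t * (hi_out - lo_out)
```

Because the mappings are data rather than code, an operator adjusting "until the soundscape feels legible" amounts to editing entries in this table, which keeps the defaults genuinely provisional.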
Crafting the Soundscape
Operators shape data streams through familiar audio parameters:
Tempo can be adjusted globally or per-stream. Slowing a stream reveals detail; speeding it up compresses time. What tempo feels right depends on the operator, the vessel, the operational context.
Filtering isolates frequency bands, allowing operators to foreground or suppress aspects of a stream.
Time-stretching uses different algorithms (granular, spectral, transient-preserving) that each produce distinct textures from the same source material.
Effects (reverb, delay, compression) shape the acoustic character of streams—making them more or less prominent, more or less spacious.
Mixing (levels, mute, solo, spatial position) determines how streams relate to each other within the overall soundscape.
These are not controls for optimising detection metrics. They are means of attunement—ways the operator calibrates the soundscape until they can dwell in it.
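The mixing semantics described above (levels, mute, solo) can be sketched with standard mixer conventions; treating them as conventional is an assumption about this system, made here for illustration.

```python
def effective_gains(streams):
    """Resolve per-stream level/mute/solo into effective linear gains.

    streams: {name: {"level": float, "mute": bool, "solo": bool}}
    If any stream is soloed, only soloed streams sound -- standard mixer
    semantics, assumed here for illustration.
    """
    any_solo = any(s.get("solo") for s in streams.values())
    gains = {}
    for name, s in streams.items():
        audible = not s.get("mute") and (not any_solo or s.get("solo"))
        gains[name] = s.get("level", 1.0) if audible else 0.0
    return gains
```

Solo is the operationally interesting control: it lets an operator step fully into one stream (say, the hydrophone) while the rest of the soundscape drops away, then return to the full mix.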
Psychoacoustic Resources
Research in psychoacoustics offers resources for thinking about acoustic monitoring, though how operators actually use them remains an empirical question.
Tempo and attention. Faster rhythms tend to increase alertness; slower rhythms support relaxed awareness. Operators may find themselves adjusting tempo in response to operational demands, or they may settle into configurations that suit their working style regardless of context.
Time dilation. Slowing audio reveals microstructure that real-time playback compresses. Whether this aids pattern recognition in vessel telemetry, and under what conditions, is part of what this project investigates.
Habituation. Static acoustic environments fade from attention over time. Variation—whether introduced by the operator, by the AI, or by actual changes in vessel state—may help maintain awareness. The dynamics of habituation in crafted operational soundscapes are not well understood.
These principles inform the system's design but do not determine how operators will use it.
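On the habituation point, one minimal mechanism for keeping a soundscape from becoming fully static is a small bounded random walk on a parameter. This is a sketch under assumed step sizes and bounds, not a claim about what the system will do.

```python
import random

def drift(value, step=0.02, lo=0.0, hi=1.0, rng=random):
    """One step of slow random variation on a soundscape parameter.

    A minimal anti-habituation sketch: the parameter takes a small bounded
    random walk so the soundscape never settles into perfect stasis.
    Step size and bounds are illustrative assumptions.
    """
    value += rng.uniform(-step, step)
    return max(lo, min(hi, value))
```

Whether such synthetic variation helps or merely adds noise is exactly the kind of empirical question the section above flags: variation introduced by the AI competes with variation that signals actual changes in vessel state.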
System Architecture
Evolving from TAO
Project TAO established foundational infrastructure and insights that inform OpenAmbient.
Audio Travels Lighter Than Video
In bandwidth-constrained transmission, compressed audio (OGG Opus at 192kbps) requires approximately 24 KB/s with decoding latency under 50ms. Compressed video (H.264 at 500kbps) requires 62.5 KB/s with decoding latency of 100-200ms. For offshore vessels relying on satellite or cellular links, this difference matters: audio streams arrive faster and more reliably. This suggests audio may serve as the more responsive channel for real-time vessel state.
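The bandwidth figures above follow directly from the bitrates (1 byte = 8 bits):

```python
def stream_bytes_per_second(bitrate_kbps):
    """Convert a codec bitrate in kilobits/s to kilobytes/s."""
    return bitrate_kbps / 8.0

audio_kbs = stream_bytes_per_second(192)  # Opus audio -> 24.0 KB/s
video_kbs = stream_bytes_per_second(500)  # H.264 video -> 62.5 KB/s
ratio = video_kbs / audio_kbs             # video needs ~2.6x the bandwidth
```

On a constrained satellite link, that ~2.6x difference compounds with the lower decoding latency to make audio the more dependable real-time channel.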
Techniques Transfer
TAO developed methods for acoustic capture in exposed environments—wind management, power-constrained transmission (store-and-burst protocols reducing modem power by 95%), autonomous operation without continuous maintenance. These apply directly to vessel-mounted microphones operating in similar conditions.
An Ongoing Project
TAO continues as autonomous listening infrastructure, live at tao.groise.no. OpenAmbient extends its approach from environmental soundscapes to operational contexts.
Connected to BRINE
BRINE (presented at Uroboros Festival, December 2025) explored human-AI collaboration through whale communication data. The project fine-tuned the VampNet neural audio architecture on Project CETI sperm whale codas to generate "whale-textured techno"—music that retains cetacean acoustic signatures while remaining performable by human artists.
The methodological connection lies in how both projects approach acoustic material that is not structured for human perception. BRINE worked with whale codas, shaping them through iterative adjustment until they became performable as music. OpenAmbient applies a similar process to vessel telemetry: operators shape data streams that have no inherent sonic form, configuring them into soundscapes through ongoing attunement.
TAO and BRINE were preparatory investigations. TAO developed acoustic telepresence above water, establishing methods for remote presence through sound. BRINE extended this underwater, working with whale communication to understand the acoustic life of the ocean medium itself in the context of AI. OpenAmbient applies the learnings of both to the remote operation of vessels.
Trajectory: ABIS
BRINE's methodology extends toward ABIS (Ars Biologica Independent Studies), an educational programme organised by Uroboros in collaboration with TBA21 as part of České Budějovice European Capital of Culture 2028. AHO is a partner institution. The programme focuses on transdisciplinary research into environmental topics, and offers a context for investigating how AI-mediated acoustic environments might inform approaches to ecological sensing and multispecies awareness.
Strategic Positioning
Extends OpenBridge
OpenBridge is an open-source design system for industrial and maritime interfaces, developed since 2017 by OICL's 45-partner consortium (across OpenRemote, OpenAR, OpenZero, OpenDesign, and the JIP project) through NFR- and EU-funded research. OpenBridge establishes visual interface standards for safe, efficient operations. OpenAmbient adds the missing auditory dimension: acoustic design guidelines complementing visual standards, multimodal alert hierarchies, and joint evaluation frameworks.
Demonstrates MishMash Impact
MishMash Centre for AI & Creativity (NOK 173M, 2025-2030) explores AI-creativity intersections. OpenAmbient bridges three work packages:
- WP1: Real-time embodied AI performance
- WP2: Sound art + AI in production workflows
- WP7: Industrial operators + creative problem-solving (led by OICL)
Partners & Collaborators
Uroboros Collective
Festival for Design and Art — BRINE Presentation at Uroboros 2025
2025.uroboros.design
HEMS
Helsinki Electronic Music Studio — Creative + Technical Consulting
helsinkielectronicmusicstudio.org
Implementation Timeline
Phase 1: OAv1 Build + DIS 2026 Paper writing
Working prototype with Logitech MX controller + Ableton Live Suite. Mock telemetry generator, OSC receivers, AI performance logger. Zero additional hardware cost.
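A mock telemetry generator of the kind Phase 1 calls for can be sketched as a producer of OSC-style (address, value) messages. The addresses and sinusoidal signals below are assumptions for illustration; in the prototype these messages would be sent over UDP (for example with the python-osc library) to the OSC receivers.

```python
import math

def mock_telemetry(n_samples, dt=1.0):
    """Generate OSC-style (address, value) telemetry messages.

    A stand-in for the Phase 1 mock telemetry generator: sinusoidal engine
    RPM and roll signals under assumed OSC addresses. Signal shapes and
    addresses are illustrative, not the prototype's actual schema.
    """
    for i in range(n_samples):
        t = i * dt
        yield ("/vessel/1/engine/rpm", 1500.0 + 50.0 * math.sin(0.1 * t))
        yield ("/vessel/1/motion/roll", 5.0 * math.sin(0.5 * t))
```

Keeping the generator decoupled from transport means the same message stream can feed the Ableton sonification patch during development and be swapped for real vessel telemetry later without changing the receiving end.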
Phase 2: Lyra Workshop + Transmediale + OAv1 Tests
Lyra Pramuk workshop (Jan 6-10), CTM Festival (Jan 24 - Feb 2), Transmediale (Jan 31 - Feb 2). Test psychoacoustic methodology with artistic material. Initial OAv1 validation.
Phase 3: OAv2 Build
Intech Studio Grid integration (VSN1, TEK2, EF44, KNOT, PBF4). Open-source Lua firmware. Professional demo ready for Ocean Industries Lab.
Phase 4: Field Testing
Operator trials with real telemetry. Validate with Ocean Industries Lab partners. Field deployment in Finland with Ville Hyvönen.
Phase 5: DIS 2026 Singapore
Long paper + workshop submission. Combines Groise framework presentation with TAO tropical deployment and maritime industry engagement (Port of Singapore).
Phase 6: OAv3 Build
Monome ecosystem integration (Norns, Arc, Crow, Teletype). Fully open-source SuperCollider + Lua. Target: ABIS 2026 presentation.
Resources & Budget
Infrastructure
Kosmos ONGOING
Kosmos is an AI service providing inference capabilities for research projects. Used for AI-in-the-loop prototyping and experimental interaction modes.
$200/month
Claude API RESEARCH
AI-in-the-loop prototyping, analysis pipelines, experimental interaction modes.
€500
platform.claude.com →
Hardware (OAv2/v3)
Intech Studio Grid (OAv2)
VSN1 ($233) + TEK2 ($206) + EF44 ($161) + KNOT ($99) + PBF4 ($143). Open-source Lua firmware.
~$842
intech.studio →
Monome Ecosystem (OAv3)
Norns Shield, Arc, Crow, Teletype. Fully open hardware + SuperCollider. Phase 6 target.
~€1,500-2,000
monome.org →
Events & Workshops
Lyra Pramuk Workshop JAN 6-10
Voice synthesis + experimental audio. Test psychoacoustic methodology with artistic material.
€319
campfr.com →
CTM + Transmediale Berlin JAN 24 - FEB 2
Bringing OAv1 prototype to Berlin for feedback from art, industry, and academia communities before OAv2 development. Combined trip for CTM Festival + Transmediale.
~€250-350
ctm-festival.de →
DIS 2026 Singapore SUMMER 2026
Long paper + workshop. Groise framework + TAO tropical deployment + Port of Singapore engagement. See TAO research plans.
€2,500-3,500
dis.acm.org →
HEMS Consultation JAN 2026
Consulting by Ville Hyvönen during January, culminating in Transmediale. Ville heads the Helsinki Electronic Music Studio, which has developed sonification projects with industry partners.
€500-1,000
helsinkielectronicmusicstudio.org →
Software Stack
| Component | Purpose | Cost |
|---|---|---|
| Ableton Live Suite | Sonification engine (already licensed) | €0 |
| Python + OSC/MIDI | Data processing, telemetry bridge | €0 |
| Intech Grid Editor + Lua | OAv2 controller firmware | €0 |
| Norns/SuperCollider | OAv3 standalone engine | €0 |
Funding Strategy
Please get in touch with enrique.encinas@aho.no if you can contribute to this project.