company: -
type: space | installation | new media
role: tech lead | researcher | UX designer
team: Munakhom, Proybhun, Laura Lengua, Gabriel Venegas
2025

concept
Language is humanity’s primary tool for communication, yet it is fundamentally flawed. It operates by compressing the depth of human experience into symbols—words, sentences, and structures—that, while useful, cannot fully encapsulate the nuances of thought, emotion, or subjective reality. Slavoj Žižek captured this limitation succinctly: “What we can articulate in language is always a kind of reduction of what we experience.” The unspoken, the ineffable, and the deeply personal often remain just outside the grasp of linguistic expression, leaving gaps in understanding. These gaps can make genuine connection between individuals elusive, fostering a profound sense of isolation.
This isolation mirrors the experience of being lost in a vast desert. The desert, an empty and endless expanse, becomes a metaphor for the void that exists between us. In such an environment, the mirage of connection appears—a fleeting illusion of understanding or togetherness that dissolves upon approach. This metaphor underscores the tension between our innate desire for connection and the inherent limitations that keep us apart.
While the water reflects the effects of sound and visuals, the two stations themselves cannot directly interact. They belong to entirely separate realms, one acoustic and one visual, and their efforts to create harmony are poetic but futile. This design emphasizes the illusion of connection: the stations “speak” to the water and to each other in their own ways, but no true unity emerges. The water becomes a metaphorical mirage, capturing the beauty of attempted communication while highlighting its inherent limitations.
This installation is a meditation on the solitude of human existence. It invites viewers to reflect on their own experiences of attempting to connect with others, only to find that their most profound feelings and thoughts often remain inexpressible. The sound, light, and water evoke both the beauty of these attempts and the poignancy of their futility. Ultimately, the project serves as a reminder that, despite our longing to bridge the gaps between us, we are often left grasping at illusions, inhabiting isolated worlds that may never fully converge.

Research
Our project focuses heavily on cymatic resonance, which led us to extensive research and experimentation. Cymatics is the study of visible sound vibration, in which sound frequencies create intricate patterns in a medium such as water.
For cymatic resonance to occur effectively, the right frequencies must be used, and the material of the container plays a critical role. If the container is too thick, the vibrations won’t translate into visible patterns. Similarly, improper placement of the subwoofer or an incorrect water level can disrupt the formation of cymatic patterns.
Good lighting also plays a crucial role in making cymatic vibrations visible. We experimented with both natural and artificial light to highlight the patterns. We also tried dyeing the water black, which produced beautiful results. However, we valued the natural essence of the project and decided not to over-enhance or alter the water’s appearance, keeping the focus on the purity of the resonance itself.


As part of our research, we documented the shapes and patterns of sand created during our interactions with it. By collecting and analyzing these images, we developed the idea of transforming the texture of sand into the texture of our visualization. This visual element is then projected directly onto the surface of the water, allowing the sand-inspired textures to interact dynamically with the ripples and vibrations of the water.
Our intention was to ensure that every element in the installation is interconnected in some way. This approach aligns with our main concept, which focuses on the idea of connection—finding ways to link different elements, both visually and conceptually, to create a cohesive and immersive experience. By projecting the visuals onto water, we amplify this sense of connection, blending texture, movement, and light into a unified, resonant language.
When it came to materials and objects, we carefully selected each element to ensure they harmonized with one another. We believe that every detail in the installation, no matter how small, holds significance and should never be overlooked. These small details, in fact, are where the true beauty of the project lies.
Most importantly, we wanted our installation to complement the space where it would be displayed. With this in mind, we intentionally chose materials that aligned with the aesthetic and texture of the room, creating a seamless connection between the project and its environment.

Methodology & Process
Sonic Analysis
- Iterative Playback and Recording
Following Alvin Lucier’s method in I Am Sitting in a Room, the initial step involved recording an audio signal played in the room and re-recording it iteratively. This process emphasized the natural resonant frequencies of the space, as non-resonant frequencies gradually attenuated through repeated cycles.
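A minimal sketch of this loop in Python, using the sounddevice and soundfile libraries (the seed file, pass count, and normalization step are illustrative assumptions, not the exact session setup):

```python
import numpy as np
import sounddevice as sd
import soundfile as sf

N_PASSES = 10  # number of playback/re-record cycles (assumed)

# Seed the loop with any recording made in the room.
signal, sr = sf.read("seed_recording.wav", dtype="float32")

for i in range(N_PASSES):
    # Play the current signal through the room's speakers while
    # simultaneously re-recording the room's response.
    response = sd.playrec(signal, samplerate=sr, channels=1)
    sd.wait()
    # Normalize so the growing resonant peaks don't clip on the next pass.
    signal = response / (np.max(np.abs(response)) + 1e-9)
    sf.write(f"pass_{i + 1:02d}.wav", signal, sr)
```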
- Spectral Analysis
Once a sufficiently resonant signal was obtained through iterative recording, a spectrum analyzer was used to visualize the decibel levels at different frequencies. This analysis helped identify the dominant resonant frequencies of the room by observing peaks in the frequency response.
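This peak-picking step can be reproduced with a short script. The sketch below assumes the final pass was exported as a WAV file; the file name and prominence threshold are placeholders:

```python
import numpy as np
import soundfile as sf
from scipy.signal import find_peaks

# Load the most resonant pass from the iterative recording.
audio, sr = sf.read("pass_10.wav", dtype="float32")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # collapse to mono

# Magnitude spectrum in decibels.
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
db = 20 * np.log10(spectrum + 1e-12)

# Peaks standing well above their surroundings are candidate room
# resonances; the prominence value is a tunable assumption.
peaks, _ = find_peaks(db, prominence=20)
dominant = sorted(zip(db[peaks], freqs[peaks]), reverse=True)[:7]
for level, freq in dominant:
    print(f"{freq:8.1f} Hz  {level:6.1f} dB")
```

The top seven peaks are kept here because Holy7 provides seven tunable voices.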
- Frequency-Based Sound Synthesis
To further manipulate and explore the room’s resonance, the following digital tools were employed:
Holy7 (Max Device by Akihiko Matsumoto): This device enabled fine-tuning of seven distinct voices to match the dominant resonant frequencies identified in the spectral analysis.
Air 4.1 (Max Device by Akihiko Matsumoto): This device functioned as a trigger mechanism for the synthesized voices. It received control input from TouchDesigner, allowing for dynamic real-time modulation of the synthesized sounds based on visual or interactive parameters.
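The transport between TouchDesigner and the Max devices is not detailed above; one common pattern is OSC. The snippet below is a hypothetical illustration using the python-osc package, with an invented address pattern, port, and argument layout rather than the devices’ documented interface:

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical endpoint where the Max patch listens for control data.
client = SimpleUDPClient("127.0.0.1", 9000)

def trigger_voice(voice: int, amplitude: float) -> None:
    """Send one trigger event for a synthesized voice (illustrative only)."""
    client.send_message("/resonance/trigger", [voice, amplitude])

# Example: trigger voice 3 at 80% amplitude when a visual event fires.
trigger_voice(3, 0.8)
```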
- Results and Observations
The iterative playback process successfully emphasized the natural resonance of the room, with specific frequencies becoming more prominent over successive cycles. Spectral analysis revealed distinct frequency peaks corresponding to the room’s acoustic characteristics. Using Holy7, the resonant frequencies were precisely mapped to synthetic voices, allowing controlled reinforcement and manipulation of the room’s natural harmonics. Air 4.1, in conjunction with TouchDesigner, introduced an interactive element to the synthesis, allowing external data sources to influence the resonant voices dynamically.
- Conclusion
This method effectively combined physical and digital acoustic techniques to explore and manipulate room resonance. The iterative recording technique served as a foundation for spectral analysis, which in turn informed a controlled synthesis process. The use of Max devices (Holy7 and Air 4.1) alongside TouchDesigner provided a flexible and interactive system for real-time resonance manipulation, demonstrating the potential for further artistic and scientific applications in spatial sound design.



Physical Analysis
The physical scanning process involved 3D scanning the room to capture its geometry and spatial characteristics accurately. This method provided a digital representation of the space, which was later used for analysis and scene design. Below is a detailed breakdown of the steps involved in this process.
- Equipment and Software Used
Scanning Device: An iPhone equipped with LiDAR technology (such as iPhone Pro models) was used for 3D scanning. The built-in LiDAR sensor allows for depth sensing and precise spatial mapping.
Scanning Application: The 3D scanning app Scaniverse was used to generate a point cloud of the room.
- Scanning Procedure
Room Preparation: Before scanning, the room was cleared of obstructions that could interfere with the scan. Any reflective or transparent surfaces were noted, as they might cause distortions in the scan results.
Capturing the 3D Scan: The iPhone was moved systematically around the room to cover all surfaces, corners, and architectural features. The scanning app captured depth data in real time, generating a point cloud as the movement progressed. Special attention was given to walls, doors, windows, and furniture to ensure they were accurately recorded.
Generating the Point Cloud: Once the scan was completed, the app processed the depth data and created a point cloud model of the room. The point cloud consists of thousands (or millions) of individual points in 3D space, representing the surfaces and features of the scanned environment.
- Analyzing the Point Cloud in Rhino 3D
Dimension Analysis: Rhino’s measurement tools were used to verify room dimensions (length, width, height) and compare them to existing architectural plans. Additionally, surface irregularities, ceiling heights, and spatial relationships were analyzed.
Potential Scene Design for Installation: The point cloud data served as a reference for conceptualizing different scene design layouts for the installation. Various design possibilities, including object placement, lighting arrangements, and interactive elements, were explored within the space.
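The measurements themselves were taken with Rhino’s tools; as a rough cross-check outside Rhino, a point cloud exported from Scaniverse (for example as a PLY file, a name assumed here) can be measured with open3d in a few lines:

```python
import open3d as o3d

# Load the exported point cloud (file name and format are assumptions).
cloud = o3d.io.read_point_cloud("room_scan.ply")

# Remove stray outlier points that would inflate the bounding box.
cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# An axis-aligned bounding box approximates length, width, and height,
# assuming the scan is roughly aligned with the room's walls.
bbox = cloud.get_axis_aligned_bounding_box()
length, width, height = bbox.get_extent()
print(f"length={length:.2f} m, width={width:.2f} m, height={height:.2f} m")
```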
Visual of Sand to Sound

In this part of the project, we developed a system that translates visual data of sand patterns into sound parameters using a webcam-based image analysis approach. This method involved scanning an input image, segmenting it into a structured grid, and mapping pixel density to sound properties. The process was executed using Python scripts within TouchDesigner, enabling real-time analysis and interaction.
Process of Image Scanning and Analysis
- Capturing the Input Image
A webcam was used to continuously capture images of the sand distribution. These images served as the input for the analysis, in which the density of sand in different areas of the image was measured.
- Grid Segmentation and Cell Analysis
To analyze the sand distribution, the captured image was divided into a structured grid of cells. Each cell was scanned individually to count the pixels representing sand particles, and the cell with the highest pixel count indicated the area with the most sand accumulation. This pixel density value was then mapped to sound properties, influencing both volume and frequency, as sketched below.
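A minimal sketch of this grid scan with NumPy (the grid resolution and grayscale threshold are assumptions; the production version ran as a script inside TouchDesigner):

```python
import numpy as np

GRID_ROWS, GRID_COLS = 8, 8  # grid resolution (assumed)
SAND_THRESHOLD = 128         # grayscale cutoff for "sand" pixels (assumed)

def densest_cell(frame: np.ndarray) -> tuple[int, int, int]:
    """Return (row, col, count) of the grid cell with the most sand pixels.

    frame: 2-D grayscale image captured from the webcam.
    """
    h, w = frame.shape
    ch, cw = h // GRID_ROWS, w // GRID_COLS
    best = (0, 0, -1)
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            cell = frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            count = int(np.count_nonzero(cell > SAND_THRESHOLD))
            if count > best[2]:
                best = (r, c, count)
    return best
```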
- Mapping Sand Distribution to Sound Parameters
The spatial position of the detected sand cluster within the grid was used to define the sound characteristics:
Row Position → Volume of Subwoofer: The vertical position of the densest cell determined the amplitude (volume) of the subwoofer. Higher row numbers resulted in higher volume levels.
Column Position → Frequency of Sound: The horizontal position of the densest cell defined the frequency of the sound wave. Different column positions corresponded to different frequency values, creating an interactive sonification of the sand pattern.
This mapping created a direct relationship between the physical distribution of the sand and the auditory output, allowing real-time interaction between visual patterns and sound modulation.
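Assuming linear mappings (the actual ranges and curves are not documented here), the grid position translates to control values roughly as follows:

```python
def map_to_sound(row: int, col: int, rows: int = 8, cols: int = 8,
                 min_freq: float = 30.0, max_freq: float = 120.0):
    """Map the densest cell's grid position to (volume, frequency).

    Volume is normalized 0..1 from the row; frequency spans an assumed
    subwoofer-friendly band from the column.
    """
    volume = row / max(rows - 1, 1)
    frequency = min_freq + (col / max(cols - 1, 1)) * (max_freq - min_freq)
    return volume, frequency

# Example: the densest cell at row 6, column 2 of an 8x8 grid.
vol, freq = map_to_sound(6, 2)
print(f"volume={vol:.2f}, frequency={freq:.1f} Hz")
```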
- Implementation in Python and TouchDesigner
Python handled the image processing and data extraction, while TouchDesigner ran the real-time visual and audio processing pipeline.
Python: handled grid segmentation, pixel counting, and the mapping of values to sound parameters.
TouchDesigner: received the computed data and translated it into real-time sound generation and visual feedback.
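Inside TouchDesigner, the handoff can be as small as a callback writing the two computed values into a Constant CHOP that drives the audio chain; the operator name below is an assumption, not the project’s actual network:

```python
# Runs inside TouchDesigner (e.g. from an Execute DAT callback).
# 'sound_params' is an assumed Constant CHOP feeding the audio output.
def push_sound_params(volume: float, frequency: float) -> None:
    params = op('sound_params')
    params.par.value0 = volume     # subwoofer amplitude, 0..1
    params.par.value1 = frequency  # oscillator frequency in Hz
```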

Prototyping





Installation










