Sound Communication
AI-Powered Sentiment Analysis, Meaning-Based Multimodal Interaction, Multiplayer, Experimental Communication (2024)

This experiment in sound communication transforms text into sound and visuals using AI-powered sentiment analysis. It questions our verbal exchanges, in which the meaning of a word can be interpreted differently by each individual. Inspired by the emotional sympathy we feel from wordless music, the project attempts communication through sound instead of conventional language. Participants type messages, which a sentiment analysis model processes to categorize the emotions behind the words. The spectrum of emotions is then classified and translated into corresponding musical tones, with each word's length matched to the length of its tone; the rhythm emerges from each prompter's typing. The aim is to create emotional connection and empathy between communicators. The project explores ways to communicate without expressing word meaning directly.
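A minimal sketch of this mapping, written for p5.js with the p5.sound library, is shown below. The sentiment score picks the pitch and the word's length sets the tone's duration; scoreSentiment is a hypothetical stand-in for the project's actual sentiment model, not its real API.

// Minimal p5.js sketch of the text-to-tone mapping described above.
// Assumes p5.js and p5.sound are loaded.

let osc;

function setup() {
  createCanvas(400, 200);
  osc = new p5.Oscillator(440, 'sine');
  osc.start();
  osc.amp(0); // silent until a word is played
  // Note: browsers require a user gesture before audio starts,
  // e.g., call userStartAudio() from mousePressed().
}

// Hypothetical stand-in for the real sentiment model;
// assumed to return a score in [0, 1], 0 = negative, 1 = positive.
function scoreSentiment(word) {
  return 0.5; // placeholder: replace with the model's prediction
}

// Map one word to a tone: sentiment picks the pitch,
// word length sets the tone's duration.
function playWord(word) {
  const score = scoreSentiment(word);
  const freq = map(score, 0, 1, 220, 880); // negative -> low, positive -> high
  const duration = constrain(word.length * 0.1, 0.2, 1.5); // seconds

  osc.freq(freq);
  osc.amp(0.5, 0.05);        // quick fade in
  osc.amp(0, 0.1, duration); // fade out after `duration`
}

// e.g., call playWord(inputWord) whenever a participant submits a word.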
Central Question
If we communicate through sounds rather than words that carry meaning, can we get closer to the feelings of others? Instead of using words to understand each other, can we use sounds to empathize with one another?
Motivation
Music is a more intuitive tool: it leaves impressions on our minds without passing through a thinking process, whether or not it is intended to communicate. This led to the idea of converting text, which carries meaning the moment it appears, into sound. What does it sound like?
My interest lies in the interaction among sound, visuals, and movement. As one attempt at this, in this project I use text as an instrument to play music. It takes the form of a multiplayer ensemble, overlapping the sound elements created by the players' words; the outcome can be harmonic or noisy. If a machine learning model that can cluster the moods of words, such as sentiment analysis using Transformers, is involved here, additional ideas can unfold.
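As a sketch of the multiplayer layer, a Node.js relay using Socket.io could look like the following. The 'word' event name and payload shape are illustrative assumptions, not the project's documented protocol.

// Minimal Node.js + Socket.io relay: each player's typed words are
// broadcast so every client can layer the resulting tones.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.use(express.static('public')); // serve the p5.js client

io.on('connection', (socket) => {
  // Relay each typed word (with its sentiment score) to the other
  // players, so every client sounds the tone and the ensemble overlaps.
  socket.on('word', (msg) => {
    socket.broadcast.emit('word', msg); // assumed payload: { text, score }
  });
});

server.listen(3000, () => console.log('listening on :3000'));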




︎ Role
- UI design, Experience design, Interaction design, Development.
- Content creator: Developed AI-driven text-to-sound interaction.
︎ Tools & Methods
p5.js, CSS, HTML, sentiment analysis model, Node.js, Socket.io, Glitch.
︎ Exhibition
DEC 2023 - ITP Winter Show, Brooklyn, New York, USA
︎ Credits
Edward Zhou
︎ Advisor
Daniel T. Shiffman