Our Research
The Center of Interdisciplinary Research in Music and Mind
Our research aims to uncover new paradigms that augment creative expression by understanding the common generative mechanisms underlying both music and mind. We examine these mechanisms in three closely interrelated contexts: creation, experience, and engagement. We believe that creation and experience cannot exist without each other, and that engagement plays an important role in both. We view creation as an encoding process that takes place in the context of the composer's predicted experience; we view the generation of a listener's experience as a creative decoding process that unfolds during a performance, shaped by the listener's own predictions. In this sense, creation and experience may also be examined through the lens of Shannon's Mathematical Theory of Communication (1948), with the strong caveat that information is objective whereas musical experience is highly subjective.
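One way to make the analogy concrete, purely as an illustration of our own (the symbols and the mutual-information reading below are our gloss, not drawn from the Center's materials), is to map the musical chain onto Shannon's source, channel, and destination:

```latex
% Illustrative mapping onto Shannon's communication model.
% M, S, E are our own shorthand for intent, score/performance, and experience.
\[
  \underbrace{M}_{\text{composer's intent}}
  \xrightarrow{\;\text{encode (creation)}\;}
  \underbrace{S}_{\text{score / performance}}
  \xrightarrow{\;\text{decode (listening)}\;}
  \underbrace{E}_{\text{listener's experience}}
\]
\[
  I(M;E) \;=\; H(M) - H(M \mid E)
\]
```

Shannon's framework quantifies how much of M is recoverable from E, but it says nothing about the subjective quality of E itself, which is exactly the caveat noted above.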
Broadly speaking, we approach the problem from two distinct directions: music and mind. Our explorations span many domains, including therapy and education. Our research is not only interdisciplinary but also collaborative: collaboration enables each domain to learn from the others and, in turn, contribute back more effectively.
Music
We want to build upon current music theory, which is robust though essentially descriptive. Our goal is to create new models that are dynamic and predictive. In particular, we want to understand music in terms of primitive building blocks with distinct generative potentials (forces), how those building blocks relate to experience, and, from there, their role in generating unique deep and surface structures. Moreover, we want to understand the influence of culture, genre, and individuality (a composer's thumbprint) on the generative process. Through this we hope to develop a more general model with pluggable sub-modules that account for such generative differences.
To this end, fields such as machine learning, deep learning, and music information retrieval (MIR) show promise. Yet reversing deep networks to generate music, an approach at the core of much recent AI, has not yet produced the musical results many are hoping for. We will use deep learning with two fundamental additions: to each musical input we will also add (1) its structure and (2) the features that describe a listener's experience over time. Structure can mostly be extracted automatically; adding features that describe the experience will be more challenging.
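A minimal sketch of what such an augmented training example might look like follows. This is not the Center's actual pipeline: the field names, the one-hot encoding of structure, and the choice of chroma and valence/arousal features are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class AnnotatedExample:
    music: np.ndarray        # (T, F) frame-level musical features, e.g. chroma
    structure: List[str]     # one structural label per frame, e.g. "A", "B"
    experience: np.ndarray   # (T, E) listener-experience features over time

    def to_model_input(self, vocab: Dict[str, int]) -> np.ndarray:
        """Concatenate music, one-hot structure, and experience per frame."""
        frames = self.music.shape[0]
        one_hot = np.zeros((frames, len(vocab)))
        for t, label in enumerate(self.structure):
            one_hot[t, vocab[label]] = 1.0
        return np.concatenate([self.music, one_hot, self.experience], axis=1)


# Illustrative usage with random placeholder data (3 frames).
vocab = {"A": 0, "B": 1}
example = AnnotatedExample(
    music=np.random.rand(3, 12),       # e.g. 12-dimensional chroma per frame
    structure=["A", "A", "B"],
    experience=np.random.rand(3, 2),   # e.g. continuous valence/arousal ratings
)
print(example.to_model_input(vocab).shape)   # (3, 16)
```

The point of the sketch is simply that structure and experience enter as additional per-frame inputs alongside the musical surface, rather than being learned implicitly.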
We also hope to use such technologies to uncover underlying compositional and generative principles in a form that humans can clearly understand.
Mind
Exploring the mind is more challenging. Empirical studies of the mind's generative mechanisms are in their infancy. Moreover, since we hypothesize that the generative mechanisms of creation and experience are related (possibly as an encoding versus a decoding process), the two need to be studied in relation to each other. The problem becomes even more complex when we try to understand what accounts for unique individual differences in creation and experience, and the role of engagement in both: the mind's mechanisms for context.
There is a significant body of work relating perception and cognition, and more recently emotion. Because this work is typically carried out in silos, however, there are no models that bring the three together, let alone in the broader context of generative mechanisms. Fields such as computational creativity and cognitive psychology do offer some high-level models of creation, but they are hard to validate, as neuroscience is only beginning to explore cause and effect between perception and behavior at the level of single neurons.
Our approach is to work collaboratively around four models: generative models of creation, generative models of experience, models of engagement in both creation and experience, and a model of context that, we hope, can interoperate with the other three. Eventually we hope to unify these models into a single theory. A purely hypothetical sketch of how such models might interoperate is given below.
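The interfaces below are our own illustrative assumptions (the class names, method signatures, and the dictionary used for context are not an existing framework); they merely show one way the four models could share context and engagement.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class ContextModel(ABC):
    """Model of context: culture, genre, individual thumbprint."""
    @abstractmethod
    def describe(self, agent_id: str) -> Dict[str, Any]: ...


class EngagementModel(ABC):
    """Model of engagement during both creation and listening."""
    @abstractmethod
    def level(self, context: Dict[str, Any], stimulus: Any) -> float: ...


class CreationModel(ABC):
    """Generative model of creation (an encoding process)."""
    @abstractmethod
    def compose(self, context: Dict[str, Any],
                engagement: EngagementModel) -> Any: ...


class ExperienceModel(ABC):
    """Generative model of experience (a decoding process)."""
    @abstractmethod
    def listen(self, piece: Any, context: Dict[str, Any],
               engagement: EngagementModel) -> Any: ...
```

In this sketch, context and engagement are shared inputs to both creation and experience, mirroring the encoding/decoding relationship described above.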
Therapy