Sound and Music Computing

Summer School

The goal of this summer school is to promote interdisciplinary education and research in the field of Sound and Music Computing. The School is aimed at graduate students working on their Master or PhD thesis, but it is open to any person carrying out research in this field.

While the first two editions of the Summer School (Genova 2005 and Barcelona 2006) were sponsored and organised by the S2S² EU project, the objective is to establish a self-sustaining yearly summer school in Sound and Music Computing.

Genova 2005

Organized by the 6th Framework Programme IST FET Open Coordination Action S2S² Project

July 25 - 29, 2005

InfoMus Lab - DIST - University of Genova
Genova, Italy

Schedule
Monday 25: Control Session

9:00 - 9:15
Welcome Address
09:15 - 10:45
Presentations
Gesture in interaction: expressive control strategies, G. Volpe (DIST), R. Bresin (KTH)
The interactive book, D.Rocchesso, A. De Gotzen (VIPS)
10:45 - 11:00
Coffee break
11:00 - 13:00
Invited speakers:
New trends in Dynamic Instrumental arts: Enactive design, A. Luciani (INPG, Grenoble, France)
Tangible Acoustic Interfaces for Computer-Human Interaction, (the TAI-CHI Consortium)
13:00 - 14:00
Lunch
14:00 - 17:00
Workshop: One to many or Many to one: Mapping strategies for the future
Presentation, R. Bresin (KTH)
Demonstration: Home conducting, A. Friberg (KTH)
Opponent, E. Bigand (LEAD)
Discussion
Tuesday 26: Music Session

09:00 - 10:45
Presentations
Making Sense of Sound and Music: An artificial intelligence View, G. Widmer (ÖFAI)
Sound and Sense: a historical and philosophical viewpoint, M. Leman (IPEM)
Musical Creation and Technological Innovation, N. Bernardini (MIU-FT)
Musical Learning and new technologies, E. Bigand (LEAD)
10:45 - 11:00
Coffee Break
11:00 - 13:00
Invited speakers
The Music Access Problem, F. Pachet (Sony CSL)
Music and research in the 21st century, T. Myatt (University of York)
13:00 - 14:00
Lunch
14:00 - 17:00
Workshop: Creativity and Innovative Technology
Presentation, M. Leman (IPEM)
Focus on Music similarity in:
Interactive Systems, F. Pachet (Sony CSL)
Music Information Retrieval, G. Widmer (ÖFAI)
Music Composition, T. Myatt (University of York)
Opponent, N. Bernardini
Discussion
Wednesday 27: Audio Session 1

09:00 - 10:45
Presentations
Content-based Audio Processing, X. Serra (UPF)
Physics-based Sound Synthesis, C. Erkut (HUT)
Sound Design and Auditory Displays, P. Polotti (VIPS)
Auditory Perception/Cognition: Cochlea to Cortex, A. de Cheveigné (ENS)
Interactive sound, F. Avanzini (DEI)
10:45 - 11:00
Coffee Break
11:00 - 13:00
Invited speakers
Perception and recognition of sounding objects, S. McAdams (McGill University)
Audio Engineering in Music Information Retrieval, M. Sandler (Queen Mary, University of London)
13:00 - 14:00
Lunch
14:00 - 17:00
Workshop: Application of auditory models in audio DSP
Presentation, A. de Cheveigné (ENS)
Demonstration: Real-time auditory processing based on auditory models, D. Pressnitzer, D. Gnansia (ENS)
Opponent, G. De Poli (DEI)
Discussion
Thursday 28: Audio Session 2 and FET session

09:00 - 12:00
Workshop: Physics-based sound synthesis of plucked string instruments
Presentation, V. Valimaki (TKK)
Demonstration, H. Penttinen (TKK)
Opponent, D. Rocchesso (VIPS)
Discussion
12:00 - 13:00
General discussion/informal demos
13:00 - 14:00
Lunch
14:00 - 17:00
Future and Emerging Technology session (invited speakers)
Intentional attunement: neural mechanisms of intersubjectivity, V. Gallese (Università di Parma)
Architecture of Dissonance, R. Pierantoni
Vision-graphics convergence techniques for immersive videoconferencing, E.Trucco (Heriot-Watt University)
Friday 29: Roadmap session

09:00 - 12:00
Toward a Research Roadmap
Keynote speaker David Vernon (CAPTEC Ltd., EC Vision Network)
S2S²: report on the first year activities, N. Bernardini, D. Cirotteau (MIU-FT)
Merging and future collaborations
Critical evaluation of the summer school (UPF)
Steering Committee
Nicola Bernardini and Damien Cirotteau, Media Innovation Unit, Firenze Tecnologia, Firenze, Italy, Coordinator of the S2S² project
Roberto Bresin and Anders Friberg, KTH-Kungl Tekniska Högskolan, Stockholm, Sweden
Giovanni De Poli and Federico Avanzini, CSC-DEI, University of Padova, Padova, Italy
Davide Rocchesso and Pietro Polotti, DI-VIPS, University of Verona, Verona, Italy
Antonio Camurri and Gualtiero Volpe, DIST-InfoMus Lab, Università degli Studi di Genova, Genova, Italy
Vesa Valimaki and Cumhur Erkut, Helsinki University of Technology - Laboratory of Acoustics and Audio Signal Processing, Espoo, Finland
Alain de Cheveigné, PECA-DEC, Ecole Normale Supérieure, Paris, France
Marc Leman, IPEM, Ghent University, Ghent, Belgium
Emmanuel Bigand, LEAD, Université de Dijon, Dijon, France
Xavier Serra and Xavier Amatriain, Universitat Pompeu Fabra - Music Technology Group, Barcelona, Spain
Gerhard Widmer, ÖFAI, Austrian Research Institute for Artificial Intelligence, Vienna, Austria
Local Organizing Committee
Antonio Camurri
Ginevra Castellano
Roberto Chiarvetto
Barbara Mazzarino
Francesca Sivori
Ilaria Vallone
Gualtiero Volpe
Registration
People interested in attending the 1st S2S² Summer School are required to register by sending an e-mail to info-summerschool@s2s2.org

The e-mail should include the following information:

Title
First names
Family name
Organisation/Department
Street/PO Box
Postal Code
City
Country
Other contact information (fax, telephone), if available.
Web page, if available

The email should also include a short curriculum highlighting research interests and the reasons for participation. The steering committee will select participants based on this information.

Note that participants are asked to actively contribute to the preparation of the school, for example by sharing material using the Internet infrastructure of S2S2.

Registration must be performed on or before May 1st, 2005. The Steering Committee decisions on confirmation of registration will be made on or before May 15th, 2005.

Registration Fees

Participants from the S2S² partners and from other participating EU projects (EU IST TAI-CHI, ENACTIVE, HUMAINE and ConGAS)

Free
External participants 200 Euro
Dates
Registration June 25, 2005
First Confirmation of registration June 15, 2005
Second Confirmation of registration June 30, 2005
S2S2 Summer School July 25-29, 2005

Barcelona 2006

Tue, 2006-03-28 10:59 — xserra
Summer School in
Sound and Music Computing
Pompeu Fabra University
Barcelona, Spain
July 24-28, 2006
This Summer School is organized by the S2S² project and the Music Technology Group of the Pompeu Fabra University in Barcelona, with the goal of promoting interdisciplinary education and research in the field of Sound and Music Computing. The School is aimed at graduate students working on their Master or PhD thesis, but it is open to any person carrying out research in this field.

This is the second Summer School organized by S2S²; the first took place in Genova (2005 Summer School).


Teachers
Roberto Bresin (Royal Institute of Technology, Stockholm)
Nicola Bernardini (Conservatory of Padova)
Antonio Camurri (University of Genova)
Alain De Cheveigné (Ecole Normale Supérieure, Paris)
Henkjan Honing (University of Amsterdam)
Marc Leman (University of Ghent)
Xavier Serra (Pompeu Fabra University, Barcelona)
Giovanni De Poli (University of Padova)
Davide Rocchesso (University of Verona)
Vesa Valimaki (Helsinki University of Technology)
Bill Verplank (Stanford University)
Gerhard Widmer (Johannes Kepler University Linz)
Invited Experts
Jyri Huopaniemi (Nokia Research Center, Helsinki)

Leigh Landy (Music, Technology and Innovation Research Centre, De Montfort University, Leicester)

Fabien Levy (Columbia University, New York)

Pierre Louis Xech (Microsoft Research, Cambridge)
Academic Program
6 hours of lectures by Bill Verplank on Interface Design and 6 hours of lectures by Henkjan Honing on Music Cognition.
9 hours of presentations by the participating students and discussions on their research work.
20 hours of presentations and discussions related to the S2S² Sound and Music Computing Roadmap.
The lectures are designed to be of interest to any graduate student or researcher in the field of Sound and Music Computing. The topics chosen for this year, Interface Design and Music Cognition, are relevant to our research fields and have their own particular methodologies and research strategies. The lectures will present these methodologies and their application to music-related problems.

All the participating students will give short presentations on their current research. Emphasis will be given to methodological and context issues: each presentation should highlight the methodological approach chosen and the scientific, technological and industrial context of the research. The discussions will give feedback to the students that should be useful for the continuation of their research.

The main topic of the summer school will be the Roadmap on Sound and Music Computing that is being written as part of the S2S² project. There will be special lectures by invited experts and discussions on two major parts of the Roadmap: the industrial and the cultural contexts of the field. In particular, the focus will be on academic research and its relationship with both industrial exploitation and contemporary music production. The resulting discussions will contribute to the roadmap.

Preliminary program schedule:

Monday 24th - Friday 28th

9:00
Monday: Music Cognition, Henkjan Honing
Tuesday: Interface Design, Bill Verplank
Wednesday: Music Cognition, Henkjan Honing
Thursday: Workshop: Social and cultural context for Sound and Music Computing (Moderator: Marc Leman)
Friday: Workshop: Industrial context for Sound and Music Computing (Moderator: Xavier Serra)

11:00
Coffee break

11:15
Monday: Interface Design, Bill Verplank
Tuesday: Music Cognition, Henkjan Honing
Wednesday: Interface Design, Bill Verplank
Thursday: Workshop (continued): Social and cultural context for Sound and Music Computing
Friday: Workshop (continued): Industrial context for Sound and Music Computing

13:00
Lunch

14:00
Monday: Scientific context of research (Moderator: Alain de Cheveigné); presentations by students
Tuesday: Social context of research (Moderator: Nicola Bernardini); presentations by students
Wednesday: Industrial context of research (Moderator: Vesa Valimaki); presentations by students
Thursday: Workshop (continued): Social and cultural context for Sound and Music Computing
Friday: Critical evaluation and discussion about the summer school (Moderator: Roberto Bresin)

15:45
Coffee break

16:00
Discussion
Workshop: Towards a shared and modular curriculum on SMC (Moderator: Giovanni de Poli)
Visit to the Music Technology Group, Universitat Pompeu Fabra

17:00
PhD defense

21:00
Banquet
Concert
Student presentations (15 minutes each):

Scientific context: Monday 24th of July

“Evolving populations of computational models and applications to expressive music performance” - Amaury Hazan
In the context of expressive performance modeling, we aim to induce expressive performance models using a performance database extracted from a set of acoustical recordings. We propose a new approach called Evolutionary Population of Generative Models (EPGM), based on Evolutionary Computation (EC). We present a first instantiation of EPGM based on Strongly Typed Genetic Programming (STGP), in which the evolved programs are constrained to have the structure of Regression Trees. We show this approach is more flexible than well-established machine learning approaches because (i) it evolves a population of models which may produce different predictions, (ii) it enables the use of custom data types at different levels (primitive inputs and outputs, prediction type), and (iii) it enables the use of elaborate and possibly domain-specific accuracy measurements. We illustrate the latter point by presenting a fitness function based on melodic similarity, fit to human judgement through a listening experiment. Finally, we show this approach can be applied to high-level transformations (e.g. mood) and present some future EPGM extensions.
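The core evolutionary loop the abstract describes (a population of tree-structured regression models improved by variation and selection) can be illustrated with a toy sketch. This is not the author's STGP system: the data, the depth-1 tree representation, and all parameters below are invented for illustration.

```python
import random

# Invented toy data: one note feature -> a timing deviation, with a
# step at x = 0.5 that a single split can capture.
DATA = [(x / 10.0, 0.5 if x < 5 else -0.3) for x in range(10)]

def predict(tree, x):
    """A depth-1 'regression tree': one split, one constant per branch."""
    threshold, left, right = tree
    return left if x < threshold else right

def mse(tree):
    """Fitness: mean squared error over the toy performance data."""
    return sum((predict(tree, x) - y) ** 2 for x, y in DATA) / len(DATA)

def mutate(tree, rng):
    """Gaussian perturbation of every node in the tree."""
    return tuple(v + rng.gauss(0, 0.1) for v in tree)

def evolve(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [(rng.random(), rng.gauss(0, 1), rng.gauss(0, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        survivors = pop[: pop_size // 2]          # truncation selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in survivors]    # elitist replacement
    return min(pop, key=mse)

best = evolve()
```

The real system evolves whole programs under type constraints and can swap `mse` for a domain-specific measure such as melodic similarity; the sketch only shows the population/selection mechanics.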
“Depth perception” - Delphine Devallez
The present research work deals with depth perception: how to render sound sources spatially separated in distance and give a sense of perspective. Over the past decades, the majority of research on spatial sound reproduction has concentrated on directional localization, resulting in increasingly sophisticated virtual audio display systems capable of providing very accurate information about the direction of a virtual sound source. However, it is clear that full 3-dimensional rendering also requires an understanding of how to reproduce sound source distance. In recent years, researchers in psychology, neuroscience and computer engineering have shown interest in this third dimension, which could further enlarge the bandwidth of interaction in multimodal displays and lead to newly designed interfaces. Moreover, since display technology is already able to produce visual depth, it seems natural to enrich the sounds of objects and events with information about their relative distance to the user. From a technological point of view, the auditory-visual interactions resulting from this multimodal presentation of information should then be taken into account and further investigated scientifically, since they are still poorly understood, in particular with regard to depth perception.
“Gesture based instrument synthesis” - Alfonso Pérez
Synthesis of traditional music instruments has been an active research area, and there exist successful implementations of instruments with a low degree of control, such as instruments with non-sustained excitation. But for instruments with sustained excitation, such as bowed strings or wind instruments, where the interaction between instrument and performer is continuous, the quality of existing models is far from realistic. In general, musical instrument synthesis techniques try to model the instrument but neglect the interaction between performer and instrument, which is musically much more relevant than the instrument itself. This interaction covers expressivity, the intentional nuances and gestures made by the performer, but also what we call naturalness, that is, non-intentional gestures made by the performer due to the physical constraints of the instrument, the playing technique, etc. These non-intentional gestures give a specific flavor to the sound of the performance that makes it sound natural and realistic. We can roughly classify the existing synthesis techniques into two categories: physical models, which focus on the physical phenomena of sound production, and spectral models, which focus on sound perception. With physical models, naturalness and expressivity can hardly be reached without controlling a huge number of parameters, which requires the instrument itself as well as a mastery comparable to that of a traditional performer; spectral models lack performer interaction and articulation, that is, gestures. The aim of this work is to improve the quality of instrument sound synthesis, specifically for the violin family. We propose a hybrid between spectral and physical models to take advantage of the characteristics of both approaches, focusing on the gestures of the performer with the objective of providing naturalness in the synthesis.
“Expressive gesture and music: analysis of emotional behavior in music performances” - Ginevra Castellano
I present some examples of analysis of music performances aimed at investigating the role of expressive gesture in music, with a special focus on recognition of emotions. I performed an experiment in which two musicians, a pianist and a cello player, played an excerpt from the Sonata no. 4 op. 102/1 for piano and cello by L. van Beethoven in different emotional conditions. I show how to extract expressive movement features from music performance and present preliminary results from the analysis of such data. The experiment has been carried out in collaboration with GERG. Feature extraction is performed in real time by the new EyesWeb 4 open platform (available at www.eyesweb.org).
“Mapping from perception to sound generation” - Sylvain Legroux
“The role of audiofeedback to improve motor performance of subjects” - Giovanna Varni
Social & Cultural context: Tuesday 25th of July

“Musical interfaces accessible to novices” - James McDermott
Our research focuses on one sub-task of musical composition: setting synthesizer parameters. We use interactive Evolutionary Computation (iEC) to aid inexperienced users in controlling synthesizers: it allows an iterative design process in which the user's main task is judging results rather than constructing solutions. We discuss potential advances in iEC, including a new interface component and a method of supplementing it with non-interactive EC. We also present results on non-interactive EC performance. We discuss the possibilities of applying the same approach to other sub-tasks of composition; and finally we imagine the implications of using the iEC approach to remove the constraints of skill and prior knowledge from the composition process, so that it becomes purely a matter of taste.
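As a rough sketch of the interactive-EC loop described above, the fragment below evolves a small population of synthesizer-parameter vectors; the human listener's judgement is replaced by a stand-in rating function so the example runs unattended. All names, sizes and the rating function are invented for illustration, not taken from the actual research.

```python
import random

def rate(params, target):
    """Stand-in for the human listener: closer to the desired sound is better.
    In real iEC this score would come from the user judging audio."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def iec_search(n_params=4, pop_size=8, rounds=40, seed=0):
    rng = random.Random(seed)
    target = [rng.random() for _ in range(n_params)]  # the sound the user "wants"
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(rounds):
        # The "user" ranks a deliberately small population,
        # so judging by ear stays feasible.
        pop.sort(key=lambda p: rate(p, target), reverse=True)
        elite = pop[:2]
        # Next generation: keep the preferred patches, mutate them slightly.
        pop = elite + [[v + rng.gauss(0, 0.05) for v in rng.choice(elite)]
                       for _ in range(pop_size - 2)]
    return max(pop, key=lambda p: rate(p, target)), target

best, target = iec_search()
```

The design point iEC makes is visible here: the only hard step left to the user is ranking a handful of candidates per round, which requires no knowledge of what the parameters mean.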
“Voice analysis for singing education” - Oscar Mayor
Current research on tools for singing education consists mainly of real-time tools with visual feedback giving information about the tuning and tempo of the singing performance and about voice quality characteristics, referring to the timbre and formants of the singer's voice. These tools mainly use real-time visualization of the pitch curve against time and of the short-term spectrum or spectrogram, giving instantaneous visual feedback to the performer. In this talk a system for evaluating singing performances is presented, in which the performance is analyzed using a MIDI score as reference and a visual expressive transcription of the performance is produced as a result. The expression transcription consists of the notes in the MIDI score aligned to the user performance, with each note segmented into sub-regions (attack, sustain, release, transition, vibrato). Each region is labeled with the kind of expression detected by the system following a set of heuristic rules based on analysis descriptors. The expression labels assigned to each sub-region are based on a previous expression categorization done manually on a large set of singing performances in order to distinguish between common resources used by singers in pop-rock music. Some analysis descriptors can also be visualized simultaneously by the performer to provide rich visual feedback on the performance.
“Visual feedback in learning to perform music” - Alex Brandmeyer
The use of visual feedback to aid musicians in improving their performances has recently been researched using different visual representations and analysis techniques. We recently conducted an experiment in which percussion students imitated different patterns recorded by a teacher, with and without the use of visual feedback. In the experiment we used a real drum kit with contact microphones attached to record data about the timing and dynamics of the performances. We provided two different forms of visual feedback, as well as a control condition with no visual feedback, to test the effects of visual feedback and the type of visual representation on performance accuracy. The first form of feedback, analytic, utilized a scrolling display similar to a musical score, while the second, holistic, presented a changing shape drawn using probabilities generated by a real-time statistical analysis of the incoming notes. Qualitative feedback from the subjects indicated that the visual feedback was found to be useful. We are currently doing further analysis of the collected data to see if the visual feedback improved performance, and if so, in what ways.
“The rigid boundaries of musical genres” - Enric Guaus
One of the most active areas in Music Information Retrieval is that of building automatic genre classification systems. Most of these systems can achieve good results (80% of correct decisions) when the number of genres to be classified is small (i.e. less than 10). They usually rely on timbre and rhythmic features that cover neither the whole range of musical facets nor the whole range of conceptual abstraction that humans seem to use when performing this task. The aim of our work is to improve our knowledge about the importance of different musical facets and features in genre decisions. We present a series of listening experiments in which audio has been altered in order to preserve some properties of the music (rhythm, timbre, harmonics…) while degrading others. The pilot experiment we report here used 42 excerpts of modified audio (representing 9 musical genres). Listeners, who had different musical backgrounds, had to identify the genre of each of the excerpts.
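Automatic systems of the kind criticized above typically reduce each excerpt to a small feature vector and pick the nearest genre model. A minimal nearest-centroid sketch makes the shape of that decision procedure concrete; the two features and the genre data are invented for illustration.

```python
# Invented training data: (timbre descriptor, rhythm descriptor) per excerpt.
TRAIN = {
    "classical": [(0.2, 0.1), (0.25, 0.15), (0.3, 0.2)],
    "electronic": [(0.8, 0.9), (0.85, 0.8), (0.9, 0.95)],
}

def centroid(points):
    """Mean feature vector of one genre's excerpts."""
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {genre: centroid(pts) for genre, pts in TRAIN.items()}

def classify(features):
    """Assign the genre whose centroid is closest (squared Euclidean)."""
    return min(CENTROIDS,
               key=lambda g: sum((f - c) ** 2
                                 for f, c in zip(features, CENTROIDS[g])))
```

A real system would extract such descriptors from audio and use far more features and classes; the abstract's point is precisely that this low-level feature space misses much of what human listeners use.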
“Programming for the Masses - Computer Music Systems as Visual Programming Languages” - Guenter Geiger
“Intonation and expression: a study and model of choral intonation practices” - Johanna Devaney
The modeling of choral intonation practices, much like those of non-fretted string ensembles, presents a unique challenge because at any given point in a piece a choir's tuning cannot be consistently related to a single reference point; rather, a combination of horizontal and vertical musical factors forms the reference point for the tuning. The proposed methodology addresses this conflict through a combination of theoretical and technological approaches. In the theoretical approach, the vertical tendencies are addressed in relation to the harmonic series and theories of sensory consonance, while the horizontal tendencies are examined in terms of recent theories of tonal tension and attraction. The technological, or computational, approach uses statistical machine learning techniques to build a model of choral intonation practices from the microtonal pitch variations between recorded choral performances. The observed horizontal intonation practices may then be examined as expressive phenomena by taking the horizontal tendencies inferred from the tension models as a norm and viewing musically appropriate deviations from this norm as expressive. Thus horizontal intonation practices may be related not only to musical expectation but also to musical meaning or emotion, as it relates to performance.
“Object Design for Tangible Musical Interfaces”- Martin Kaltenbrunner
This research focuses on the design of passive tactile features for tangible user interface components and their relation to arbitrarily assigned acoustic descriptions. Tactile dimensions such as surface structure, temperature, weight, global shape and size allow the classification of passive tangibles into generic object classes and specific object instances. Within the context of the reacTable, a modular electro-acoustic synthesizer with a tangible user interface, these tactile features can be used to encode the various synthesizer components in the haptic domain, allowing easy object identification with a simple grasp or hand enclosure. The acoustic properties of the synthesizer components will be defined with adjectives describing the perceptive quality of the resulting sound. The current design of the reacTable tangibles defines a series of acrylic objects in different geometric shapes with attached colour or symbol codes, which proved to be problematic in a dark concert environment as well as for sight-disabled users. A user study shall clarify whether the assigned object descriptions and the chosen hypothetical mappings between the tactile perception and sonic behaviour of a chosen synthesis component are valid, and should eventually lead to an improved design of the tangibles for the instrument.
Industrial context: Wednesday 26th of July

“Free software and music computing” - Pau Arumí
“Toys and video games” - John Arroyo
“Scratching and DJs” - Kjetil Falkenberg Hansen
“Leisure and voice control” - Jordi Janer
The role of Sound and Music Computing in industry has evolved over the last decades around three typical targets: studio equipment, musical instruments and home entertainment. While studio equipment and musical instruments have already massively incorporated SMC technologies, home entertainment systems will presumably be our main target for the coming years. It is in this context that we can use the term "leisure", which can be applied to a convergence of home media centers and game consoles.
This presentation addresses voice control as a way to transmit musical information to a musical system. The main application of voice control is instrument synthesizers, useful for instance in karaoke devices. Nevertheless, the research outcome can also be applied to control conducting or visualization systems. This research consists of two parts: voice gesture description and the definition of adequate mapping strategies. Studying instrument imitation, we can define a voice gesture as a sequence of consonant-vowel phonemes. Phonetic segmentation and classification into broad phonetic classes are being developed. In addition, slow-varying perceptual envelopes are added to the voice gesture. Summarizing, a voice gesture is described by context descriptors and continuous envelopes. Mapping these voice gestures to instrument control will depend on the instrument and the technique employed. Here, instead of constraining voice description to MIDI messages, we propose a more adequate mapping for signal-driven synthesis that can be either knowledge-based or based on machine learning. The talk will conclude by looking at current commercial systems and potential use-cases of voice control in a leisure context.
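One of the "slow-varying perceptual envelopes" mentioned above can be pictured as a smoothed per-frame loudness curve mapped continuously onto a synthesis parameter, rather than quantized into MIDI events. The one-pole smoother and the cutoff range below are illustrative assumptions, not the actual system.

```python
def smooth(frames, alpha=0.3):
    """One-pole lowpass over per-frame loudness values (0..1):
    keeps the slow-varying envelope, discards frame-rate jitter."""
    out, prev = [], frames[0]
    for v in frames:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def envelope_to_cutoff(envelope, lo=200.0, hi=4000.0):
    """Linearly map a 0..1 envelope onto a filter cutoff range in Hz
    (a signal-driven alternative to discrete MIDI messages)."""
    return [lo + e * (hi - lo) for e in envelope]

# A crescendo sung into the microphone, as per-frame loudness:
loudness = [0.0, 0.2, 0.5, 0.9, 1.0]
cutoff = envelope_to_cutoff(smooth(loudness))
```

The same pattern applies to any continuous voice descriptor (pitch, brightness) driving any continuous synthesis control; the mapping function is where instrument-specific knowledge or learned models would go.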
“Music recommendation systems” - Marco Tiemann & Oscar Celma
“Audio melody extraction: the importance of high level features in music information retrieval applications” - Karin Dressler
Workshop: Social and cultural context for Sound and Music Computing
Morning:
S2S² presentation - Nicola Bernardini
“Social context for Sound and Music Computing”- Marc Leman
“Is a Science without Conscience a support for music?” - Fabien Levy
I will first show how both composition and science are related to the more general problem of their representations, the former playing with signs to build new music-worlds (cf. the couple graphemology/grammatology in semiotics), and the latter being deeply united with its representations (cf. Derrida). Without a keen awareness of the episteme implied by those representations, composing and doing musical science are "but the ruin of the soul", to parody Rabelais. To exemplify my position, I will then try to "deconstruct" different scientific models dealing with the controversial notion of "musical consonance" (historical musicology, acoustics and psychoacoustics).
“Investigating a Sound-based Paradigm and its Social Implications” - Leigh Landy
Afternoon:
Panel: Social and Cultural Context for Sound and Music Computing: Does technology drive music or vice versa?
Nicola Bernardini, Marc Leman, Fabien Levy, Leigh Landy, Davide Rocchesso, Roberto Bresin
Workshop: Industrial context for Sound and Music Computing
“Initial ideas for the Industrial Context of the S2S² roadmap” - Xavier Serra
“Sound, Music and Mobility - Key Challenges for Future Research” - Dr. Jyri Huopaniemi, Head of Strategic Research, Nokia Research Center
In this presentation, I will give an overview of relevant research challenges for sound and music in future mobile devices. The background and history of mobile computing will be explained, and the presentation is augmented by current research examples. Key issues in technology, user experience and business outlook will be covered. Finally, recommendations for concentration areas in future research of sound and music will be given.
Panel: Industrial Context for Sound and Music Computing: Is technology transfer working?
Xavier Serra, Pierre-Louis Xech, Jyri Huopaniemi, Vesa Valimaki, Antonio Camurri, Alain de Cheveigné

Application
A maximum of 20 students will be admitted to the school. Candidates will be evaluated by the teachers; the application should include the following documents in PDF format:

Curriculum vitae (max. 1 page)
Certified copy of academic degree
Summary of the research proposal (max. 2 pages)
Students have to send their applications to Xavier Serra before May 1st. Notification of acceptance will be given no later than May 15th.

For people not wishing to make research presentations during the school, a brief curriculum vitae is sufficient and the deadline for application is June 30th.

These people should also send their applications to Xavier Serra or Emilia Gómez.

Registration Fee
The regular registration fee is 300 €. This fee also covers the costs for lunch and various evening social events.

The registration fee for students is 200 €. This fee also covers the costs for lunch and various evening social events.

There will be a few student scholarships that will cover the registration fee.

The deadline for registration is June 30th.

Traveling and Accommodation
Participants will have to arrange their own travel and accommodation. University dorms are available at a special rate. For additional information contact Cristina Garrido.

Social events
Banquet, Tuesday 25th: El Chiringuito de Escribá

Concerts at Metronom:

25th of July at 21:00 “Deriva del Cristal Sonoro” (IUA-Phonos grant): by Carmen Platero and Cristián Sotomayor
Installation - Performance

26th of July at 21:00 ReacTable and Ensamble Crumble

27th of July at 21:00 Concert around Harry Sparnaay, supervisor: Harry Sparnaay and performed by Harry Sparnaay students at ESMUC:
Irene Ferrer Feliu, flute
Alejandro Castillo Vega, clarinet
Victor de la Rosa, Daniel Arias Romeo, Gerard Sibila Roma, bass clarinet

Search for other events in Barcelona during the summer school

Venue
School: the school sessions and coffee breaks will take place in the França Building of Pompeu Fabra University (at the Auditorium).
Passeig de Circumval·lació, 8. 08003 Barcelona (map)

Lunches: Navia restaurant, in front of the França building.
Comerç 33. 08003 Barcelona (map)

Banquet: El Chiringuito de Escribá.
Bogatell beach. (map)

Concerts: Metronom.
C. Fusina 9 - 08003 Barcelona (map)

Stockholm 2007

Summer School in Sound and Music Computing
KTH Royal Institute of Technology
Stockholm, Sweden
July 2-6, 2007
This Summer School is organized by the Music Acoustics Group of KTH in Stockholm, with the goal of promoting interdisciplinary education and research in the field of Sound and Music Computing (SMC). The School is aimed at graduate students working on their Master or PhD thesis, but it is open to any person carrying out research in this field.

This is the third SMC Summer School. The first two were organized by the S2S² Coordination Action: last year in Barcelona (2006 Summer School) and two years ago in Genova (2005 Summer School).


Teachers
Anders Askenfelt (Royal Institute of Technology, Stockholm)
Elvira Brattico (University of Helsinki, Finland)
Roberto Bresin (Royal Institute of Technology, Stockholm)
Nicola Bernardini (Conservatory of Padova)
Cumhur Erkut (Helsinki University of Technology)
Federico Fontana (University of Verona, Italy)
Anders Friberg (Royal Institute of Technology, Stockholm)
Lalya Gaye (Viktoria Institute, Gothenburg, Sweden)
Minna Huotilainen (University of Helsinki, Finland)
Olivier Lartillot (University of Jyväskylä, Finland)
Pietro Polotti (Conservatory of Como)
Davide Rocchesso (University of Verona)
Stefania Serafin (Aalborg University Copenhagen)
Patrick Susini (IRCAM)
Johan Sundberg (Royal Institute of Technology, Stockholm)
Sten Ternström (Royal Institute of Technology, Stockholm)
Gualtiero Volpe (University of Genova)
Invited Experts
Jonas Engdegård (Senior Research Engineer, Coding Technologies, Stockholm)
Staffan Ljung (Manager for Entertainment Solutions, Ericsson, Stockholm)
Ernst Nathorst (Propellerhead Software, Stockholm)
David Åström (Kocky / Soul Supreme, Stockholm)
Academic Program
6 hours of lectures by Minna Huotilainen and Elvira Brattico on Neurosciences and Music and 6 hours of lectures by Lalya Gaye on Mobile Music Technology.
3 hours of oral presentations by the participating students and discussions
on their research work.
Poster presentations by the participating students during coffee breaks (1.5 hours every day of the School)
12 hours of hands-on activities by the participating students in collaboration with the teachers of the Summer School
4 hours of oral presentations and discussions by experts from industries in the field of Sound and Music Computing.
2 hours of oral presentations by leaders of EU funded projects in the field of Sound and Music Computing.
The lectures are designed to be of interest to any graduate student or researcher in the field of Sound and Music Computing. The topics chosen for this year are Neurosciences and Music and Mobile Music Technology: relevant topics in our research field, each with its own particular methodologies and research strategies. The lectures will present these methodologies and their application to music-related problems.

All the participating students will give short presentations on their current research in a speed talk of four minutes. The emphasis should be on research questions, and particularly on methodological issues related to their research project. Students will receive written feedback from the teachers that should be useful for the continuation of their research.

All the participating students will present a poster about their PhD work. The posters will be on display for the whole duration of the Summer School, with discussions during coffee breaks.

All students will work on mini-projects focusing on the two themes of the Summer School. Results of the mini-projects will be presented on the final day of the School. Mini-projects will give the opportunity for hands-on activities such as testing software tools or planning of experiments.

Preliminary program schedule:

Monday 2nd
9:00  Neurosciences and Music (Minna Huotilainen & Elvira Brattico)
11:00 Coffee break & poster presentations
11:15 Mobile Music Technology (Lalya Gaye)
13:00 Lunch
14:30 Speed talks: short 4-minute presentations by students
15:30 Coffee break & poster presentations
16:00 Hands-on sessions: mini-projects
18:00 Get-together drink!

Tuesday 3rd
9:00  Mobile Music Technology (Lalya Gaye)
11:00 Coffee break & poster presentations
11:15 Neurosciences and Music (Minna Huotilainen & Elvira Brattico)
13:00 Lunch
14:30 Speed talks: short 4-minute presentations by students
15:30 Coffee break & poster presentations
16:00 Hands-on sessions: mini-projects

Wednesday 4th
9:00  Neurosciences and Music (Minna Huotilainen & Elvira Brattico)
11:00 Coffee break & poster presentations
11:15 Mobile Music Technology (Lalya Gaye)
13:00 Lunch
14:30 Speed talks: short 4-minute presentations by students
15:30 Coffee break & poster presentations
16:00 Hands-on sessions: mini-projects

Thursday 5th
9:00  The Future Sessions. The future of music: What do we need? How will it be? (see the Future Sessions schedule below)
13:00 Lunch
14:30 Hands-on sessions: mini-projects
15:30 Coffee break & poster presentations
16:00 Hands-on sessions: mini-projects
19:00 Banquet and jam session

Friday 6th
9:00  Presentations of EU-funded projects in the field of Sound and Music Computing
11:00 Coffee break & poster presentations
11:15 To be decided
13:00 Lunch
14:30 Mini-projects: final presentations by students
15:30 Coffee break & poster presentations
16:00 Mini-projects: final presentations by students (continued)
Speed Talks
In a maximum of 4 slides in total (remember that you will have only 4 minutes for your presentation), present the research questions/problems/crazy ideas/etc. on which you would like feedback from the audience.
A printout of your slides will be made available to the participants. Please note that presentations longer than 4 slides will not be accepted!
Download the PowerPoint file with the instructions and the official slide format here.

Poster presentations
Students are invited to present a poster about their research work.
Download the PowerPoint file with the instructions and the official poster format here.

The Future Sessions, Thursday July 5th
The future of music: What do we need? How will it be?
9:00 Semi-parametric audio coding - Today and beyond
Jonas Engdegård, Senior Research Engineer, Coding Technologies

9:40 The future of music software
Ernst Nathorst, Propellerhead Software

10:20 Coffee break

10:40 The future of music on mobile devices
Staffan Ljung, Strategic Product Manager for Music, Ericsson

11:20 Independent music production
David Åström, Kocky/Soul Supreme

12:00 Panel discussion

Each student will:

Mail a question to the speakers before the Summer School starts
Prepare a question for the panel discussion
Application
A maximum of 20 students will be admitted to the school. Candidates will be evaluated by the teachers, and applications should include the following documents in PDF format:

Curriculum vitae (max. 1 page)
Certified copy of academic degree
Summary of the research proposal (max. 2 pages)
Students have to send their applications to Roberto Bresin before May 1st. Notification of acceptance will be given no later than May 10th.

For people not wishing to make research presentations during the school, a brief curriculum vitae is sufficient and the deadline for application is May 15th.

These people should also send their applications to Roberto Bresin.

Registration Fee
The regular registration fee is 300 €. This fee also covers the costs for lunch and various evening social events.

The registration fee for students is 200 €. This fee also covers the costs for lunch and various evening social events.

The deadline for registration is May 31st.

NEW! Please find payment instructions here.

Social events
Tuesday July 3rd, from 17:00 to 1:00: Bring the Noise!, DJ evening at Debaser, free entrance
Thursday July 5th at 19:00: Banquet at Rosendals Trädgård with DJ and jam session
Friday July 6th from 22:00 to 3:00: Summer Adventure, Club at Debaser, free entrance
A kayak tour?
A discgolf game?
Search for other events in Stockholm during the summer school
Venue
School:

the school oral sessions, as well as the coffee breaks, will take place in lecture room F2 on the KTH central campus.
Address: Lindstedtsvägen 26 SE-100 44 Stockholm (map)
the school hands-on sessions will take place at the Department of Speech, Music and Hearing
Address: Lindstedtsvägen 24, 100 44 Stockholm (map)
Lunches: Brazilia restaurant, on the KTH campus. Here is the menu, if you can read Swedish.
Address: Brinellvägen 64, 100 44 Stockholm (map)

Banquet: Rosendals Trädgård, with facilities for organizing a jam session.
Address: Rosendalsterrassen 12, 115 21 Stockholm (map)

Traveling and Accommodation
Participants will have to arrange their own travel and accommodation.

We advise you to book accommodation early, as the first week of July is very busy in Stockholm.

Booking services
Stockholmtown.com
Visit Stockholm
Stockholm budget accommodation

Use hitta.se to look up locations. The summer school will take place at Lindstedtsvägen 24. The service is in Swedish, but it is easy to use. “Vad söker du?” means “What are you looking for?”: type in a name, company, or telephone number. The second field, “Var?”, means “Where?”: here you type in the address, in the form “street ‘street number’ stockholm”. You only need to fill out one field.

Camping

Close to summer school
Östermalms Camping, just 5 minutes walk away.

Less central
From www.visit-stockholm.com, all are quite far from the school.

Bed and breakfast/Hostels

Close to summer school
Bed & Breakfast
Rehnsgatan 21
Tel: +46 8 15 28 38
Sleeping halls or single-rooms.
hitta.se

City Backpackers
Upplandsgatan 2 A
Tel: +46 8 20 69 20
2-, 4- or 8-bed rooms
E-mail: city.backpackers@swipnet.se
www.citybackpackers.se
hitta.se

Columbus Hotell och Vandrarhem
Tjärhovsgatan 11
Tel: +46 8 644 17 17
2-, 4- or 6-bed rooms
Getting there: Subway to Medborgarplatsen, Bus 46, 48, 52.
hitta.se

Hotel Mitt i City
Västmannagatan 13
Tel: +46 8 21 76 30
Sleeping halls, 12 girls/18 boys or single-rooms.
hitta.se

Långholmen Vandrarhem och Hotell
Gamla kronohäktet
Tel: +46 8 668 05 10
Getting there: Subway to Hornstull or Bus 40.
www.langholmen.com
hitta.se

Less central
Bredängs vandrarhem
Stora sällskapets väg 51
Tel: +46 8 97 62 00, 97 70 71
2- or 4-bed rooms.
Getting there: Subway to Bredäng, short walk (5 min).
hitta.se

Solna Vandrarhem
Enköpingsvägen 16
Tel: +46 8 655 00 55
Getting there: Subway to Solna Centrum, bus 505 (10 min) to Råstahem.
www.solna-vandrarhem.se
hitta.se

Ängby Camping
Blackebergsvägen 24
Tel: +46 8 37 04 20
Cottages. 4 beds in each.
Getting there: Subway to Ängbyplan, short walk (5 min)
hitta.se

Hotels

Close to summer school
Art Hotel, Johannesgatan 12
Stureparkens gästvåning, Sturegatan 58
Elite Hotel Stockholm Plaza, Birger Jarlsgatan 29
Rex Hotel, Luntmakargatan 73
Hellsten Hotel, Luntmakargatan 68
Hotel Tapto, Jungfrugatan 57
Hotell Arcadia, Körsbärsvägen 1

Roadmap

Sun, 2007-02-04 21:39 — cirotteau
Version 1.0 of the roadmap has been released and was successfully launched in Brussels on 16 April 2007. You can download the PDF version at:

http://www.soundandmusiccomputing.org/filebrowser/roadmap/pdf

and browse the LaTeX sources at:
http://www.soundandmusiccomputing.org/filebrowser/roadmap/sources

Education and Research Institutions

Music Technology Education and Research Institutions

Europe

Austria

The Intelligent Music Processing and Machine Learning Group, Austrian Research Institute for Artificial Intelligence, Vienna
Studio for Advanced Music & Media Technology, Bruckner-Konservatorium, Linz
Institut für Elektronische Musik und Akustik, University of Music and Dramatic Arts in Graz
Belgium
Institute for Psychoacoustics and Electronic Music, University of Ghent

Denmark

DIEM (The Danish Institute of Electroacoustic Music), Aarhus
Finland
Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology
Audio Research Group, Digital Media Institute, Tampere University of Technology

France

IRCAM (Institut de Recherche et Coordination Acoustique/Musique), Paris
Sony CSL, Paris
GRM (Groupe de Recherches Musicales), INA, Paris
Grame, Lyon
IMEB (Institut International de Musique Electroacoustique), Bourges
ACROE (Association pour la Création et la Recherche sur les Outils d’Expression), Grenoble
Laboratoire d’ Acoustique Musicale, Université Paris 6, Paris
Laboratoire de Mecanique et Acoustique, CNRS, Marseille
Groupe Art & Informatique de Vincennes à St Denis, Département Informatique, Université Paris-8
La Kitchen, Paris
GMEM, Marseille

Germany

ZKM, Karlsruhe
Experimental Studio, Freiburg
Institut für Medientechnik, Fraunhofer Institut, Ilmenau
Deutsche Gesellschaft für Elektroakustische Musik, Berlin

Great Britain

Center for Digital Music, Queen Mary, University of London
Electronic Studio, Department of Music, Leeds University
Music Technology Group, York University
Electroacoustic Music Studios, Department of Music, Birmingham University
Electroacoustic Music Studios, Faculty of Music, University of Edinburgh
CTI Music, Music Dept., Lancaster University
Centre for Music Technology, University of Glasgow
Institute of Sound Recording, University of Surrey
Interdisciplinary Center for Computer Music Research , University of Plymouth
Acoustics Research Centre, Salford University

Ireland

Sonic Arts Research Centre, Queen’s University, Belfast
Centre for Computational Musicology and Computer Music, University of Limerick

Italy

Center of Computational Sonology, University of Padova
Laboratorio di Informatica Musicale, University of Genova
Tempo Reale, Florence

Netherlands

STEIM, Amsterdam
Faculty of Art, Media & Technology, Hilversum
Institute of Sonology, Royal Conservatory, The Hague
Center for Electronic Music, Amsterdam
Music, Mind, Machine Group of Nijmegen Institute for Cognition and Information
Norway
Norwegian network for Technology, Acoustics and Music (NoTAM)
Portugal
GAUDIO, INESC-Porto

Switzerland

Centre d’informatique musicale, Conservatoire de Musique de Genève
Swiss Center for Computer Music, Zürich
Institute for Computer Music and Sound Technology, Zürich
Spain
Music Technology Group, Audiovisual Institute, Pompeu Fabra University, Barcelona
Sweden
Department of Speech, Music and Hearing, The Royal Institute of Technology

United States

California

CCRMA (Center for Computer Research in Music and Acoustics), Stanford University
CNMAT (Center for New Music and Audio Technologies), UC Berkeley
Center for Research in Computing and the Arts , UC San Diego
School of Music, California Institute of the Arts (CalArts)
CCMRC (Center for Computer Music Research and Composition), UC Santa Barbara
CREAM (Center for Research in Electro-Acoustic Music), San Jose State University
Center for Contemporary Music, Mills College
Electronic Music Studio, University of California Santa Cruz

Other states

Music, Mind and Machine Group, Media Lab, Massachusetts Institute of Technology
SoundLab, Princeton University
Studio for Electroacoustic Composition, Harvard University
CEMI (Center for Experimental Music and Intermedia), University of North Texas
CERL Sound Group, University of Illinois
Bregman Electro-acoustic Music Studio, Dartmouth College
TIMARA (Technology in Music and Related Arts), Conservatory of Music, Oberlin College
Computer Music Center, Eastman School of Music, University of Rochester
Computer Music Project, School of Computer Science, Carnegie Mellon University
Center for Advanced Research Technology in the Arts and Humanities, University of Washington
Electronic Music Studios, University of Iowa
CECM (Center for Electronic and Computer Music), Indiana University

South America

Laboratório de Música e Tecnologia, Universidade Federal do Rio de Janeiro, Brazil

Canada

Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montreal
Banff Center, Banff

Australia

Music Department, La Trobe University
The Australian Centre for the Arts and Technology, Australian National University in Canberra

Other resources

The Golden Pages: University music departments’ and faculties’ home pages, University of London

Mailing lists

AUDITORY: A mailing list for the discussion of organizational aspects of auditory perception. Created in 1992 by Professor Albert S. Bregman (McGill University, Department of Psychology). Includes an archive of postings since the creation of the list, and other related information and links.
Sound to Sense, Sense to Sound discussion mailing list
Music-IR - Discussion of issues in theoretical and applied research and development in Music Information Retrieval, related announcements, etc.
MUSIC-DSP: focuses on the sharing of music/sound-related DSP (digital signal processing) strategies, techniques, code, etc. It may be of interest to anyone working with sound and computers, especially those involved in developing their own software or building custom hardware.

Publications

Journals on paper:

Electronic publications: