An innovative computer-based model for the generation of expressive lead guitar performances
BONTEMPI, PIERLUIGI
2025
Abstract
Performative musical expressiveness can be attributed to the manipulation of parameters associated with the macro categories of timing, dynamics, and timbre. The purpose of an expressive performance may vary depending on the specific musician and on the sociocultural and stylistic context in which the performance takes place. Among the objectives commonly identified in the literature are the expression (and/or induction in the listener) of emotions, the clarification of the piece’s formal structure, the sonic rendering of concepts and sensory perceptions, and adherence to a specific stylistic current. In addition, some musicians evidently imprint their performances with distinctive expressive traits that make them recognizable to the listener and that transcend the aforementioned objectives.

The field of computational musical expressiveness aims to develop automatic models that are ideally capable of generating virtual musical performances endowed with expressiveness, similar to what human musicians achieve. From the information engineering perspective, computational performative musical expressiveness has historically involved the fields of affective computing, artificial intelligence, human-computer interaction, biomechanical simulation, and robotics. Although these fields are undoubtedly at the heart of computational musical expressiveness, the research area is also inherently multidisciplinary, requiring expertise in at least the psychological and musicological domains as well. The stylistic reference domain has predominantly been that of European art music of the 17th, 18th, and 19th centuries. This can be partially explained by a general academic cultural bias toward the music commonly called “classical”. Another reason Euro-classical music has historically been a protagonist in the field of computational expressiveness is that it rests on the score/performance dualism, which allows a precise analysis of the performance’s expressive deviations from the prescriptive dimension of the score. The musical instrument most often addressed has been the piano.

This research work, by contrast, addresses the stylistic domain of contemporary popular music, specifically lead electric guitar parts, a field that has so far been explored only partially. This first required understanding what “expressive performance” means in this context. The physical characteristics of the instrument, the performative techniques used by guitarists, and the biomechanical limits and constraints that may arise in the interaction between performer and guitar also had to be analyzed. The next step was an in-depth review of the scientific literature on computational musical expressiveness, in part to understand which previous works could serve as inspiration, possibly with specific modifications, in the new application context. The expressive model was therefore built on optimization techniques for the automatic generation of the basic virtual fingering, on a rule-based system for the automatic insertion of articulations and expressive techniques, on a Machine Learning approach for the simulation of the timing and dynamics deviations primarily due to involuntary biomechanical and psycho-perceptive components, and finally on user-provided indications regarding the deliberate expressive component (specifically concerning loudness and the positioning of notes relative to the beat).
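To give a concrete flavor of the optimization-based fingering stage mentioned above, the following is a minimal, self-contained sketch: a dynamic-programming search that assigns each note of a monophonic melody a (string, fret) position while minimizing a toy hand-movement cost. The tuning constants, fret range, cost weights, and function names are illustrative assumptions, not the formulation adopted in the thesis.

```python
# Minimal sketch of the optimization idea behind the fingering stage:
# assign each note of a monophonic melody a (string, fret) position by
# minimizing a simple hand-movement cost with dynamic programming.
# Tuning, fret range, and cost weights are illustrative assumptions.

STANDARD_TUNING = (40, 45, 50, 55, 59, 64)  # open-string MIDI pitches (E2 A2 D3 G3 B3 E4)
MAX_FRET = 22

def candidate_positions(pitch):
    """All (string, fret) pairs on which the given MIDI pitch can be played."""
    return [(s, pitch - open_pitch)
            for s, open_pitch in enumerate(STANDARD_TUNING)
            if 0 <= pitch - open_pitch <= MAX_FRET]

def transition_cost(prev, cur):
    """Toy cost: penalize fret-hand travel and string crossings."""
    (prev_string, prev_fret), (cur_string, cur_fret) = prev, cur
    return abs(cur_fret - prev_fret) + 0.5 * abs(cur_string - prev_string)

def fingering(melody_pitches):
    """Return one low-cost (string, fret) sequence for the melody."""
    if not melody_pitches:
        return []
    layers = [candidate_positions(p) for p in melody_pitches]
    # best[i][pos] = (cumulative cost of reaching pos at note i, predecessor position)
    best = [{pos: (0.0, None) for pos in layers[0]}]
    for layer in layers[1:]:
        prev_layer = best[-1]
        best.append({
            cur: min(((prev_layer[prev][0] + transition_cost(prev, cur), prev)
                      for prev in prev_layer), key=lambda t: t[0])
            for cur in layer
        })
    # backtrack the stored predecessor pointers from the cheapest final position
    pos = min(best[-1], key=lambda p: best[-1][p][0])
    path = [pos]
    for i in range(len(best) - 1, 0, -1):
        pos = best[i][pos][1]
        path.append(pos)
    return list(reversed(path))

if __name__ == "__main__":
    # a short hypothetical lick, given as MIDI note numbers
    print(fingering([64, 67, 69, 71, 69, 67, 64]))
```

Dynamic programming keeps such a search tractable: each note only requires the best cumulative cost of every candidate position at the previous note, rather than an enumeration of all possible position sequences.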
The various modules of the model, starting from an input melody in MIDI format, generate a detailed MIDI description of the guitar performance, which can then be sonified through a virtual instrument. The potential applications are numerous and include, in particular, the use of the model in the educational and music production sectors.
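The sketch below illustrates this overall data flow under the same caveat: every function, rule, and constant is a hypothetical placeholder for the corresponding module described in the abstract (in particular, the random jitter merely stands in for the trained Machine Learning component), intended only to show how a plain MIDI-like melody could be progressively enriched into a detailed performance description.

```python
# Schematic sketch of the data flow: a plain MIDI-like melody is progressively
# enriched by the four stages named in the abstract. All names and rules here
# are illustrative placeholders, not the thesis implementation.

import random
from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass
class NoteEvent:
    pitch: int                    # MIDI note number
    onset: float                  # nominal position, in beats
    duration: float               # nominal length, in beats
    velocity: int = 80            # MIDI velocity
    string: Optional[int] = None  # filled in by the fingering stage
    fret: Optional[int] = None
    technique: str = ""           # e.g. "hammer-on", "pull-off"

TUNING = (40, 45, 50, 55, 59, 64)  # standard guitar tuning, open-string MIDI pitches

def assign_fingering(notes: List[NoteEvent]) -> List[NoteEvent]:
    """Toy stand-in for the optimization stage: simply pick the lowest playable fret."""
    out = []
    for n in notes:
        string, fret = min(((s, n.pitch - p) for s, p in enumerate(TUNING)
                            if 0 <= n.pitch - p <= 22), key=lambda sf: sf[1])
        out.append(replace(n, string=string, fret=fret))
    return out

def insert_articulations(notes: List[NoteEvent]) -> List[NoteEvent]:
    """Toy rule: mark small same-string steps as legato articulations."""
    out = notes[:1]
    for prev, cur in zip(notes, notes[1:]):
        tech = ""
        if prev.string == cur.string and 0 < abs(cur.fret - prev.fret) <= 2:
            tech = "hammer-on" if cur.fret > prev.fret else "pull-off"
        out.append(replace(cur, technique=tech))
    return out

def apply_learned_deviations(notes: List[NoteEvent]) -> List[NoteEvent]:
    """Stand-in for the ML stage: small involuntary timing/velocity perturbations."""
    return [replace(n,
                    onset=n.onset + random.gauss(0.0, 0.01),
                    velocity=max(1, min(127, n.velocity + round(random.gauss(0.0, 4.0)))))
            for n in notes]

def apply_user_expression(notes: List[NoteEvent], push: float, accent: float) -> List[NoteEvent]:
    """Deliberate component: shift notes relative to the beat and scale loudness."""
    return [replace(n,
                    onset=n.onset + push,
                    velocity=max(1, min(127, round(n.velocity * accent))))
            for n in notes]

def render_performance(melody: List[NoteEvent], push: float = 0.0, accent: float = 1.0) -> List[NoteEvent]:
    """Chain the stages into a detailed, sonification-ready note list."""
    notes = assign_fingering(melody)
    notes = insert_articulations(notes)
    notes = apply_learned_deviations(notes)
    return apply_user_expression(notes, push, accent)

if __name__ == "__main__":
    melody = [NoteEvent(pitch, i * 0.5, 0.5) for i, pitch in enumerate([64, 66, 67, 69])]
    for note in render_performance(melody, push=-0.02, accent=1.1):
        print(note)
```

In the actual model each stage is considerably richer; the point of this sketch is only the ordering of the stages and the progressively enriched note representation that ends up in the output MIDI description.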
File | Size | Format | Access
---|---|---|---
PhD_Thesis_PDF-A.pdf | 14.98 MB | Adobe PDF | open access
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/202448
URN:NBN:IT:UNIPD-202448