The smile of Velonjoro: Bi-musicality and the use of artificial intelligence in the analysis of Malagasy zither music
Marc Chemillier
Abstract
Mantle Hood’s notion of bi-musicality was coined on the model of bilingualism and applied to people who play music from two different cultures. The notion is still used and discussed today, although it needs to be redefined in an era of globalization and postcolonial critique in which cultural boundaries tend to become blurred. Artificial intelligence could bring new developments to this area, with artificial creativity and machine learning serving as extensions of human musicality and as aids for learning musical instruments. This paper explores these ideas through a case study of marovany zither music. This traditional zither from Madagascar is played during trance rituals for very long periods of time. To transcribe these long pieces, we designed sensors adapted to the instrument, with the goal of collecting MIDI data from the playing of Malagasy musicians. The data are then processed by a computer program devoted to music improvisation, which can play in the style of these Indigenous musicians. The outputs of the program are good enough to allow duets between a musician and the computer. Musicians reacting to the outputs of the machine can shed new light on the analysis of their repertoires. By refining the generation parameters, we aim to get closer to an optimal characterization of the music studied.
Keywords
Bi-musicality, artificial intelligence, machine learning, computer improvisation, Malagasy zither, marovany, tromba rituals, ternary-binary ambiguity.
Introduction
[1] The notion of bi-musicality was first introduced by Mantle Hood in an influential paper published in 1960, and it is still used and discussed today. The term “bi-musicality” was itself coined on the model of “bilingualism.” The latter refers to someone who is fluent in two different languages while belonging to different communities, underlining the fact that both music and language are attached to human groups or communities. Hood’s idea was not directly related to fieldwork but was mainly concerned with a general notion of musicianship, its relation to musical training, and the benefits of such training for the comprehension of theoretical aspects of music. As he wrote:
“The training of ears, eyes, hands and voice and fluency gained in these skills assure a real comprehension of theoretical studies, which in turn prepares the way for the professional activities of the performer, the composer, the musicologist and the music educator” (Hood 1960, 55).
[2] When applied to fieldwork in ethnomusicology, the idea of bi-musicality refers to the four roles that are theoretically possible for researchers conducting fieldwork in the social sciences: complete participant, complete observer, participant-as-observer, observer-as-participant (Gold 1958, 217). Becoming bi-musical means that the researcher reinforces his/her role as a participant. John Baily outlined the benefits of this approach for the researcher: gaining status in the community and finding opportunities to be involved in performance events, a central issue for study in ethnomusicology (Baily 2001, 95-96).
[3] The term bi-musicality is now a taken-for-granted part of the ethnomusicologist’s methodology (Rice 2014, 21). But in the era of globalization and postcolonial debate, new perspectives have appeared. When the term was introduced more than sixty years ago, what is today called “world music” was not yet established. Nowadays cultural boundaries tend to become blurred, and learning the music of non-Western cultures is no longer the preserve of ethnomusicologists alone. Individual musicians may decide to become masters of a music of which they are not native (Deschênes 2018, 276). In this article, we would like to show that technology can also bring new questions to this subject through the use of artificial intelligence and machine learning. In fact, recent developments in computer science have brought to the fore the notion of “artificial creativity” (Assayag 2021), allowing us to question the possible “musicality of machines.” So it is natural to ask whether a machine can also become bi-musical.
[4] This article is devoted to the analysis of the repertoire of a traditional zither from Madagascar called the marovany. In the first part, we will describe the design of sensors adapted to the instrument in order to make automatic music transcription and then to process the resulting MIDI data in a generative improvisation software. In the second part, we will analyze some features of the music played on this instrument mainly from a rhythmical point of view. In the last part, we will show how artificial intelligence and its machine learning capacities can be used in the study of this music in a way similar to the learning of music by ethnomusicologists as suggested by the term bi-musicality.
[5] It may help the reader if as the author of this article I explicitly acknowledge my own position as a European scholar analyzing Malagasy music. I am a French musician and researcher (EHESS, School for Advanced Studies in Social Sciences in Paris) with degrees in ethnomusicology, computer science, mathematics and philosophy. The work I present here is highly multidisciplinary, with a primary focus on ethnomusicology, but with collaborations establishing strong connections with artificial intelligence (IRCAM) and acoustics (LAM at Sorbonne University). The methodology, analysis and results described here are based on fieldwork in Madagascar that began over twenty years ago, and on long-term collaborations with Malagasy traditional (Velonjoro) and world music artists living in Madagascar (Rajery) or Europe (Justin Vali, Charles Kely Zana-Rotsy, Kilema). For the past ten years, I’ve been working with some of them on artistic projects performed in concert.
1) Sensors and machine learning for the Malagasy marovany zither
[6] The marovany zither (usually made of wood) is derived from the valiha (made of bamboo), which is considered the national musical instrument of Madagascar. The marovany is a large zither in the form of a rectangular plywood box. The metallic strings, measuring up to 120 cm and mostly made from motorcycle brake cables, are stretched along each side of the box. They are nailed at each end of the sides and raised by bridges made of rosewood. The position of the bridge along a string determines its pitch. Thus, within a given piece, each string corresponds to a single pitch, although musicians may retune their marovany between songs. The type of wood, as well as the size and number of strings of a zither, are not fixed. Generally, there are about twenty strings, with around ten on each side. Each set of strings of the marovany forms, like that of the famous tubular zither valiha, an alternating diatonic scale, although tuning deviations from the well-tempered scale are often observed.
[7] Originally, the marovany zither was a traditional instrument of Madagascar mainly played during trance rituals called tromba. In this context, the zither is played for extremely long periods of time (several hours without interruption), accompanied by rattles and the clapping of the audience. During the few possession sessions that I attended in the south of the country, I was struck by the very strong interaction between the zitherist and the person who was to fall into trance, the latter reacting to the music with controlled and synchronized twitches while sometimes violently inveighing against the musician to demand more active participation in bringing on the trance. The zither can also be replaced by other instruments such as the accordion.
[8] Nowadays, the Malagasy zither has been exported outside the country, mainly by native musicians participating in world music festivals, subject to the influences of different musical genres and practices. One of them, Rajery (Germain Randrianarisoa), is a virtuoso of the valiha who managed to overcome the fact that he lost his left hand as a child. Justin Vali is another great player of both valiha and marovany who collaborated with Peter Gabriel and Kate Bush. Although the valiha zither has already been studied by past researchers (Domenichini 1984; Razafindrakoto 1999), there are only a few published transcriptions of the traditional marovany repertoire available (Schmidhofer 2005). A specific issue raised by the study of this repertoire is the extremely long duration of the performances on the zither in the traditional context of trance rituals (several hours in duration, as mentioned above). How to transcribe such long musical pieces? This question led us to explore solutions in the field of Automatic Music Transcription (AMT).
[9] We brought several marovany built in Madagascar back to Paris in order to work on the development of sensors for the instrument. This study was carried out at the LAM acoustics laboratory (Sorbonne University, Paris) under the supervision of Olivier Adam. The goal was first to design sensors specifically adapted to the marovany. When installing pickup devices on an acoustic instrument, a constraint of non-invasiveness must be respected: the system must not be cumbersome or disturb the playability of the instrument. Specific playing techniques such as palm muting and displacements of the excitation point require a playing zone that must not be obstructed by the devices. Various types of sensors were tested. Electromagnetic sensors offer the best channel separability, while optical and piezoelectric sensors appear to be slightly more affected by signal leakage between strings (Cazau et al. 2016a, 2016b).
[10] Experiments concluded that piezoelectric sensors present very satisfactory signal criteria for transcription and are very convenient for musical playing. One constraint of their use was that a hole had to be carved in the bridge to fix them in place, so we used our own bridges instead of the original ones. We designed 24 such bridges with piezoelectric sensors, one for each string, connected to cables terminated by jacks plugged into two Octamic preamps linked to a Fireface sound card. These piezoelectric pickups provided as many audio signals as there were strings on the instrument (one audio file per string; see the marovany used in Figure 1 below).
[11] Notes are detected on the basis of these signals, and the instrumentalist’s playing is then reconstructed as a piano-roll. Since the multiple piezos are placed on the body of the instrument, crosstalk is quite substantial, as one would expect. Our first detection programs ten years ago used hidden Markov models to identify the musical events. Although the transcription results showed great promise in terms of onset and pitch identification, they still raised some issues with note duration and amplitude. To address this problem, the system has been used in a semi-automatic way, i.e., transcription outputs have been rectified manually to enhance their quality, focusing on notes presenting outlier values in duration and amplitude. This also implies that the system is not real time: it requires a small delay before the outputs of the AMT system can be used. Currently, we are working on new models involving neural networks, following the current trend in AMT (Benetos et al. 2019).
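This rectification step can be made concrete with a minimal sketch. It is not the actual LAM pipeline: the note fields and the z-score threshold below are illustrative assumptions, meant only to show how notes with outlier durations or amplitudes can be flagged for manual review.

```python
# Minimal sketch (illustrative assumptions, not the actual LAM pipeline):
# flag notes whose duration or amplitude deviates strongly from the
# statistics of the other notes detected on the same string.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Note:
    string: int       # string index (one sensor channel per string)
    onset: float      # seconds
    duration: float   # seconds
    amplitude: float  # normalized 0..1

def flag_outliers(notes, z_max=3.0):
    """Return (note, field) pairs whose duration or amplitude is an
    outlier relative to the other notes on the same string."""
    by_string = {}
    for n in notes:
        by_string.setdefault(n.string, []).append(n)
    flagged = []
    for string_notes in by_string.values():
        if len(string_notes) < 3:
            continue  # too few notes for meaningful statistics
        for field in ("duration", "amplitude"):
            values = [getattr(n, field) for n in string_notes]
            mu, sigma = mean(values), stdev(values)
            if sigma == 0:
                continue
            for n in string_notes:
                if abs(getattr(n, field) - mu) / sigma > z_max:
                    flagged.append((n, field))
    return flagged
```

A human transcriber then inspects only the flagged notes rather than the whole piece, which is what makes hours-long performances tractable.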
[12] In the AMT domain, there is a perennial debate on the place of musical transcription in ethnomusicology and its capacity to faithfully represent musical events (Holzapfel & Benetos 2019). This question applies more generally to MIR (Music Information Retrieval) which, according to Born (2020), assumes “universalized music ontologies” that might differ from the “ontology” of non-Western musics. The risk is that these technologies are closely attuned to Western ways of thinking (Magnusson 2019); for instance, the keyboard-centricity of MIDI is well known (Diduck 2018). Let us look at the Malagasy case. The MIDI representation of pitches is not a problem, as the diatonic scale is a well-established cultural model in Madagascar that probably stems from the influence of 19th-century missionaries, in particular the London Missionary Society (Ranjeva-Rabetafika 1971). The problems in transcribing Malagasy music mostly come from the specificity of its rhythmic aspects, which we shall describe further on. As a result, when we record data on our sensor-equipped zither, we always include a hand-clapping part that gives us the reference for the rhythmic placement of events. Certain aspects are lost in MIDI transcription, such as those mentioned above concerning playing techniques (palm muting, shifting the excitation point). A transcription is always a reduction, and the question is not whether we get a complete transcription in MIDI data, but whether the part we get is faithful or not.
[13] As soon as digital data are available from the AMT system described above, one can imagine ways to process them with a generative system. Indeed, we have been working for years on a family of software devoted to music improvisation: OMax, ImproteK and Djazz, among others. We presented these systems at IMPROTECH, an international workshop gathering researchers and artists working on improvisation with the computer, which we organized at IRCAM in Paris in 2004, then in New York in 2012, Philadelphia in 2017, Athens in 2019, and Uzeste near Bordeaux in 2023. The CD-book Artisticiel, co-authored with musician Bernard Lubat, was published recently and includes various texts addressing the issue of improvising with a machine (Assayag 2021; Lewis 2021).
[14] George Lewis, who has been a pioneer in this field, contributed to the book Artisticiel. A well-known trombonist, Lewis is also a computer scientist who created his first improvising software at the end of the 1970s. In his view, working on this kind of computer system is a way of learning interesting aspects of improvisation. This recalls the idea of bi-musicality presented in the introduction, according to which learning to play the instruments of another culture is a way of learning how the music of that culture is conceived. Lewis wrote:
“Working on improvisation with machines over the last forty years has taught me a great deal about how I and others improvise musically, although there is still much to be learned. In fact, this quest for knowledge of the improvising self is one of the major impetuses behind work in this area” (Lewis 2021, 111).
[15] In the field of artificial intelligence, the OMax-ImproteK-Djazz improvisation project belongs to new trends in human-computer interaction in which technology evolves towards the status of a cyber-partner engaging with humans in a co-creative way. This family of computer applications devoted to musical improvisation (Assayag et al. 2006) was created at the beginning of the 2000s and has been developed since then through a collaboration between IRCAM and EHESS in Paris, with international partners at UCSD and CNMAT Berkeley. Its framework is a multi-agent architecture for an improvisation-oriented musician-machine interaction system that learns in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. It consists of two parts: a learning algorithm, which builds the statistical model from musical samples, and a generation algorithm, which walks through the model and generates a musical stream by predicting at each step the next musical unit from the already generated sequence. Playing music with such systems is a way of distributing musicality between humans and machines, as explained by Gérard Assayag:
“In our opinion, these symbolic systems in which “musicality” is distributed between human and artificial agents grant them, a possibility of co-creativity” (Assayag 2021, 146).
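To make this two-part kernel concrete, here is a deliberately simplified sketch in Python. The OMax family actually relies on richer sequence models (notably the factor oracle), so this first-order continuation table is only a stand-in illustrating the principle: a learning function builds the model from a sample, and a generation function walks it, predicting each next unit from the one just emitted.

```python
# Simplified stand-in for the learning/generation kernel (the real
# systems use richer models such as the factor oracle).
import random
from collections import defaultdict

def learn(sequence):
    """Learning step: build a continuation table mapping each unit
    to the units observed immediately after it."""
    model = defaultdict(list)
    for current, nxt in zip(sequence, sequence[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length, rng=random):
    """Generation step: walk the model, recombining fragments of the
    learned sequence by predicting each next unit from the last one."""
    out = [start]
    for _ in range(length - 1):
        continuations = model.get(out[-1])
        if not continuations:  # dead end: jump anywhere in memory
            continuations = [u for units in model.values() for u in units]
        out.append(rng.choice(continuations))
    return out

# Toy usage on symbolic pitch units (alternating thirds, see section 3):
phrase = ["F#4", "A#4", "F#4", "A#4", "G#4", "B4", "G#4", "B4"]
print(generate(learn(phrase), "F#4", 12))
```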
[16] More precisely, the principle of the improvisation software described above is to capture a sequence played by a musician and to produce in return an improvisation by recombining fragments of what has been played. In doing so, the computer materializes a process inherent in the activity of improvisation itself, because an improviser never starts from nothing but always draws, to varying degrees, on musical phrases already recorded in memory, if only because mastering an instrument requires learning such sequences in the form of bodily automatisms. Djazz, the successor to the ImproteK prototype, is one of these computer environments devoted to improvisation, and its main characteristic is that it takes account of the rhythmic framework defined by a regular pulse. It can also follow a given chord progression. When the software is being trained, the pulses (and chords, if a chord progression is given) are stored as labels along with the musical data. Later, when the software performs recombinations, it can remain consistent with these parameters (rhythm, harmony). Note that this is possible even though the software has no musical knowledge, as it simply follows a series of clicks and chord labels using pattern matching algorithms. This lack of musical knowledge means that the software can adapt to any musical style, as long as the data used to train it belongs to that style. In addition, from a human-computer interaction (HCI) perspective, the conception of Djazz involves another layer. Beyond the automatic recombination layer, consistent with rhythm and harmony as we have shown, Djazz features a manual layer using a pad-type interface. The user can “perform” with the software, deciding which parts of memory to use for recombination and which effects to apply to the algorithm’s output, such as looping a pattern, modifying playback speed, or transposing pitches into different registers. Thanks to this manual layer, the long-term vision of improvisation is handled manually by the user.
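The three manual effects just mentioned can be illustrated on MIDI-like events. The event format and function names below are assumptions for the sake of the example, not Djazz’s actual interface:

```python
# Illustrative versions of three pad effects (looping, speed change,
# transposition) on (onset_beats, midi_pitch, duration_beats) events.
# This event format is an assumption, not Djazz's internal representation.

def loop_pattern(events, n_beats, repeats):
    """Loop the events falling within the first n_beats."""
    pattern = [e for e in events if e[0] < n_beats]
    return [(onset + i * n_beats, pitch, dur)
            for i in range(repeats)
            for onset, pitch, dur in pattern]

def change_speed(events, factor):
    """factor > 1 speeds playback up: onsets and durations shrink."""
    return [(onset / factor, pitch, dur / factor)
            for onset, pitch, dur in events]

def transpose(events, semitones):
    """Shift pitches into another register (+12 = one octave up)."""
    return [(onset, pitch + semitones, dur)
            for onset, pitch, dur in events]
```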
[17] Djazz’s synchronization capacities, described above, allow it to be used in music where the pulse and the rhythmic setting play a preponderant role, such as jazz or certain traditional musics, notably Malagasy zither music.
[18] Thanks to these Djazz features, AI can do something specifically adapted to the music of Madagascar. An important aspect of improvisation that is dealt with in the use of the Djazz software is the idiomatic aspect. This term refers to a notion introduced by Derek Bailey, who described idiomatic improvisation as “the expression of an idiom – such as jazz, flamenco or baroque” (Bailey 1993, xi). Since the term “idiom” is used for both words and music, this notion is interesting because it recalls the relation between music and language introduced at the beginning, concerning bi-musicality and bilingualism. It highlights the fact that in both cases, there is a community of people who share a given idiom. Thus, one can apply to music analysis the concept of ‘acceptability’ used in linguistics, where it denotes the intuitive judgements by users of a language on how acceptable a linguistic utterance is. In a similar way, although there remain important differences between music and language, one can refer to the judgements of music listeners on how acceptable a musical sequence is according to the cultural norms of a given community. This is especially true for the kind of music where rhythm plays a dominant role: in this case, being in place rhythmically is a condition for the music produced to be acceptable. Such cultural norms are related to the cultural conditioning of history, that is, a pre-established sound world, which can be identified with a particular genre, whether jazz, Malagasy trance music or anything else, and bound up with the institutions where music is practiced, such as festivals in the case of jazz, tromba rituals in the case of Malagasy music, and so on.
2) Analysis of zither transcriptions
[19] Our first experimental sessions with the automatic music transcription system and its application to the analysis of Malagasy zither music took place at the LAM laboratory in Paris. After preliminary tests with the Malagasy zither player Kilema (Clément Randrianantoandro) in May 2014, we made consistent transcriptions of Charles Kely Zana-Rotsy’s playing in February 2015, which are described in this section. We also introduce here two other great Malagasy musicians with whom we have worked, Velonjoro and Justin Vali, who appear in the last part of this article.
[20] Born in Antananarivo, the capital of Madagascar, in 1965, Charles Kely Zana-Rotsy (Jean-Charles Razanakoto) is a professional singer and guitar virtuoso who also plays the marovany. He began his career with his brother by covering traditional Malagasy songs. In 1997, Charles was chosen by Rajery, the great valiha player, to tour around the world with his band. Together, they played in Paris, Chicago, Seattle, New Orleans and at the Festival International de Louisiane. In the middle of the 2000s he moved to France, and since 2008, as a guitarist, Charles has toured with prestigious world music artists such as Tony Rabeson, Mounira Mitchala, Razia Said, and Jaojoby. He began his solo career with the release of his CD Anilanao in 2002 and continued with Zoma Zoma in 2011 in his own unique manner: the open gasy style, an acoustic music from Madagascar with a touch of bossa nova, jazz, blues, funk and subtle African influences (Prado 2023). Apart from simulating zither playing, the Djazz software has also been used with Charles’s group in artistic projects (Chemillier 2023; Barillé et al. 2018) involving Julio Rakotonanahary on bass and Fabrice Thompson on drums (https://www.youtube.com/watch?v=tsTI2M0OBWg&t=217s).
[21] Velonjoro (1963-2017) died on January 10, 2017 at the age of 54. He lived in the village of Ankili Mahafahitsy near Ambovombe in the south of Madagascar, in Antandroy country (the territory of one of the island’s ethnic groups). His main professional activity was playing the zither during tromba rituals with possessed people in his neighborhood. I first met him during such a trance ritual in August 2000. He had never been to the capital Antananarivo before the first experiments conducted with him in 2011. After several years of development of the sensor system, it became possible to carry out tests of musical interaction between him and the computer, which are evoked in the last part of this paper. On May 19, 2016, I played in public with him and the Djazz software at the conference Jazz and improvisation(s) in Madagascar organized by Julien Mallet (IRD), Claude Alain Randriamihaingo (Antananarivo University) and Philippe Bataille (AUF) at the Institute of Civilization, Museum of Art and Archaeology in Antananarivo.
[22] Justin Vali (Justin Rakotondrasoa), born in 1963 and living in France, ranks among the greatest players of the valiha, the bamboo tube zither, and he also performs on the marovany. He contributed to several compilations in the late 1980s and played with Kate Bush on her album The Red Shoes in 1993. He began to release his own albums in the 1990s and recorded Ny Marina (The Truth) in 1994 on Peter Gabriel’s Real World Records. In 1999 he released The Sunshine Within, a collaboration with Paddy Bush (brother of Kate). In 2008 he collaborated with Erick Manana and other prominent Malagasy artists to record an album as the Malagasy All Stars. We began performing as a duo with him and the Djazz software (http://digitaljazz.fr/2023/06/19/videos-djazz-avec-justin-vali/#masoala) and played live at the Festival de l’imaginaire in Paris in June 2023.

Figure 1: Photo of Charles Kely Zana-Rotsy playing the marovany with sensors at LAM, February 2015.
[23] The piano roll reproduced in Figure 2 gives an overview of Charles’s interpretation on the marovany of a famous theme entitled “Rakotozafy,” in homage to a legendary master of this instrument, Rakotozafy (1938-1968), who died in the jail of Toamasina one year after having involuntarily killed his own son. One of the main advantages of the automatic transcription system is that it provides a very easy way to obtain such a general shape for long improvisations, whatever their duration. Any MIDI sequencer loading the data obtained from our AMT system can display the whole piece. By zooming in and out, one can go from an overall view showing the structure of the piece to detail views showing particular passages. Objective observations can be made from this representation about the performance and the long-term conduct of the improvisation (for instance, the highest pitch occurs in the middle of the piece), and these may suggest new hypotheses about how the improvisation unfolds.

Figure 2: Overview of “Rakotozafy” as played by Charles Kely Zana-Rotsy.
[24] Since Charles played this piece by listening to a pre-recorded polyrhythmic hand clap sequence with the pulses played in one part, we know exactly where the pulses fall (represented at the bottom of the piano-roll in Figure 3). Thanks to these pulses, we can write the zither part in music notation with a correct metrical structure. The passage displayed in Figure 3 is taken from the beginning of the piece. It is a kind of thematic motive which presents an interesting ambiguity for listeners who are not acculturated to Malagasy music.

Figure 3: Detail view of the beginning of “Rakotozafy” played by Charles Kely Zana-Rotsy.
[25] The melody of this passage is transcribed into music notation in Figure 4. When listening to this melody, a European listener may be induced to hear a waltz motive, as notated in the upper example: the quarter note plus four eighth notes fit nicely into a 3/4 signature with the first beat falling on the quarter note. But this is not the correct Malagasy metric structure indicated by the hand claps associated with the MIDI data. As can be seen in the piano-roll notation above, with the beats indicated by the hand claps, and as is common in African music, the beats fall on the only eighth notes that do not correspond to any attack, i.e., the second halves of the quarter notes. Note also that the passage has a ternary metric structure of 6/8, as shown in the lower example in Figure 4, where the beats are divided into three parts and notated as measures with two compound beats, and not a binary structure in which the beats would be divided into two or four parts, as in the upper example (the waltz motif). This, too, is a common feature of African music: when a sequence can be interpreted from both points of view, the ternary and the binary, it is always the ternary one that corresponds to the correct African metric structure (i.e., the regular clapping that goes with it). A well-known example of this principle is the pygmy hindewhu flute featured in the famous introduction to Herbie Hancock’s “Watermelon Man” from the Head Hunters album (1973). It was performed by the percussionist Bill Summers with a binary subdivision of the pulse, whereas the pygmy original is ternary (Arom 1998; Chemillier 2008a; Tenzer 2015). The ambiguity between ternary and binary rhythmic structures in African musical traditions, and our observation of a kind of African ‘preference’ for ternary structures relative to the positioning of regular handclaps, is linked to a debate in contemporary African rhythm studies on the question of polymeter (Avorgbedor 2013). This quite difficult question is beyond the scope of this article.

Figure 4: Ambiguity between binary and ternary interpretation of a melody from “Rakotozafy” (the correct Malagasy version indicated by regular handclaps is the ternary one below).
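Because our recordings always include the hand-clap reference track, the ambiguity can also be tested computationally. The sketch below is one possible test, an assumption rather than part of our published pipeline: taking the claps as the pulse reference, it measures how well the zither onsets snap to a division of each pulse into two versus three parts.

```python
# One possible computational test of the binary/ternary ambiguity
# (an illustrative assumption, not our published method).

def subdivision_error(onsets, claps, parts):
    """Mean distance (seconds) from each onset to the nearest point of
    the grid dividing each clap-to-clap interval into `parts`."""
    errors = []
    for a, b in zip(claps, claps[1:]):
        step = (b - a) / parts
        for t in onsets:
            if a <= t < b:
                phase = (t - a) % step
                errors.append(min(phase, step - phase))
    return sum(errors) / len(errors) if errors else float("inf")

def classify_meter(onsets, claps):
    """Return 'ternary' if dividing the pulse in 3 fits the onsets
    better than dividing it in 2, else 'binary'."""
    ternary = subdivision_error(onsets, claps, 3)
    binary = subdivision_error(onsets, claps, 2)
    return "ternary" if ternary < binary else "binary"
```

For the passage in Figure 4, such a test would select the ternary reading, since the onsets fall on thirds of the clapped pulse rather than on halves.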
[26] The analysis we have done here is based on a relatively straightforward MIDI transcription, without any AI strictly speaking. It allowed us to identify properties such as the ternary-binary ambiguity. But to analyze this music more deeply, and to understand how it works and how it is organized at a broader level, we should also ask how it handles temporal unfolding over very long periods of time. In this direction, experiments with artificial intelligence could bring new insights into the analysis of this music, as we shall see in the next section.
3) Duet between the Indigenous musician Velonjoro and improvisation software
[27] In the following section, we show examples of music generated by the Djazz improvisation system in the style of the marovany zither from Madagascar, and discuss the reactions of Indigenous people listening to these virtual improvisations, particularly from a rhythmic point of view, in situations where they are asked to play in synchronization with the machine. Since Djazz deals with idiomatic improvisation, our interest focuses on the judgement of musicians about its outputs. These experiments were conducted in Madagascar with the traditional zither player Velonjoro. His listening sessions of the improvisations generated by Djazz from our sensor transcriptions of his playing took place during two periods of fieldwork, in July 2014 and May 2016.
[28] When he listened to the first improvisations generated by Djazz based on his own playing in July 2014, Velonjoro’s reaction seemed to be related to the correct rhythmic placement of the motives played by the computer (https://www.youtube.com/watch?v=fJXLcTmDnXs). From a rhythmic point of view, it should be noted that when the recording of Velonjoro’s playing was made, he was hearing a pre-recorded rattle part in his headphones, so that the position of the pulses could be accurately associated with what he was playing by the recording system. When the system records a musician’s playing, it keeps track of the pulse positions with an accuracy of a few milliseconds, so that in the recombination process the rhythmic placement of the notes is maintained with the same accuracy. The fact that the computer was playing well in rhythm (i.e., with this millisecond precision) triggered a positive reaction, because this is very difficult to do if you are not a seasoned musician. First, the tempo is quite fast, around 200 BPM, which gives a duration of 300 ms for the pulse and thus only one tenth of a second for its ternary subdivision. Second, the rattle part is accentuated in such a way that it is very difficult to clap one’s hands at the correct positions of the pulses; even Malagasy people who are not trained to do so cannot do it properly.
[29] The rattle part articulates the ternary subdivision of the pulse with a movement in three parts: (1) strike of the handle with the left hand, (2) shock of the seeds against the box upwards, (3) shock of the seeds against the box downwards. The pulse is given by the clapping of the hands. The remarkable point is that this pulse does not fall on the most accentuated subdivision of the rattle part (the first one, with the strike of the left hand on the handle), but on the subdivision that immediately precedes it, i.e., the last of the three eighth notes in Figure 5. The pulse and the rattle accent that follows it thus form two unequal short-long durations, constituting what is called an iambic rhythm. This rhythm is widespread throughout Madagascar, where it can be found in the modern salegy repertoire, but also elsewhere in the Indian Ocean. On Reunion Island, it is played in the maloya genre by a raft-shaped rattle called kayamb.

Figure 5: Analysis of the ternary rattle part and associated hand claps.
[30] This iambic rhythm is very difficult to beat when one is not sufficiently acculturated to Malagasy music. Invariably, one tends to slide the pulse one eighth note too late so that it coincides with the rattle accent just behind it. The short-long iamb then becomes a long-short trochee. This is a good example of the idiomatic aspect of music and of how the perception of music can vary depending on the community one belongs to. Having clapped since childhood within one’s community creates a ‘habitus’, in Bourdieu’s sense, which is very difficult to acquire when one comes from outside (Bourdieu 1977). This situation makes the study of music perception outside of its cultural context problematic. For example, the theory of Lerdahl and Jackendoff proposes rules for the perception of rhythm which the authors claim apply universally. One of these rules states a preference for metrical structures in which accented attacks coincide with strong points of the metre, including the pulses (Lerdahl and Jackendoff 1983, 79, rule MPR 4). However, the Malagasy rattle shows that this rule is inoperative in this context, because in the rattle rhythm the accented attacks are precisely not the strong points of the metre.
[31] In Africa, there are several examples of iambic patterns similar to the rattle formula, all of which have in common that they contradict the preference rules of Lerdahl and Jackendoff. In a comparative study (Chemillier et al. 2014), we observed a certain convergence between the Malagasy rattle, the Malian scraper and the Moroccan lute: the accent of intensity or the lengthening of duration always falls on the second eighth note of a ternary subdivision, i.e., the one immediately following the pulse. A similar phenomenon is observed by John Blacking in his study of Venda children’s songs in northern South Africa (Blacking 1967, 157). He indicates that most of these ternary songs alternate short and long notes (the long note may be a quarter note or two repeated eighth notes), with the pulse placed on the short note (tempo between 112 and 116 BPM on the dotted quarter note). The pulse does not fall on the elongated or repeated notes in the melodic sequence, which, in Lerdahl and Jackendoff’s theory, should induce a strong point in the metre (Lerdahl and Jackendoff 1983, 84, rule MPR 5). As in Madagascar, the hands anticipate this accent by clapping an eighth note earlier. Another form of iambic rhythm that also contradicts this rule is mentioned by Martin Scherzinger. It concerns the kaganu pattern in Gahu, a Southern Ewe dance (Agawu 2003, 81), which takes the form of two eighth notes followed by an eighth-note rest, thus also short-long. Unlike the Malagasy rattle, the pulse does not fall on the short note but is placed in the middle of the long one (on the eighth-note rest). It thus has in common with the other African examples quoted above that the pulse does not coincide with the long duration, contrary to rule MPR 5:
“Once again, subject to the analytic grip of Lerdahl and Jackendoff’s metric preference rules alone, kaganu comes to imply a radically different meter than the correct African meter. According to the rules, kaganu’s ‘short-long’ structure conspires to placing a strong beat on the second of the two notes in each of the rhythmic groupings, precisely the weakest beats in the actual music.” (Scherzinger, 2005, 149).
[32] Going back to the improvisation software, it is possible to simulate the zitherist’s playing by following patterns in the database captured from what Velonjoro has played (periodic formulas or freer patterns, as we shall see) and thus produce a kind of virtual improvisation. The challenge in this situation is to have the computer play a duet with Velonjoro himself. Indeed, the bi-musicality described by Mantle Hood has as its main objective the ability to integrate into a group of Indigenous musicians. If the Indigenous people are willing to play with the ethnomusicologist who is trying to become bi-musical, then he or she is on the right track. If they stop playing, it means there are problems, and the causes can be multiple (rhythmic placement, pitch accuracy, accentuation, etc.). It was in May 2016 that we were able to perform duets between Velonjoro and the computer (an example is at https://www.youtube.com/watch?v=xApyhRgMSFU; we analyze another one below). Velonjoro did not stop playing, and the computer managed to follow him on several pieces. The zither player even at times explicitly validated what was coming out of the machine with easily recognizable signs, as can be seen in Figure 6. At the precise moment when the intervention of the computer must have seemed relevant to him, he turned around and smiled. One can think that the productions of the machine, at this moment at least, were culturally acceptable.

Figure 6: Photo of a zither-computer duet with Velonjoro smiling, May 2016.
[33] Velonjoro sadly passed away a few months after these experiments. Unfortunately, we were not able to interview him about the video of this duet and discuss the exact reasons for his smile. But as we have said before, we worked with him over a very long period of time (from 2000 to 2016). Ethnomusicological fieldwork consists of hours spent with musicians discussing, listening to music, and recording them while they play their instruments. Musicians don’t always talk easily about music, but they react to it a lot. Filming their reactions is an important part of the ethnomusicological method. These reactions can take the form of glances, facial expressions, and gestures. When Velonjoro first heard Djazz’s improvisation based on his data (two years before the duet in Figure 6), his reactions were very positive. On hearing the recombination of his zither formulas, he said in French “Ça va” (“It’s fine”). I then performed a few manual transformations using the pad interface applied to Djazz’s output (i.e., purely digital effects such as looping, accelerating, register jumps). He was surprised, but after a brief moment, he gave a thumbs-up (Figure 7, left) and said “Mahay raha avao longo io” (“He knows how to do it well, comrade”), then applauded (Figure 7, right) and said “Mety zao e!” (“It’s good!”). These ethnographic observations show that Djazz’s recombination process is culturally acceptable. They also show that the machine’s productions, even when they are external to Velonjoro’s culture (i.e., the pad transformations), can be integrated into it. Perhaps this is because his playing comprises two modes: a repetitive one, and a freer one that serves as interludes to relaunch the musical discourse. Velonjoro probably accepted the pad transformations as free passages of the second mode.


Figure 7: Photos of Velonjoro reacting to Djazz during a listening session, July 2014.
[34] The part played by the computer at the exact moment when Velonjoro smiled is represented as a piano-roll in Figure 8. One can see the two modes of playing mentioned above, which are sampled and re-used by the improvisation software. The periodic formulas of the first mode are played several times before turning to another one. Most of these formulas have a four-beat periodicity (as can be seen in the second half of the piano-roll in Figure 8), while some have an eight-beat periodicity. Velonjoro’s second mode of playing introduces breaks or cadenzas that occur when he performs brilliant patterns such as scales or series of alternating thirds (see F#-A# / G#-B in the first half of the piano-roll). During these passages, the fast tempo is strictly maintained but the periodicity is broken, i.e., the pulses are no longer grouped by four or by multiples of four.
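These two modes can also be characterized computationally. The following sketch is an assumption about one possible analysis, not a feature of Djazz: it measures how similar each beat’s pitch content is to the beat four pulses earlier, so that long runs of high similarity indicate the periodic formulas, while sudden drops indicate the free, cadenza-like interludes.

```python
# Illustrative analysis (not part of Djazz): detect four-beat periodic
# formulas versus free interludes via beat-to-beat similarity.

def beat_contents(notes, claps):
    """Map each beat (interval between consecutive claps) to the set of
    MIDI pitches whose onset falls inside it; notes are (onset, pitch)."""
    beats = [set() for _ in range(len(claps) - 1)]
    for onset, pitch in notes:
        for i, (a, b) in enumerate(zip(claps, claps[1:])):
            if a <= onset < b:
                beats[i].add(pitch)
                break
    return beats

def periodicity_profile(beats, lag=4):
    """Jaccard similarity between each beat and the beat `lag` pulses
    earlier; values near 1 indicate periodic formulas, drops indicate
    breaks where the four-beat grouping is abandoned."""
    profile = []
    for i in range(lag, len(beats)):
        union = beats[i - lag] | beats[i]
        similarity = len(beats[i - lag] & beats[i]) / len(union) if union else 1.0
        profile.append(similarity)
    return profile
```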
[35] The challenge of this duet with Velonjoro was for the computer to be able to ‘follow’ him in his playing. As can be seen in Figure 6, a matrix of pads controls the generation process of the Djazz software. These pads can operate as rules guiding the computation of the system. Some of the pads are time markers that allow the user to manually synchronize the phrases generated by the computer. Other pads activate memory areas in which patterns computed by the program are stored. The user of the software can listen to what Velonjoro is playing and direct the computer’s productions in real time to follow the musician. When Velonjoro smiled, he had just played an interlude-like free-style phrase as described above. The computer had followed him with the same kind of patterns. Then Velonjoro had returned to periodic formulas while the computer played the repeated thirds F#-A# / F#-A# / G#-B / G#-B… displayed on the piano-roll. The computer then returned to periodic formulas similar to those played by Velonjoro, but with a delay of a few beats. From what we observed of Velonjoro’s reactions during the many work sessions we conducted with him, it is likely that his smile indicated his satisfaction at hearing the computer join him after a break formula.

Figure 8: Piano-roll of the part played by the computer while Velonjoro smiled.
[36] At this point we can summarize the methodological ideas underlying this work (Chemillier 2014a, 2014b). A computer program that is able to generate music according to certain explicit rules, as Djazz can do, makes it possible to produce new artificial sequences, then to test them by having them listened to by musicians who are experts in the culture concerned. The explicit rules mentioned here are: 1) the recombination process, 2) operations added manually via the pad interface. All these aspects can be traced and are reproducible. If the tests fail, the comments of the musicians justifying their rejection provide precious indications for improving and refining the rules. In a way, this is a process of successive refinements, starting with initial intuitions about the structure of this music and gradually leading to its increasingly complete description. This amounts to defining successive classes of musical sequences (each class being determined by the set of rules we know at a given moment) and to progressively restrict these classes until they coincide as well as possible with the studied repertoire. We go from one step to the next by randomly picking a representative of the current class (corresponding to a state of knowledge about the rules), thanks to the Djazz generation software, and we have this representative listened to and validated by an acculturated musician, in order to highlight, if necessary, new rules and to refine the contours of the repertoire. The choice of successive representatives amounts to making spot checks in classes of artificial models. This is a general idea used in combinatorial optimization in computer science where the search spaces are generally much too large to be exhaustively traversed. One uses a heuristic method called a ‘greedy algorithm’ which consists in making choices inside these spaces to get closer to the sought solutions. We can see that in the musical field, similar methods can be used to study repertoires for which we do not know the acceptability criteria precisely enough. We test sequences at random and each sequence elicits reactions that allow us to specify these criteria.
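This refinement loop can be summarized in a few lines of pseudocode-like Python. The functions are placeholders for the processes named in the text (Djazz generation, fieldwork listening sessions), not an implemented system:

```python
# Schematic version of the successive-refinement method: sample a
# candidate from the current class of sequences, submit it to an expert
# listener, and restrict the class when the candidate is rejected.
# `generate` and `expert_judgment` are placeholders, not real APIs.

def refine_rules(rules, generate, expert_judgment, rounds=20):
    for _ in range(rounds):
        candidate = generate(rules)            # spot check in the class
        accepted, new_rule = expert_judgment(candidate)
        if not accepted and new_rule is not None:
            rules = rules + [new_rule]         # narrow the class contours
    return rules
```

Each accepted candidate leaves the class unchanged; each rejection, together with the musician’s justification, yields a new rule that brings the class closer to the studied repertoire.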
Conclusion and discussion
[37] We have described our research on the analysis of the repertoire of the Madagascar marovany zither. Since this instrument is played during trance rituals for very long periods of time, we designed sensors adapted to the instrument in order to make automatic music transcriptions of its repertoire and to collect MIDI data from the playing of Malagasy musicians. These data are then processed by a computer program devoted to improvisation which is able to play in the style of the Indigenous musicians. The outputs of the program are good enough to allow duets between a musician and the computer. The possibility of such duets involving the use of artificial intelligence may thus be regarded as an extended form of the notion of bi-musicality introduced by Mantle Hood. Musicians reacting to the outputs of the machine can shed new light on the analysis of their repertoires. By refining the generation parameters, we can hope to get closer to an optimal characterization of the music studied.
[38] What are we to think of introducing a computer into the context of music in oral tradition? From a decolonial perspective, this encounter may appear excessively Eurocentric, exemplifying a kind of caricature of the Great Divide between traditional societies and technologically developed societies (Descola 2014). Questions about AI as a decolonial project should be situated briefly in the context of the emerging literature on the subject (Adams 2021; Hassan 2022; Paraman & Anamalah 2022; Bjola 2022; Munn 2023; Farrow 2023; Zembylas 2023). For example, Rachel Adams (2021) presents two critiques of AI from a decolonial perspective: firstly, discussions of AI ethics involve a notion of universal ethics that is Euro-American in conception and can be viewed as “colonial rationality”; secondly, uses of AI technologies, such as facial recognition, are seen as “racializing dividing practices.” Corneliu Bjola (2022) tackles the issue of AI in international development policies by looking at technical aspects such as data, processes and decision-making. More generally, there is a lack of dialogue between technological and social knowledge, and companies developing AI “care little for public dialogue of any kind” (Goodlad 2023). These critiques take a global view of AI systems of very general scope. They do not deal with particular AI prototypes and their use in a person-to-person relationship, as is done in an anthropological investigation such as the one we have described.
[39] I was confronted with questions of decolonialism in another domain, ethnomathematics, in which postcolonial and decolonial critiques have been developing for longer. I worked on a subject where I also applied a similar method involving computer simulation: Malagasy sikidy divination. In Madagascar, diviners use arrays of seeds constructed according to mathematical rules. The prestige of a diviner is based on his knowledge of many arrays with certain remarkable properties (Chemillier 2008b, 2009). I wrote a computer program that calculates these arrays. In this context, the computational power of machines can be the object of accusations of a postcolonial nature, criticizing both a posture of domination related to the use of technology and an intrusion of Western rationality into an anthropological approach that should, on the contrary, strive to penetrate local ways of thinking from within. On these questions, the most diverse positions coexist across domains. In ethnomusicology, Kofi Agawu defends the idea that there is no difference between the conception of African music and that of Western music (Agawu 2003). In the field of ethnomathematics, to which the studies on divination belong, a commonly advanced idea is, on the contrary, that there is a radical opposition between the ways of thinking of societies without writing and those of modern societies, to the point that they cannot communicate (Radford 2020, 272: “there can be no real dialogue”). In reality, in our experiments, the computer only plays the role of revealing what different societies and cultures have in common, the universal aspects of the human mind. In my opinion, Velonjoro’s smile, and the interpretation of that smile informed by many working sessions with him over the years, are precisely proof that communication is possible across human cultures and communities. More specifically, in the work described above, one can say that the recombination process is a shared element, whatever the differences between the traditional musician (searching for spirits) and the Western ethnographer (testing algorithms). My experience of fieldwork, and I presume it is the same for many anthropologists, at least when they work on technical aspects of culture, is that there are many situations of evident rapport between the researcher and his or her Indigenous informants, often established beyond language through simple facial expressions, gestures and smiles.
[40] Artificial intelligence also brings new perspectives on the issues of archiving and safeguarding musical heritages. One of the challenges of ethnomusicology is indeed to collect certain repertoires that are on the verge of extinction. Audio and video recordings allow us to keep a trace of these endangered musical heritages. But in the same way that a sound or visual archive preserves the memory of someone after his or her death, improvisation software keeps a trace of a musician’s playing as a kind of avatar, with the difference that the software can imitate him or her, in some way, as if he or she were playing again post mortem. Can we go so far as to say that the software is able to ‘play’ Velonjoro despite his death? This question opens up a major can of worms and even risks causing offense. It should be noted, however, that digital life after death is a major issue in the current development of AI. Everyone has seen how deceased people can continue to send messages on social networks when others take over their accounts. In addition, there is a “digital afterlife industry” offering interaction with deceased people using AI based on their personal online data, e.g., Eter9, LifeNaut, Eternime, with unknown consequences for the disruption imposed on mourning processes (Nakagawa & Orita 2022; Savin-Baden 2019). I had concrete feedback from Malagasy experts about the avatar of Velonjoro created by Djazz. In 2022, many years after my last meeting with Velonjoro, the famous Malagasy musician Justin Vali listened to his recordings. He was struck by the very fast playing and asked to perform a duet with him. Thus, we worked together on a kind of ‘virtual duet’ using the improvisation software trained on Velonjoro’s data (http://digitaljazz.fr/2023/06/19/videos-djazz-avec-justin-vali/#sojerina). We made a live demonstration of this duet during a conference-concert at the second edition of the ICTM SoMoS Symposium in Barcelona on October 26, 2022, where Justin Vali performed with the system (Chemillier 2023).
[41] This possibility raises troubling questions about the notion of presence in music, at a time when live performances increasingly integrate virtual artifacts (hologram concerts of deceased singers, avatar concerts in video games, virtual stars like Hatsune Miku, voice synthesis of singers like Drake and The Weeknd) and when the lockdowns resulting from the COVID pandemic have favored the development of dematerialized and deterritorialized musical relationships.
[42] We have carried out various experiments related to this kind of virtual presence. One of them concerns the stride piano style represented by the great master of the genre, Fats Waller (1904-1943). I played in a duet with pianist Louis Mazetier, a specialist of the style, providing the software with transcriptions of Fats Waller made by Paul Marcorelles, whose notes I had adjusted one by one against Fats’ original recordings. Louis Mazetier’s positive judgment showed that we could somehow resurrect something of this great musician (https://www.youtube.com/watch?v=9XerUUxMa-0). More recently, for the centennial of harmonica player Toots Thielemans (1922-2016), we proposed an experiment consisting in creating a musical avatar of Toots based on his recording with Bill Evans (Chemillier et al. 2022). In addition to the MIDI features of the Djazz software used for the marovany zither, its machine learning capacities allow it to capture the phrases played by an instrumentalist directly as an audio signal and to extend them with a virtual improvisation that restores the sound of the musician, his phrasing and his accents, while playing something different (https://www.tiktok.com/@digitaljazz/video/7061712025054940421). This raises interesting questions that have not yet been answered. Will an amateur who knows the playing of the original artist well be taken in by this fake improvisation? By setting the artificial improviser to produce solos more or less like the model, at what threshold will the fan consider that these improvisations are not real?
Funding
This research is supported by the European Research Council (ERC) REACH project, under Horizon 2020 programme (Grant 883313) and by Agence Nationale de la Recherche (ANR) project MERCI (Grant ANR-19-CE33-0010).
References
Adams, Rachel. 2021. “Can artificial intelligence be decolonized?.” Interdisciplinary Science Reviews 46(1-2): 176-197.
Agawu, Kofi. 2003. Representing African Music. Postcolonial Notes, Queries, Positions. New York: Routledge.
Arom, Simha. 1998. “L’arbre qui cachait la forêt. Principes métriques et rythmiques en Centrafrique.” Liber Amicorum Célestin Deliège, Revue belge de musicologie 52: 179-195. https://doi.org/10.2307/3686924
Assayag, Gérard. 2021. “Human-Machine Co-Creativity. A Reflection on Symbolic Indisciplines.” In Artisticiel. Cyber-improvisation, edited by Bernard Lubat, Gérard Assayag, Marc Chemillier, bilingual CD-book French-English, 141-153. Phonofaune.
Assayag, Gérard, Georges Bloch, Marc Chemillier, Arshia Cont, Shlomo Dubnov. 2006. “OMax brothers: a dynamic topology of agents for improvization learning.” AMCMM ’06: Proceedings of the 1st ACM workshop on Audio and music computing multimedia, October 2006: 125-132.
Avorgbedor, Daniel. 2013. “Book Review of Remains of Ritual: Northern Gods in a Southern Land by Steven M. Friedson (2009).” Ethnomusicology Forum 22(3): 379-38. https://www.jstor.org/stable/43297407
Bailey, Derek. 1993. Improvisation, its nature and practice in music. Da Capo Press.
Baily, John. 2001. “Learning to Perform as a Research Technique in Ethnomusicology.” British Journal of Ethnomusicology Vol. 10, No. 2: 85-98.
Barillé, Bénédicte, Serge Blérald, Philippe Kergraisse. 2018. Live session Djazz. Improvisations numériques et traditions malgaches, Film produced by the Image and Audiovisual Department, EHESS. https://www.canal-u.tv/video/ehess/live_session_djazz.47181
Benetos, Emmanouil, Simon Dixon, Zhiyao Duan, and Sebastian Ewert. 2019. “Automatic Music Transcription: An Overview.” IEEE Signal Processing Magazine 36(1).
Blacking, John. 1967. Venda Children’s Songs. Johannesburg: Witwatersrand University Press.
Bjola, Corneliu. 2022. “AI for development: implications for theory and practice.” Oxford Development Studies 50(1): 78-90.
Born, Georgina. 2020. “Diversifying MIR: Knowledge and Real-World Challenges, and New Interdisciplinary Futures.” Transactions of the International Society for Music Information Retrieval 3(1): 193-204. https://doi.org/10.5334/tismir.58
Bourdieu, Pierre. 1977. Outline of a Theory of Practice. Translated by Richard Nice. Cambridge: Cambridge University Press.
Cazau, Dorian, Yuancheng Wang, Marc Chemillier, Olivier Adam. 2016a. “An automatic music transcription system dedicated to the repertoires of the marovany zither.” Journal of New Music Research 45 (4): 343-360. https://doi.org/10.1080/09298215.2016.1233285.
Cazau, Dorian, Marc Chemillier, Olivier Adam. 2016b. “Design of an Automatic Music Transcription System for the Traditional Repertoire of the Marovany Zither from Madagascar: Application to Human-Machine Music Improvisation with ImproteK.” In Trends in Music Information Seeking, Behavior, and Retrieval for Creativity, edited by Petros Kostagiolas, Konstantina Martzoukou, Charilaos Lavranos, chapter 10, 205-227. IGI Global.
Chemillier, Marc. 2008a. “Le jazz, l’Afrique et la créolisation : à propos de Herbie Hancock. Entretien avec Bernard Lubat.” Les Cahiers du jazz 5: 18-50.
Chemillier, Marc. 2008b. Les Mathématiques naturelles. Paris: Odile Jacob.
Chemillier, Marc. 2009. “The development of mathematical knowledge in traditional societies. A study of Malagasy divination.” Human Evolution 24(4): 287-299. http://www.pontecorboli.com/digital/he_archive_issues/2009-DHE4.pdf.
Chemillier, Marc. 2014a. “La machine aksak et les fascinantes formules asymétriques du petit luth de Turquie (à propos du livre de Jérôme Cler : Yayla, musique et musiciens de villages en Turquie méridionale).” L’Homme 211: 129-140.
Chemillier, Marc. 2014b. “Ruse et combinatoire tsigane. De la modélisation informatique dans les répertoires musicaux traditionnels (à propos du livre de Victor A. Stoichita : Fabricants d’émotion. Musique et malice dans un village tsigane de Roumanie).” L’Homme 211: 117-128.
Chemillier, Marc, Jean Pouchelon, Julien André, Jérôme Nika. 2014. “La contramétricité dans les musiques traditionnelles africaines et son rapport au jazz.” Anthropologie et Sociétés 38(1): 105-137. https://www.erudit.org/fr/revues/as/2014-v38-n1-as01471/1025811ar/.
Chemillier, Marc. 2017. “De la simulation dans l’approche anthropologique des savoirs relevant de l’oralité : le cas de la musique traité avec le logiciel Djazz et le cas de la divination.” Transposition. Musique et sciences sociales, Hors-série 1, Musique, histoire, société. https://journals.openedition.org/transposition/1685.
Chemillier, Marc, Ke Chen, Mikhail Malt, Shlomo Dubnov. 2022. “A posthumous improvisation by Toots Thielemans.” Accepted submission to the conference Toots Thielemans (1922-2016). A Century of Music across Europe and America. Brussels, 9-11 May 2022.
Chemillier, Marc. 2023. “Bi-musicality at the age of artificial intelligence.” SoMoS, Proceedings of the Second Symposium of the ICTM Study Group on Sound, Movement, and the Sciences, Barcelona, October 26-28, 2022.
Deschênes, Bruno. 2018. “Bi-musicality or Transmusicality: The Viewpoint of a Non-Japanese Shakuhachi Player.” International Review of the Aesthetics and Sociology of Music 49(2): 275-294.
Descola, Philippe. 2014. Beyond Nature and Culture. Foreword by Marshall Sahlins, translated by Janet Lloyd. University of Chicago Press.
Diduck, Ryan. 2018. Mad Skills: MIDI and Music Technology in the XXth Century. Repeater.
Domenichini, Michel. 1984. “Valiha.” In New Grove Dictionary of Musical Instruments, edited by Stanley Sadie, Vol. 3: 705-706. London: Macmillan.
Farrow, Robert. 2023. “The possibilities and limits of XAI in education: a socio-technical perspective.” Learning, Media and Technology 0(0): 1-14.
Gold, Raymond. 1958. “Roles in Sociological Field Observations.” Social Forces 36(3): 217-223.
Goodlad, Lauren M. E. 2023. “Editor’s Introduction: Humanities in the Loop.” Critical AI 1(1-2), https://doi.org/10.1215/2834703X-10734016
Hassan, Yousif. 2022. “Governing algorithms from the South: a case study of AI development in Africa.” AI & Society 41.
Holzapfel, Andre, and Emmanouil Benetos. 2019. “Automatic music transcription and ethnomusicology: a user study.” 20th International Society for Music Information Retrieval Conference, Delft, The Netherlands.
Hood, Mantle. 1960. “The Challenge of ‘Bi-Musicality’.” Ethnomusicology 4(2): 55-59.
Lerdahl, Fred and Ray Jackendoff. 1983. A Generative Theory of Tonal Music. Cambridge: The MIT Press.
Lewis, George. 2021. “Co-Creation: Early Steps and Future Prospects.” In Artisticiel. Cyber-improvisation, edited by Bernard Lubat, Gérard Assayag, Marc Chemillier, bilingual CD-book French-English, 106-115. Phonofaune.
Magnusson, Thor. 2019. Sonic Writing. Technologies of Material, Symbolic, and Signal Inscriptions. Bloomsbury Academic.
Munn, Luke. 2023. “The five tests: designing and evaluating AI according to indigenous Māori principles.” AI & Society 46.
Nakagawa, Hiroshi & Akiko Orita. 2022. “Using deceased people’s personal data.” AI & Society. https://doi.org/10.1007/s00146-022-01549-1
Paraman, Pradeep, Sanmugam Anamalah. 2022. “Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils.” AI & Society 46.
Prado, Yuri. 2023. Open Gasy (in progress). Film, https://vimeo.com/785788475
Prado, Yuri. 2023. Djazz (in progress). Film, https://vimeo.com/853606697
Ranjeva-Rabetafika, Yvette. 1971. “L’influence anglaise sur les cantiques protestants malgaches.” Annales de l’Université de Madagascar, Sér. Lettres et Sciences Humaines 12: 9-25. http://madarevues.recherches.gov.mg/?L-Influence-nglaise-sur-les-cantiques-protestants-malgaches
Radford, Luis. 2020. “L’ethnomathématique au carrefour de la recolonisation et la décolonisation des savoirs.” In La décolonisation de la scolarisation des jeunes inuit et des Premières Nations : sens et défis, edited by Gisèle Maheux, Glorya Pellerin, Segundo Enrique Quintriqueo Millán, Lily Bacon, chapter 10, 247-276. Montréal: Presses de l’Université du Québec.
Razafindrakoto, Jobonina. 1999. “Le timbre dans le répertoire de la valiha, cithare tubulaire de Madagascar.” Cahiers d’ethnomusicologie 2: 2-16.
Rice, Timothy. 2014. Ethnomusicology: A Very Short Introduction. Oxford University Press.
Scherzinger, Martin. 2005. Book Review of Kofi Agawu’s Representing African Music (2003), Interventions: International Journal of Postcolonial Studies Volume 7 (1): 147-150.
Schmidhofer, August. 2005. Musik – Bewegung – Trance: Zur tranceinduzierenden Wirkung des Rhythmus. Symposium ‘Im Zwischenreich – Musik und Trance’. Donau-Universität Krems, Zentrum für zeitgenössische Musik. https://www.avmm.org/biblio/images/Musik_Bewegung_Trance.pdf
Tenzer, Michael. 2015. “Meditations on Objective Aesthetics in World Music.” Ethnomusicology Vol. 59, No. 1: 1-30. https://doi.org/10.5406/ethnomusicology.59.1.0001
Zembylas, Michalinos. 2023. “A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialism.” Learning, Media and Technology 48(1): 25-37.