(updated 2018-06-24 23:10 utc by goh kawai)


This is the full version of the paper that will appear in the IEICE Technical Report, 2018-07-25. This full version of the paper includes audio files, more figures, medium-resolution color images, and URL links to acoustic tools. The Technical Report paper was reduced to meet the IEICE file size limit of 3 megabytes per article.

To cite this article: Goh Kawai (2018) "Musicians deserve to know what acoustic engineering has to offer". IEICE Technical Report 2018-07-24/5.

Musicians deserve to know what acoustic engineering has to offer
Hokkaido University, Center for Language Learning, Sapporo, Hokkaido 060-0817 Japan
email: goh@kawai.com

I seek to motivate acoustic engineers to assist manufacturers and students of musical instruments by sharing knowledge of acoustical analyses and tools. This article is not an engineering paper that solves acoustic problems, a scientific paper that tests hypotheses, or a tutorial on acoustic engineering. I state this because this article appears in a publication for acoustic engineers where such articles are expected.


basic knowledge of acoustics, tools for acoustic analyses, learning music

Hokkaido University, Center for Language Learning, Kita 17, Nishi 9-1-1, Kita-ku, Sapporo, Hokkaido 060-0817 Japan
email: goh@kawai.com





1. Introduction
4 years ago, I started taking music lessons for the first time in my life. I was surprised to find that tools for acoustical analyses (such as software applications for displaying spectrograms) are rarely used in music education. I wondered why manufacturers of musical instruments do not publish frequency response graphs or acoustic impedance charts for their products. I was puzzled as to why articles that review products for music magazines or online publications do not describe instruments in terms of acoustic physics. I was amazed that retail stores distinguish beginner and professional instruments mostly based on price. Some retail stores carry instruments tested and recommended by professional players (this practice seems common in Japan but not so much in the USA). Customers lack objective, quantifiable knowledge of the product's capabilities.

I believe that science and engineering can help aspects of music. I propose to augment, not supplant, subjective evaluation of the sound of musical instruments and performance with objective measurements. Many software applications can evaluate the musical performance of both instruments and players. The music community can benefit from knowledge of acoustic engineering, which many readers of this article can provide.

This article begins by describing the extent of technology use in the music community at this time (section 2). Then I show examples of analyses and tools that acoustic engineers can demonstrate and explain to the music community (section 3). I end with a technology that might not help (section 4), and a conclusion (section 5).

2. Technology in use today
Hardware devices and software applications for acoustic training and analysis that are in current use by musicians include the following.

2.1.1. Metronomes
Almost every musician owns an electronic stand-alone metronome (e.g., Seiko "SQ70") or software application (e.g., Xiao Yixiang "Pro Metronome"). Software metronomes tend to be customizable. Still sold today are spring-wound mechanical metronomes that adjust tempo by sliding weights along a pendulum arm (e.g., Seiko "WMP2000").

Visible indicators are essential because metronome clicks are often inaudible when playing loud instruments or in a band. Analog metronomes swing arms. Some electronic metronomes flash light signals.
Metronomes do not indicate whether the player is playing ahead, on, or behind the beat. I address this limitation in section 3.1.2.

2.1.2. Chromatic tuners
Widely popular are electronic devices that combine functions of chromatic tuner and metronome (e.g., KORG "TM-60"). These functions are often used before performance to tune the instrument. Tools to train musicians in pitch perception and production are discussed in section 2.1.3.

Speech engineers might be surprised that electronic chromatic tuners cost much less and operate much faster than pitch measurement devices for spoken language science (e.g., Laryngograph "microProcessor Speech Studio"). The music industry enjoys a user base that is almost certainly more populous than that of acoustic science!
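Software pitch measurement is inexpensive because the core computation is small. Below is a minimal autocorrelation pitch estimator in Python (numpy only); this is my own sketch for illustration, not the algorithm of any particular tuner, and commercial tuners use more robust methods:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=2000.0):
    """Estimate the fundamental frequency from the autocorrelation peak."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]           # keep non-negative lags only
    lag_min = int(sample_rate / fmax)      # shortest period considered
    lag_max = int(sample_rate / fmin)      # longest period considered
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# A synthetic concert Bb (approximately 233 Hz), 0.1 seconds at 44.1 kHz
sr = 44100
t = np.arange(0, 0.1, 1 / sr)
tone = np.sin(2 * np.pi * 233.08 * t)
print(estimate_pitch(tone, sr))            # close to 233 Hz
```

The same idea, refined with interpolation and noise handling, runs comfortably in real time on a phone, which is why chromatic tuners are cheap.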

2.1.3. Pitch trainers
Musicians train their aural relative pitch perception (that is, their ears) using interval trainers (e.g., MusicTheory "Tenuto", Subree "EarBeater"), and train their pitch production (that is, their voice or instrument) using pitch production trainers (e.g., Roland "Vocal Trainer VT-12"). These 2 training functions can be combined into a pitch perception and production trainer, where (a) the software application plays an interval, chord, or melody, (b) the student plays what they heard on their instrument, and (c) the software application grades the student's rendition (e.g., SuperNonStop "Play by Ear").

Karaoke machines superficially resemble pitch production trainers because modern karaoke machines grade novice singers based on pitch, dynamics, and timing. Although singers can train themselves to score high on karaoke machines, karaoke machines are not educational devices in the strict sense because (a) the manufacturers do not publish their grading algorithms (i.e., singers do not know what skills are being measured), and (b) the entire song is graded as opposed to individual phrases or notes. Regardless, the gaming aspect of karaoke machines is irresistible, and has been incorporated into a recent music production trainer for brass (Yamaha "KanKara!").

2.1.4. Rhythm trainers
Musicians learn the relative lengths of musical notes and their various combinations using rhythm trainers (e.g., Rolfs Apps "Rhythm Sight Reading Trainer", Jesse Clark "practicesightreading.com").

An arcade game, also available as a software application, plays music and visually signals the player to hit a taiko drum with drumsticks (Bandai Namco "Taiko Drum Master"). The game is musically exciting, probably trains motor control, and may implicitly teach rhythmic patterns.

2.1.5. Audio recorders
Popular choices are battery-operated stand-alone devices that record stereo sound sources with moderately high sound-pressure level (SPL) in MP3 or WAV file formats (e.g., Zoom "H1N").

Also popular are audio recording applications for smartphones and tablets (e.g., Turbokey Studio "Recorder Plus").
Professional equipment used at recording studios is sophisticated, expensive, and requires trained technicians. Consumer-grade tools are perfect for students of music.

2.1.6. Audio players
Listening to performances is convenient and ubiquitous in the digitally networked world (e.g., YouTube, iTunes). The advantages are compelling.
Some audio recorders play back sound at speeds faster or slower than real-time without altering pitch (e.g., audio recorder device: Olympus "LS-100", audio player software: VideoLAN "VLC media player").

2.1.7. Tools for the disabled
Technology helps challenged students. An example is a tablet allowing a disabled boy to play drums in a band (https://www.youtube.com/watch?v=8Dt6zW2K8i4).

3. Acoustic technology that might help musicians
Section 2 looked at the technology for acoustic training and analysis that is used by musicians today. Section 3 describes how students of music can benefit further from technology to analyze acoustic signals of their musical instruments, practice renditions, or performances.

3.1.1. Spectrograms
The spectrogram is the number one tool I urge acoustic engineers to explain to music students. Viewing acoustical signals in the frequency and time domains offers outstanding benefits for learning music. Below are some examples.

3.1.2. See the duration of musical notes
The length of dotted notes, triplets, and a series of 1/8 notes in a scale all become clearer by looking at the time duration of each note. I confess I cannot hear the difference (maybe other students struggle also?), but I can see the difference.

Common questions faced by beginning students of music include the following: Am I playing in time with the metronome? Are my triplets 2/3 the duration of a 1/8 note? Is my dotted 1/4 note the duration of 3 1/8 notes? Are all my 1/8 notes the same duration?
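These questions reduce to simple arithmetic on the beat duration. A sketch in Python (the tempo of 60 beats per minute is an arbitrary example, matching the figures below):

```python
# Target note durations at a quarter note = 60 beats per minute
tempo_bpm = 60
beat = 60.0 / tempo_bpm                  # one 1/4 note, in seconds
durations = {
    "1/4":            beat,
    "1/8":            beat / 2,
    "1/12 (triplet)": beat / 3,
    "1/16":           beat / 4,
    "dotted 1/4":     beat * 1.5,        # equals three 1/8 notes
}
for name, seconds in durations.items():
    print(f"{name:>14}  {seconds:.3f} s")
```

Comparing these target values against note durations read off a spectrogram shows exactly which notes are rushed or dragged.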

The length of formants (i.e., thick bands corresponding to harmonics in the note, the lowest formant being the pitch of the note) shows the duration of the notes. Metronome clicks appear as narrow vertical lines, because clicks are short in duration. This level of interpretation does not require sophisticated training to understand.
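For readers curious about the mechanics behind these displays, here is a bare-bones sketch of how a spectrogram is computed (split the signal into frames, window each frame, take its Fourier transform). This is my own minimal illustration, assuming numpy; the applications discussed in section 3.1.7 add many refinements:

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_len=4096, hop=1024):
    """Short-time Fourier magnitudes: rows are frequency bins, columns are time."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len, hop)]
    magnitudes = np.abs(np.fft.rfft(frames, axis=1)).T
    freqs = np.fft.rfftfreq(frame_len, 1 / sample_rate)
    return freqs, magnitudes

# Half a second of concert Bb (about 233 Hz), then a perfect fourth up (about 311 Hz)
sr = 44100
t = np.arange(0, 1.0, 1 / sr)
tone = np.where(t < 0.5, np.sin(2 * np.pi * 233 * t), np.sin(2 * np.pi * 311 * t))

freqs, magnitudes = spectrogram(tone, sr)
peak_track = freqs[np.argmax(magnitudes, axis=0)]   # loudest bin per time slice
print(peak_track[0], peak_track[-1])                # near 233 and 311 Hz
```

Following the loudest bin over time recovers the melody, which is exactly what the eye does when reading the lowest formant in the figures below.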

The examples in this article are based on recordings of trumpet pitched in Bb, which happens to be my instrument. Various real errors are included to demonstrate how to find and interpret them, plus I cannot play without making any!

Fig 1. Spectrogram of metronome clicking at 60 beats per minute and trumpet playing 1, 2, 3, and 4 notes per beat, which are respectively 1/4, 1/8, 1/12 (triplet), and 1/16 notes. The metronome clicks are in 4/4 time, its 1st beat frequency is approximately 1.5 kHz, and its 2nd, 3rd, and 4th beat frequencies are approximately 1.1 kHz. The trumpet pitch is shown in the 1st (lowest) formant, and stays at concert Bb (approximately 233 Hz). The 2nd and higher formants are harmonics of the 1st. Brass instruments have harmonics at integer multiples of the 1st formant. The short vertical lines descending from formants are breaks in notes caused by tonguing (i.e., the player briefly interrupts their breath by flicking their tongue; as the tongue starts blocking the airflow the airflow slows and the pitch drops, when the airflow stops the sound stops, and when the tongue re-opens the airflow the sound resumes). The spectrogram shows that the player (me) sometimes plays before or after the metronome. The notes are roughly of equal duration, although I did lengthen an 1/8 note to synchronize with the metronome click.

Audio file: fig_1_half_quarter_eighth_triplet_3
Fig 2. Spectrogram of trumpet playing 1/8 notes for a major scale, along with the written music that was intended to be played. No metronome clicks. The spectrogram shows that the 1st note started flat and weak (the upswing of pitch is easily seen in the higher formants), and the last note in the 3rd measure was lengthened (because I stopped to count the notes I played).

Audio file: fig_2_eighth_note_scale_4
3.1.3. See the articulation of musical notes
Seeing spectrograms tells students what they played. For instance, brass instruments are acoustic tubes flared at both ends so that they resonate at close to multiples of the fundamental frequency. Changing pitch is accomplished by changing the length of tubing or by changing the frequency of lip vibration. Both methods produce different sounds when properly or improperly executed. Other families of instruments undoubtedly have acoustic features that can be visualized.

Fig 3. Spectrogram of trumpet notes tongued and slurred (respectively meaning with and without interruption of the airflow by flicking the tongue). Lip vibrations were altered to change pitch. Trumpet was played with all valves in open position (hence valves did not interrupt airflow). Notice the short vertical lines (some faint or absent) of tongued notes, and the formants curving up or down in slurred notes. The objective is to stabilize pitch during the note, and transition pitch smoothly to the next note. The spectrogram shows that my tonguing and slurring sometimes look (and sound) the same. I may need to work on my technique to differentiate the 2 kinds of articulations.

Audio file: fig_3_tongue_vs_slur_3
3.1.4. See tone of musical notes
Spectrograms visualize tone quality. Musicians, particularly string and wind players, strive for pleasing tone. Clean tones consist of formants and desired white noise. Dirty tones consist of unstable formants (especially noticeable in the higher harmonics because the frequency fluctuations are multiplied and become visually apparent), and undesired noise (typically caused by uncontrolled abrasion or turbulence).

Fig 4. Spectrogram of ascending trumpet notes. The notes played are concert Bb4, F4, Bb5, D5, F5, Bb6, C6, D6, and Eb6. (Trumpet players will recognize that the Eb6 should be F6. I missed.) Between Bb4 and D5, there is no noise (i.e., the background is clear) but between F5 and Eb6 noise increases (i.e., the background is foggy). The Eb6 note is unstable (it alternates between Eb6 and D6), and noise is considerable. This means that notes at and above F5 have progressively poorer tone and stability. Brass players seek to extend their pitch range by enabling their lips to vibrate over a wider range of frequencies. Tone degrades near the edges of pitch range. Spectrograms motivate brass players to play higher and cleaner notes. Perhaps students can be cheered on by keeping a photo album of spectrograms.

Audio file: fig_4_tone_3
3.1.5. See the slightest mistakes
Spectrograms (painfully) reveal the tiniest mistake. Teachers of music can ask their students to send spectrograms and recordings of homework. Listening to a recording takes as long as the performance itself, but viewing a spectrogram can be faster. I am not suggesting that teachers should not listen. I do believe that visual inspection has 2 advantages: (a) students can see their mistakes even if they cannot hear them (and maybe play the etude one more time), and (b) teachers can visually point out where improvements should be made.

Fig 5. Spectrogram showing trumpet playing twice the phrase A of "Autumn Leaves". The cursor (at the crosshair location) shows a wrong note being played. 5 measures later, I repeat the mistake. The wrong pitch is easily identified when listening to the recording. Spectrograms are displayed in real-time. By watching the spectrogram while playing, the student can aurally and visually confirm mistakes and successes.

Audio file: fig_5_al_mistook_2_times_2
3.1.6. See rhythm
Spectrograms help visualize rhythmic patterns. Sight-reading music depends more on understanding rhythm than pitch, partly because beginning students can convert written musical notes to sound but are incapable of comprehending the relative time duration of each note. Musical notes (which are loosely analogous to phonetic phones in human language) combine into chunks or phrases (which roughly correspond to syllables or words in human language). Students of music need to learn the spelling of words, as it were.

When I want to sight-read a new etude, I can play the piece and look at the spectrogram to find the relative lengths of the notes. By correcting the lengths such that they match the written music, I can learn part of the song. Although I will not be perfect, I would play better than by relying solely on ear. Without a visual reference, I would need to listen to a demonstration, which detracts from the learning experience because the demonstration gives the answer instead of allowing the student to learn by trial and error.

Fig 6. Spectrogram of trumpet playing measures 6 through 17 of written music shown below. No metronome clicks. The spectrogram shows that most of my notes are of the wrong length. The music is from Arban (1864) "Complete Conservatory Method for Trumpet" ("the art of phrasing, etude 14"), and is in the public domain.

Audio file: fig_6_arban_phrasing_14_oyt_ending_3
3.1.7. How to obtain software applications that create spectrograms
Acoustic engineers have access to professional-grade hardware and software. Music students can use consumer-grade software applications at a fraction of the cost (sometimes free) and enjoy the same benefits as engineers and scientists. Dedicated hardware devices for acoustic spectral analysis are for specialized use (e.g., sonar), not for hobbyists or musicians.

Unlike spectrograms that acoustic engineers often use, some consumer-oriented spectrograms place the time axis vertically (the graph's origin is at the upper-left corner), and call the spectrogram the waterfall display (so named because the formants look like streams of falling water). Sonar operators would be familiar with this time-vertical orientation. The frequency axis scale might be linear, logarithmic, or musical (i.e., chromatic notation), of which the last is readily understandable to musicians.
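Converting between frequency and the musical (chromatic) axis is a small calculation that acoustic engineers can show to musicians. A sketch assuming equal temperament and A4 = 440 Hz (the note-naming convention follows MIDI numbering and uses sharps, so Bb is printed as A#):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq, a4=440.0):
    """Map a frequency to the nearest equal-tempered note and its offset in cents."""
    semitones = 12 * math.log2(freq / a4)   # signed distance from A4 in semitones
    nearest = round(semitones)
    cents = 100 * (semitones - nearest)     # deviation from the nearest note
    midi = 69 + nearest                     # the MIDI number of A4 is 69
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents

print(freq_to_note(233.08))   # nearest note is A#3 (the same pitch as Bb3)
print(freq_to_note(440.0))    # A4, 0 cents
```

This is essentially what a chromatic tuner and a musically-scaled spectrogram axis both compute.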

I suggest that you explain to your music friends the tools you use yourself. Below are a few suggestions based on my use.

For tablets (and smartphones, except their tiny screens are hard to see), I use Audio Analyzer ($14.99, http://iaudioapps.com/) with an Apple 12-inch iPad. Figures 1 through 6 of this article were created using Audio Analyzer. Audio Analyzer (a) runs in real time, which means students see their music analyzed as they play, (b) displays spectrograms, spectra, and sound amplitude, (c) offers various settings, although not as customizable as professional tools, and (d) shows musical note names in concert pitch by pointing at the spectrogram with a cursor. Speech engineers may find it quirky that the graph's origin is at the lower-right corner, and time markers have negative meaning (e.g., "2" means "2 seconds before the end of the recording"). Shortcomings include (a) the lowest sampling rate is 44.1 kHz (much higher than necessary for practicing musical instruments), (b) the recording buffer is small (merely 30 to 60 seconds when settings are set to typical values), (c) the audio is output only from internal speakers (not to Bluetooth loudspeakers), (d) the display is fixed in landscape orientation, and (e) screen magnification settings are not remembered when the application restarts.

An alternative choice for tablets and smartphones is Spectrum View Plus ($7.99, http://www.oxfordwaveresearch.com/products/spectrumviewapp/). First try their free version, Spectrum View (without the Plus), that has some features of Spectrum View Plus turned off (buying Spectrum View Plus can be cheaper than unlocking individual components from Spectrum View). Their website includes a succinct and understandable explanatory manual. Their explanations of signal analyses are applicable to other spectrogram software applications.

Fig 7. Screenshot of software application Spectrum View Plus running on iOS. Audio signal is identical to Fig 2 with some preceding sound.
For desktop computers, the top choice is Audacity (free, https://www.audacityteam.org). Audacity is designed to record and edit speech and music. Audacity can import and export various audio file formats. The spectrogram is displayed in color by default, and in grayscale by selection. The user manual explains how to display spectrograms (https://manual.audacityteam.org/man/spectrogram_view.html).

Fig 8. Screenshot of software application Audacity running on MacOS. Audio signal is identical to Fig 2. The top frame shows the time waveform. The bottom frame shows the spectrogram in grayscale (in color by default).
Speech engineers (including me) may prefer Praat (free, http://praat.org). Praat adheres to a no-nonsense austere user interface and display. Praat is designed to record and label spoken language in hierarchical tiers (such as the phone, word, and intonation layers). Praat can handle long audio files. Note that the manuals are not written for musicians. Hands-on help is needed to learn how to use Praat.

Fig 9. Screenshot of software application Praat running on MacOS. Audio signal is identical to Fig 2. The top frame shows the time waveform. The bottom frame shows the spectrogram in grayscale. Overlaid on the spectrogram is the pitch (its value is read off the right vertical axis, here labeled 75 - 630 Hz).
3.1.8. Spectrograms help students of music
I suspect there is a substantial population of music students who, like me, have difficulty hearing music. These students do not have strong ears for music, but are probably capable in other ways. Aiding such students with visual representations of music could ease their learning. Instead of quitting, these students might continue to learn, appreciate, and support music.

3.1.9. Spectra
Spectra of musical sounds describe the timbre of the instrument.

For instance, brass wind instruments are characterized as having a "dark" or "bright" timbre, which many players prefer for classical and jazz music respectively. Bright vs dark corresponds to the presence vs absence of energy in the higher frequencies (i.e., higher formants are louder). Playing louder on brass instruments disproportionately produces more energy in the higher frequencies (i.e., playing loud introduces selectively more energy in the higher frequencies, instead of increasing energy equally across all frequencies). Hence all brass instruments play brighter at forte than at mezzo piano. The state of an instrument playing brighter is called "sizzle" by brass players. This means spectra of such instruments should be measured at various pitches and dynamics.
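Brightness can also be quantified. One common proxy (my suggestion, not a measure used by any of the tools above) is the spectral centroid, the amplitude-weighted mean frequency of the spectrum. A sketch with two synthetic tones whose harmonic weightings are illustrative assumptions:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency: the higher the centroid, the brighter."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Two synthetic tones at the same pitch with different harmonic weightings:
# "dark" harmonics decay quickly, "bright" harmonics decay slowly
sr = 44100
t = np.arange(0, 0.5, 1 / sr)
f0 = 233.0
dark = sum(0.5 ** k * np.sin(2 * np.pi * f0 * k * t) for k in range(1, 9))
bright = sum(0.9 ** k * np.sin(2 * np.pi * f0 * k * t) for k in range(1, 9))
print(spectral_centroid(dark, sr) < spectral_centroid(bright, sr))   # True
```

A single number like this makes "this mute darkens the sound" a testable statement rather than an impression.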

Spectrograms of steady-state sounds (i.e., sounds at the same pitch and amplitude) can be interpreted as spectra, because spectrograms are formed from a series of spectra. Most spectrogram software applications display spectra as well.

Comparing spectra of various instruments teaches us why different instruments sound different, and why sometimes different instruments sound alike. Players can compare different instruments, and choose one that is similar or dissimilar to what their friends play (e.g., orchestra players might seek to "blend in with the section" meaning their timbre should be similar to their colleagues', while a lead jazz player might want to "cut through the band" with a unique timbre).

Spectra also let us compare sound-altering devices (e.g., mutes for brass instruments).

Fig 10. Spectra of trumpet concert Bb4 at 3 dynamic levels. The figures show the same pitch Bb4 being played at amplitudes of 0, 9, and 16 dB relative to the 1st figure. As audio amplitude increases, the higher frequencies become taller (i.e., increase in amplitude) more than the lower frequencies. The tendency is most prominent in the 3rd figure that was played at forte: its higher formants are more visible and more audible.

Audio file: fig_10_1_2_3_spectrum
3.2. Frequency response and acoustic impedance
3.2.1. How to explain acoustic impedance to musicians

The University of New South Wales (UNSW) website (http://newt.phys.unsw.edu.au/music/) is an excellent resource of text and graphics that explain the acoustic physics of music to musicians. Acoustic engineers can use this website as teaching material for musical friends. UNSW states that "the art and science of music acoustics are presented here, in musician-friendly format, as is our research in music science".

One of the topics that UNSW clearly documents is acoustic impedance and its value (http://newt.phys.unsw.edu.au/jw/z.html).

Fig 11. Spectrum of acoustic impedance for Bb trumpet with all valves in open position. Obtained from UNSW (http://newt.phys.unsw.edu.au/jw/brassacoustics.html). The impedance peaks show frequencies that resonate easily for this trumpet in the valve-open configuration. The closer the peak frequencies to musical pitches, the better the instrument plays in tune. By creating charts like this for all valve configurations (there are 8 for most trumpets), we can visualize the trumpet's intonation (in music, intonation means "ability to play in tune"), and compare different valve settings ("fingerings") to achieve the best intonation.
Fig 12. Input impedance of a Bb trumpet. From Peterson (2012). The notes are written for Bb trumpet, which sounds 1 whole step below written pitch. As input impedance increases, more acoustic energy is reflected from slightly beyond the bell of the instrument, creating stronger standing waves. This allows the player to more securely produce that note. For the trumpet shown in this chart (and probably many other makes and models), the highest input impedance occurs at C6. Above C6, input impedance falls, meaning those notes resonate ("slot") poorly. Above G6, the player receives little feedback ("resistance") from the instrument to guide their pitch. Towards the lower frequency end of the chart, we see no peak for G3 or C3, which is why notes below G3 ("pedal tones") are difficult to play. Learning to play notes in low-impedance zones of the trumpet means notes below G3 and above G6 can be played. This is the acoustic basis of how playing pedal tones helps play the upper register, and vice versa.
Acoustic impedance is measured in the laboratory. Some instruments (such as brass instruments) require measurements for all configurations (such as all possible tube lengths determined by slide positions) because the acoustic characteristics vary from configuration to configuration. Measuring acoustic impedance is not a task for the hobbyist. I urge manufacturers and professional reviewers to measure acoustic impedance either by themselves or outsource to laboratories that perform measuring services.

In the following paragraphs, I talk in terms of peak heights and skirt widths. These correspond to the concepts of Q factor and frequency response in electronic filter design.
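The Q factor of a resonance peak can be estimated directly from a sampled impedance curve as peak frequency divided by the -3 dB bandwidth. A sketch with synthetic data (the Lorentzian-like peak shape, its 233 Hz center, and its 10 Hz bandwidth are illustrative assumptions, not measurements of a real instrument):

```python
import numpy as np

def q_factor(freqs, magnitude_db, peak_index):
    """Q = peak frequency / -3 dB bandwidth, estimated from a sampled curve."""
    half_power = magnitude_db[peak_index] - 3.0
    left = peak_index
    while left > 0 and magnitude_db[left] > half_power:
        left -= 1                        # walk down the low-frequency skirt
    right = peak_index
    while right < len(freqs) - 1 and magnitude_db[right] > half_power:
        right += 1                       # walk down the high-frequency skirt
    return freqs[peak_index] / (freqs[right] - freqs[left])

# Synthetic resonance centered at 233 Hz with a 10 Hz -3 dB bandwidth
freqs = np.linspace(100, 400, 3001)
f0, bw = 233.0, 10.0
magnitude = 1.0 / np.sqrt(1 + ((freqs - f0) / (bw / 2)) ** 2)
magnitude_db = 20 * np.log10(magnitude)
peak = int(np.argmax(magnitude_db))
print(q_factor(freqs, magnitude_db, peak))   # about 23 (233 Hz / 10 Hz)
```

A tall narrow peak (high Q) corresponds to the "tight slotting" described below; a low wide peak (low Q) corresponds to "loose slotting".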

3.2.2. Acoustic impedance helps musicians choose instruments
Acoustic impedance charts help choose instruments. Peaks in acoustic impedance show which notes play efficiently.

The taller the peak, the more readily the instrument resonates at that frequency, and the more efficiently that note is played (i.e., the instrument plays louder for the same acoustic input). Shorter peaks mean those notes are less efficient, and sound less loud.

The narrower the skirt of the peak, the narrower the range of frequencies (for wind instruments, reed or lip vibrations) that produce that note (brass players call this "tight slotting"). Wider skirts indicate more flexible input frequencies ("loose slotting").

If the peaks are aligned closely to the desired pitch (e.g., concert Bb4) then that instrument plays in tune for that particular configuration (e.g., trumpet valves open).

Brass players are interested in intonation because brass instrument design is a compromise (i.e., musical pitches form a geometric series while the lengths of brass instrument tubing form an arithmetic series).
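This compromise can be demonstrated with a few lines of arithmetic. A sketch (the open tube length of roughly 1.48 m is a commonly quoted figure for Bb trumpets; the analysis ignores real-world corrections such as slide adjustments):

```python
# Lowering pitch by n semitones requires the total tube length to grow
# geometrically (length * 2**(n/12)), but each valve adds a fixed length.
open_length = 1.48                       # meters, roughly a Bb trumpet

# Extra tubing each valve needs to lower the open pitch by itself:
# valve 2 lowers 1 semitone, valve 1 lowers 2, valve 3 lowers 3
extra = {n: open_length * (2 ** (n / 12) - 1) for n in (1, 2, 3)}

# Pressing valves 1 and 3 together (2 + 3 = 5 semitones) adds fixed lengths...
combined = extra[2] + extra[3]
# ...but lowering by 5 semitones actually requires this much extra tubing:
required = open_length * (2 ** (5 / 12) - 1)
print(f"combined {combined:.3f} m vs required {required:.3f} m")
# The combination is too short, so the 1+3 fingering tends to play sharp.
```

The gap between the fixed valve lengths and the geometrically growing requirement is exactly why valve combinations need compensation.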

If manufacturers published acoustic impedance charts, then customers could determine how well the instrument plays in tune. For instance, a trumpet having low peaks and wide skirts in acoustic impedance charts is an instrument that allows players to produce notes easier and to bend pitches wider, because the instrument resonates across a wider range of lip vibration frequencies. Conversely, a trumpet having tall peaks and narrow skirts in acoustic impedance charts provides greater acoustic output amplitude for the same input amplitude, and greater pitch stability because the instrument resonates across a narrower range of lip vibration frequencies.

These characteristics are not necessarily aligned with the player's skill level, but are more relevant to the player's intent. For instance, a novice might choose a forgiving low-peak wide-skirt instrument because their lip vibration is unstable, or choose a demanding high-peak narrow-skirt instrument to train themselves towards stable lip vibration. Advanced players might seek a low-peak wide-skirt flexible instrument that bends pitches readily, or play a high-peak narrow-skirt efficient instrument that projects sound farther.

3.2.3. Acoustic impedance helps manufacturers describe their products
By separately considering the capabilities of player and instrument, players can select instruments that match their needs and desires. One reason why some manufacturers of musical instruments do not publish recordings of their instruments is because the performance's acoustic signal is a product of the player, the instrument, and peripheral tools (e.g., mouthpieces, bows, sticks, picks). We want to consider these factors separately.

Although it is uncommon practice today, musical instruments can be manufactured and advertised such that musicians can compare different makes and models based on their technical specifications. Of course, the players themselves affect the sound -- ultimately, the instrument performs as nicely as the player. It would help musicians to know what the instrument itself is capable of independently of the player's skill.

I mention in passing that objective, quantitative measurements are commonplace in today's camera industry. Modulation transfer function (MTF) charts describe the resolution of camera lenses. Years ago, few manufacturers published MTF charts, but today many firms do. I hope that a similar trend occurs in the musical instrument industry.

4. Acoustic technology that might not help
Time waveforms tell us when sound starts and ends, and are useful especially for selecting segments of the recording (e.g., extracting the best take). Time waveforms are best used in conjunction with spectrograms.

I doubt that musicians would find use for oscilloscopes, because instantaneous sound waveforms tell us (in practice) only instantaneous amplitude of sound. If you have access to time waveforms, oscilloscopes are unnecessary. There is a reason why storage oscilloscopes replaced non-storage oscilloscopes.

5. Conclusion
I hope that this article educates acoustic engineers as to what musicians deserve to know. The technical concepts are familiar to you, yet the needs of musicians and the applications of technology may not be. I urge you to share your time and expertise with musicians. I am convinced that they will benefit.

6. References
Antoine Chaigne and Jean Kergomard (2016) "Acoustics of Musical Instruments" Springer, ISBN-13: 978-1493936779. English language translation of: Antoine Chaigne and Jean Kergomard (2013) "Acoustique des instruments de musique, 2nd edition".
Jürgen Meyer (2009) "Acoustics and the Performance of Music: Manual for Acousticians, Audio Engineers, Musicians, Architects and Musical Instrument Makers" Springer, ISBN-13: 978-0387095165. English language translation by Uwe Hansen of: Jürgen Meyer (2004) "Akustik und Musikalische Aufführungspraxis: Leitfaden für Akustiker, Tonmeister, Musiker, Instrumentenbauer und Architekten".
Ben Peterson (2012) "Trumpet Science: Understanding Performance Through Physics, Physiology, and Psychology" CreateSpace, ISBN-13: 978-1470089344.

7. About the author
Goh Kawai is a professor of educational engineering at Hokkaido University, Center for Language Learning. Goh designs and evaluates online learning systems for autonomous or blended learning. All of Goh's former graduate students currently teach English or Japanese language at high schools and universities. Goh has been awarded the Hokkaido University president's award for teaching excellence, the university's highest honor. Goh has a BA in linguistics (University of Tokyo), an MA in educational technology (International Christian University), and a PhD in information and communication engineering (University of Tokyo). Goh holds a teacher's license for English language at middle schools and high schools in Japan. Goh conducted research and teaching at Xerox PARC, SRI International, University of Tokyo, University of California Santa Cruz, Oregon Health & Science University, Hokkaido University, and University of Antwerp. Goh's inventions include Glexa (online learner management system), and Discourseware (multimedia conversation courseware system). For Goh's full bio and recent activities, visit https://goh.kawai.com. Goh's email address is goh@kawai.com.