part of an ongoing research by
Since the beginning of human arts, acoustic musical instruments have been created in an attempt to imitate and complement the human voice. However, none of these designs managed to do what the perfect biologic musical instrument can do.
As generations went by, the original purpose was lost, and the craft of building and using clumsy, inefficient imitations was passed on. Newer generations took the legacy for granted, carrying on their shoulders the heavy burden of tradition without ever questioning it.
The global timeline of musical instrument development splits and takes a peculiar course in the West (Occident), where the invention of keyboards came to rule every aspect of musical tuning, theory and practice, as opposed to the East (Orient), where until recently oral traditions remained undisturbed by globalization.
Western musical instruments are presently threatening many aboriginal, indigenous and ethnic traditions with mass extinction, for the sole reason of availability: they are cheap, so anyone can afford one. This is the same reason equal temperament established itself in the West in the first place, at the dawn of the industrial revolution, driven by a tone-deaf tradition inherited from the Renaissance. That period, which brought about a blooming of all the arts, dictated the shape of the instruments, the form of the music and the basis of the fallacious theory that would insinuate itself and dominate, under the proud flag of standardization, up to this day.
In all this time, across the entire world, musicians, theorists and instrument builders devised patterns of sonic entities (rāgas, echos, maqāms, modes and scales), tuned their instruments, composed, wrote, performed and taught music by following, or striving to approximate, the model described by the sounds within one sound (overtone spectra, partials, harmonic/inharmonic/nonharmonic series) — without being consciously aware of doing so.
As the technology needed to split one complex sound into its simple sound components became available, it provided proof that musicians' ears and mathematicians' numbers indeed followed the same pattern and came from the same source: the sounds within one sound, the natural model for tuning everywhere and anytime. However, the complacent conservatism of ignorant tradition denied the truth of this acoustic fact, and even today domesticated musicians consider the sounds within one sound a mere coloristic factor (timbre), rather than what it really is: a structural element.
The tuning procedure described in this paper is the result of my own research and work, but does not constitute a novelty in its entirety. The basic concept of using the sounds within one sound as a model for tuning was never acknowledged and described in this manner by mainstream musicians (theorists, scientists, composers, artists). Instead, it was treated superficially, backwards, and in most cases as a peculiarity.
TO DO: short presentation of the historical timeline, starting perhaps with Ohm as the first to describe the sounds within one sound, then Helmholtz, who attributed to "coincidences of upper partials" the power to lead "to the natural consonances", and Steiner, who predicted this as the future of music; also the academic investigations from the '60s onwards focusing obsessively on temperaments (constructs irrelevant to natural acoustics), together with "spectral music" or "spectralism", which is exactly that, and finally Sethares, who calls it "local consonance".
The Harmonic Series is nature's tuning system, defined through ratios as nature's perfect law of relationships. The HarmoniComb Matrix is a generalized approach to mapping musical entities and their interactions, which best fits vocal music, and which can be adapted to any other musical instrument — as long as one peculiar parameter is taken into account: the deviation from perfect numbers.
Working on it...
Calculating fret and hole positions according to a symmetrical series of harmonics, going up one double and down one half.
Theoretically interesting, practically useless.
Calculating fret and hole positions according to the descending series of harmonics.
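One plausible way to carry out such a calculation (a sketch, not the procedure from this paper: it assumes an ideal string whose pitch is inversely proportional to vibrating length, takes the harmonic segment 8 through 16 as one octave, and uses a hypothetical 650 mm scale length):

```python
# Sketch: fret positions for a scale built on the harmonic segment 8..16
# (one possible reading of the calculation described above).
# Assumptions (not from the source): open string length L = 650 mm,
# and the ideal-string law: frequency inversely proportional to length.

L = 650.0  # scale length in mm (hypothetical)

def fret_from_nut(n, scale=L, base=8):
    """Distance from the nut for the tone n/base above the open string."""
    vibrating = scale * base / n   # shorter vibrating length -> higher pitch
    return scale - vibrating       # fret position measured from the nut

# Harmonic numbers 8 (open string) through 16 (the octave, at half string):
frets = [round(fret_from_nut(n), 1) for n in range(8, 17)]
```

The octave fret (harmonic 16 against 8) lands at exactly half the string length, as on any fretted instrument; the intermediate frets divide the remaining length by whole-number ratios instead of equal-tempered ones.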
Total Resonance is a concept defining a properly tuned or consonant musical instrument as one which is in-tune-with-itself, that is, whose tones are tuned according to the model described by its own overtone spectra.
In the case of the human voice, these overtones will all be harmonics, because they display relationships based on perfect integers. On non-biologic instruments (which display slight deviations from perfect numbers due to mechanical attributes like stiffness), the whole matrix will get stretched or compressed according to the deviation (or “inharmonicity”) factor.
When studying the HarmoniComb or any other version of the lambdoma, what we're looking at is a *generalized* approach to mapping musical entities and their interactions, a concept from which various musical parameters spring, the most obvious being timbre (or color, nuance). Different musical elements like rhythm (timing) and pitch (sound height) can be derived (or tuned) from these numerical combinations.
However, not all instruments have the sounds within one sound spaced by perfect frequency numbers. For those which deviate from perfect harmonics, the universal musical instrument HarmoniComb still can be used, but the matrix will be stretched or compressed according to the deviation factor.
The deviation from perfect harmonics is not so obvious in the first overtones of a piano tone.
The deviation from perfect harmonics becomes obvious further up the overtone series.
Overtone 27 of a piano tone has almost the same frequency as natural harmonic 28.
A term which says nothing to the musically innocent is “inharmonicity”: the deviation of the numbers from perfect integers, stretching or compressing the series of harmonics due to the stiffness of the physical oscillator (string) and its resonator. If you ever wondered what “inharmonic” means, this is it.
Piano tones, for example, deviate from perfect harmonic numbers by stretching their overtones. As we proceed further up the series, piano overtones tend to go higher in pitch than natural harmonics.
This matters for the tuning: these deviations must be taken into account when calculating the pitches of the scale. This is the only way for a piano to be in tune with itself and sound consonant.
Just as videos are rapid successions of still images, giving the (false) impression of movement, so is digital audio a very rapid succession of fixed, still points, which by their positions approximate the continuous, uninterrupted movement of acoustic and analog-electric sound.
The main perceptual difference is in the numbers: while for video the human eye needs a minimum of about 15 images (frames) every second in order to form the illusion of movement, the number of points (samples) required by the ear in order to form the illusion of continuous sound (pitch, as opposed to rhythm) is in the order of tens of thousands (44100, 48000, 96000, and so on samples per second).
Therefore, in order for sound to be digitized, primitive computers modify its continuous shape into a series of dots that can be read by such machines.
The sample rate must be (at least/more than) twice the wave's frequency. That's because a complete cycle is mathematically represented and electrically processed as a shape varying between two states (positive and negative). Although this is a description of primitive electrics, it serves the purpose of an example: the red wave 1 needs at least 2 sample points (one positive and one negative) in order to exist in the digital domain.
Another way of saying this: the wave's highest frequency component must be (less than) half the sample rate.
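The parenthesized "(at least/more than)" above hides a real edge case: a wave at exactly half the sample rate can, with unlucky phase, be sampled as all zeros and vanish entirely, which is why "more than" is the safe wording. A minimal sketch (8 samples per unit of time is an arbitrary illustrative choice):

```python
import math

fs = 8        # sample rate (arbitrary illustrative value)
f = fs / 2    # a sine at exactly half the sample rate

# Every sample lands on a zero crossing: sin(pi * n) = 0 for integer n.
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
# The tone disappears completely from the digital recording.
```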
The sonic reality, however, is more brutal. The "wave shape", meaning the lines connecting the dots, does not exist in the digital world. The lines are there only as cues, to give us an idea of how the analog sound-producing source (in most cases the speakers) will move.
A physical object as we know it is not capable of quantum-jumping between states: a speaker cannot be all the way out at one moment, then all the way in at the next, without moving through the space between these points. But the digital representation of sound, as humans have implemented it, behaves in precisely the opposite manner: it can only store information about the precise location of every individual point at precise moments in time. This limitation can of course be overcome; for infinite digital resolution, write me an email.
Now, since there is such a small number of sampling points (in our example, 8), the round shape is completely distorted. That is, a speaker will try its best to get from one point to the other, just like the shape of the digital sound wave below.
Such a poorly represented signal amounts to an infinite sum of simple (round) harmonic waves containing only odd harmonics, with amplitudes inversely proportional to the square of the frequency, An = (1/n²)·A1: the triangle wave.
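This claim can be checked numerically: summing only odd harmonics with amplitudes An = A1/n² (with alternating signs, and the conventional 8/π² scaling so the result peaks at ±1) converges to the triangle shape. A minimal sketch:

```python
import math

def triangle_partial_sum(x, terms=50):
    """Sum odd harmonics n = 1, 3, 5, ... with amplitudes 1/n^2
    (alternating signs), scaled so the ideal triangle peaks at +/-1."""
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        total += (-1) ** k * math.sin(n * x) / (n * n)
    return 8 / math.pi ** 2 * total

# The sum converges to a triangle rising linearly from 0 at x = 0
# to 1 at x = pi/2:
mid = triangle_partial_sum(math.pi / 4)   # ideal triangle value: 0.5
peak = triangle_partial_sum(math.pi / 2)  # ideal triangle value: 1.0
```

Because the amplitudes fall off as 1/n², fifty terms already land within a fraction of a percent of the straight-line triangle segments.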
TO DO: link to the chapter where these concepts are explained so that anyone could easily understand them.
In order to properly represent the round shape, many sample points have to be used.
TO DO: renumber properly! total=36
The example above is still not accurate, as the resulting shape is still not round. In fact, no matter how much the resolution increases (how many sample points get added), the points will never represent a perfectly smooth curve, even when hundreds of thousands of them are used. All we get are "good enough" approximations.
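One way to put numbers on this "good enough" quality (a sketch; the sample counts are arbitrary): measure the largest jump a speaker would have to make between adjacent samples of one cycle of a unit sine. The jump shrinks as resolution grows, but it never reaches zero.

```python
import math

def max_jump(samples_per_cycle):
    """Largest step between adjacent samples of one cycle of a unit sine."""
    pts = [math.sin(2 * math.pi * n / samples_per_cycle)
           for n in range(samples_per_cycle)]
    return max(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))

# More points mean smaller jumps, but the jumps never vanish:
jumps = [max_jump(n) for n in (8, 64, 1024)]
```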
Of course, there is a solution to this. But it requires completely new AD/DA (analog-to-digital and digital-to-analog) converters, suited for 3rd Millennium Technology.
Let's see what happens if the above rule (concerning double sampling points) is broken.
If the sampling rate is less than twice the waveform's frequency (8/6 = 1.333 < 2), the result is a completely different sonic entity: another tone. That's because the samples, the 8 white points in this example, are fixed at their positions in time. They cannot move horizontally; they can only move vertically, up and down, to represent volume (amplitude).
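This folding can be shown with the same numbers (a sketch; unit amplitudes assumed): sampling a 6-cycle wave with only 8 points per unit of time produces exactly the sample values of a 2-cycle wave (with inverted sign), so the recording now contains another tone at 8 − 6 = 2.

```python
import math

fs = 8  # 8 sampling points per unit of time, as in the example above

# A wave completing 6 cycles, sampled at only 8 points:
high = [math.sin(2 * math.pi * 6 * i / fs) for i in range(fs)]
# A wave completing 2 cycles (sign-inverted), sampled the same way:
alias = [-math.sin(2 * math.pi * 2 * i / fs) for i in range(fs)]

# Sample by sample, the two are indistinguishable: the 6-cycle tone
# has "folded" down and appears as a 2-cycle tone.
```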
TO DO: replace triangle waves with sinusoids (re-make resultant in picture above), and
add another image for triangles (use current).
Interestingly, the result amounts to a sum of simple wave motions (1st and 2nd harmonics) whose relative positions (difference of phase) have been altered. This basically means that, because of the improper digital representation given by poor sample resolution, another signal has been introduced along with the original.
The result (similar to curve D below) is (harmonic) distortion, and it might sound desirable in controlled situations, but in actual practice things are different. First of all, it is extremely rare to find in practice signals and sampling rates in harmonic numerical proportions. In our example, 3 identical simple signals are sampled across 8 sampling points: a harmonic 3:8 ratio. Secondly, there are almost no simple signals in everyday sound applications that would benefit from extra harmonic content, and these complex signals are not identical. Thirdly, and most important, the round shapes of the signal are impossible to represent with (so few) sampling points. The digital reality amounts to the triangle-like shapes in (the second image) below.
«if we superimpose the two pendular vibrational curves A and B, first with the point e of B on the point d0 of A, and next with the point e of B on the point d1 of A, we obtain the two entirely distinct vibrational curves C and D.»
Helmholtz (and Ellis), pages 119d-120a
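Helmholtz's observation can be reproduced numerically (a sketch; the 64 samples per period and the quarter-cycle shift are arbitrary choices): add the same two simple waves twice, shifting only the second wave's phase, and the two resulting curves differ point by point even though the magnitudes of their ingredients are identical.

```python
import cmath, math

N = 64  # samples per period (arbitrary)

def spectrum_mag(x):
    """Magnitudes of a plain DFT, one bin per harmonic number."""
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N for k in range(N)]

def two_waves(phase):
    """1st harmonic plus 2nd harmonic, with `phase` applied to the 2nd."""
    return [math.sin(2 * math.pi * n / N)
            + math.sin(4 * math.pi * n / N + phase) for n in range(N)]

curve_C = two_waves(0.0)
curve_D = two_waves(math.pi / 2)  # quarter-cycle shift of the 2nd harmonic

shape_diff = max(abs(a - b) for a, b in zip(curve_C, curve_D))
spec_diff = max(abs(a - b) for a, b in
                zip(spectrum_mag(curve_C), spectrum_mag(curve_D)))
# shape_diff is large: the curves are visibly different, like C and D.
# spec_diff is ~0: the ingredient magnitudes are exactly the same.
```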
All these examples use small numbers which are harmonically related.
This is extremely rare in our digital reality. Sampling numbers are of the magnitude of tens of thousands, while the frequencies used in music range from tens to thousands of Hertz for the fundamentals, and their harmonics go up to tens and maybe hundreds of thousands.
TO DO: image with a corner representing a sample line and a sample point, misrepresented both as sampling rate and as bit depth.
Last updated: 12 January 2017