Sound

Vibration that travels via pressure waves in matter

Sound is a phenomenon in which pressure disturbances propagate through an elastic material medium. In the context of physics, it is characterised as a mechanical wave of pressure or related quantities (e.g. displacement), whereas in physiological-psychological contexts it refers to the reception of such waves and their perception by the brain.[1] Sensitivity to sound varies widely among organisms; the human ear is sensitive to frequencies ranging from about 20 Hz to 20 kHz. Examples of the significance and application of sound include music, medical imaging techniques, oral language and parts of science.

A drum produces sound via a vibrating membrane [a].

Definition

According to the technical standard established by ANSI/ASA S1.1‑2013, the American National Standard for Acoustical Terminology, sound is defined as:

"(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation.
(b) Auditory sensation evoked by the oscillation described in (a)."[2]

This two-part definition of sound states that sound can be taken as a wave motion in an elastic medium, making it also a stimulus, or as an excitation of the hearing mechanism that results in the perception of sound, making it a sensation.

While ANSI/ASA S1.1‑2013 provides the standard terminological definition of sound used in U.S. acoustics, and is one of the few major standards organisations to define the concept explicitly,[2][b] physics and engineering texts often instead define sound as a kind of propagating mechanical disturbance—i.e., a wave—rather than as a vibration. Such sources typically describe sound as a perturbation in mechanical properties (e.g., pressure or particle motion) that travels through a medium and consists of medium variations (compressions and rarefactions), a formulation adopted for conceptual clarity and scientific precision.[8][9][c] [d]

Acoustics

Acoustics is the interdisciplinary scientific study of mechanical waves, vibrations, sound, ultrasound, and infrasound in gaseous, liquid, or solid media. A scientist who works in the field of acoustics is called an acoustician, while an individual specialising in acoustical engineering may be referred to as an acoustical engineer.[12] An audio engineer, by contrast, is concerned with the recording, manipulation, mixing, and reproduction of sound.

Applications of acoustics are found in many areas of modern society. Subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electroacoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration.[13]

Physics

Two tuning forks of the same frequency demonstrating acoustic resonance. Striking one fork induces vibrational motion in the other through the production of air‑pressure oscillations.

Sound travels as a mechanical wave through a medium (e.g. water, crystals, air). Sound waves are generated by a sound source, such as a vibrating diaphragm of a loudspeaker. As the sound source vibrates the surrounding medium, mechanical disturbances propagate away from the source at the local speed of sound, thus resulting in a sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium's particles vary in time. At an instant in time, the pressure, velocity, and displacement vary spatially. The particles of the medium do not travel with the sound wave; instead, the disturbance and its mechanical energy propagate through the medium. Though intuitively obvious for solids, this also applies for liquids and gases. During propagation, waves can be reflected, refracted, or attenuated by the medium.[14]
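The point above, that the disturbance travels while the particles of the medium merely oscillate in place, can be illustrated with a minimal one-dimensional wave simulation. This is an illustrative sketch, not from the article; the grid size, pulse shape, and wave speed are arbitrary assumptions.

```python
import math

# Finite-difference sketch of the 1D wave equation: a pressure pulse
# propagates through the medium while each grid point only oscillates
# locally about equilibrium, as described above.
N = 400                 # grid points (arbitrary)
c = 1.0                 # wave speed (arbitrary units)
dx = 1.0
dt = 0.5 * dx / c       # time step chosen for numerical stability (CFL)
r2 = (c * dt / dx) ** 2

# Initial Gaussian pressure pulse centred at x = 100, zero initial velocity.
prev = [math.exp(-((i - 100) / 8.0) ** 2) for i in range(N)]
curr = prev[:]

for _ in range(160):
    nxt = [0.0] * N
    for i in range(1, N - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + r2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt

# The pulse has split and moved away from its starting point at x = 100.
peak = max(range(N), key=lambda i: abs(curr[i]))
print(peak)
```

With zero initial velocity the pulse splits into two counter-propagating halves, so after the loop the largest disturbance sits well away from the original position, even though no individual grid point has moved.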

The matter that supports the transmission of a sound is named the transmission medium. Media may be any form of matter, whether solids, liquids, gases or plasmas. However, sound cannot propagate through a vacuum because there is no medium to support mechanical disturbances.[15][16]

The propagation of sound in a medium is influenced primarily by:

  • A complicated relationship between the density and pressure of the medium. This relationship, also affected by temperature, determines the speed of sound within the medium.
  • Motion of the medium itself. If the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement. For example, sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction. If the sound and wind are moving in opposite directions, the speed of the sound wave will be decreased by the speed of the wind.
  • The viscosity of the medium. Medium viscosity determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.
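The second bullet above, the effect of a moving medium, amounts to simple addition or subtraction of speeds. A minimal sketch, using an assumed still-air speed of 343 m/s and an assumed 10 m/s wind:

```python
# Effective propagation speed when the medium itself moves (illustrative
# values, not from the article).
c_air = 343.0   # speed of sound in still 20 °C air, m/s
wind = 10.0     # wind speed, m/s

downwind = c_air + wind   # sound and wind moving in the same direction
upwind   = c_air - wind   # sound moving against the wind
print(downwind, upwind)
```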

When sound is moving through a medium that isn't uniform in its physical properties, it may be refracted (either dispersed or focused).[14]

Theoretical work indicates that sound waves carry an extremely small effective gravitational mass. This mass arises from nonlinear corrections to the stress–energy of the wave, and implies that sound waves both respond to gravity and generate a very weak gravitational field of their own. For ordinary equations of state, the effective mass is negative, meaning that sound waves in such media act as if they carry a tiny negative gravitational mass.[e][17] The effect is extremely small because it appears only at nonlinear order in the equations governing wave motion.[18]

Waves

Sound is transmitted through fluids (e.g. gases, plasmas, and liquids) as longitudinal waves, also called compression waves. Through solids, however, sound can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress perpendicular to the direction of propagation. Unlike longitudinal sound waves, transverse sound waves have the property of polarisation.[19]

Sound waves may be viewed using parabolic mirrors and objects that produce sound.[20]

The energy carried by a periodic sound wave alternates between the potential energy of the extra compression (in the case of longitudinal waves) or lateral displacement strain (in the case of transverse waves) of the matter, and the kinetic energy of the particles' displacement velocity in the medium.

Although sound transmission involves many physical processes, the signal received at a point (such as a microphone or the ear) can be fully described as a time‑varying pressure. This pressure‑versus‑time waveform provides a complete representation of any sound or audio signal detected at that location.

Sound waves are often simplified as sinusoidal plane waves, which are characterized by these generic properties:

  • Frequency, or its inverse, wavelength
  • Amplitude, sound pressure or intensity
  • Speed of sound
  • Direction

Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector.

To analyse audio, a complicated waveform can be represented as a linear combination of sinusoidal components of different frequencies, amplitudes, and phases.[21][22][23]
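This decomposition can be demonstrated with a plain discrete Fourier transform. The sketch below is an assumed example, not the article's figure: a "complicated" waveform is built from two sinusoids, and the DFT recovers each component's amplitude at its frequency bin.

```python
import math
import cmath

# Build a composite waveform from two sinusoids (frequencies and
# amplitudes are arbitrary assumptions for illustration).
N = 256                      # samples in the analysis window
fs = 256.0                   # sample rate, Hz -> 1 Hz bin resolution
signal = [0.8 * math.sin(2 * math.pi * 5 * n / fs)      # 5 Hz component
          + 0.3 * math.sin(2 * math.pi * 12 * n / fs)   # 12 Hz component
          for n in range(N)]

def dft_magnitude(x, k):
    """Amplitude of the k-th DFT bin of x (scaled so a pure sinusoid
    of amplitude A reads back as A)."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x)))) * 2 / len(x)

# The two component amplitudes reappear at their frequency bins.
print(round(dft_magnitude(signal, 5), 2))    # ≈ 0.8
print(round(dft_magnitude(signal, 12), 2))   # ≈ 0.3
```

Because both component frequencies fall on integer bins of the window, the amplitudes are recovered exactly up to floating-point error.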

Speed

U.S. Navy F/A-18 approaching the speed of sound. The white halo is formed by condensed water droplets thought to result from a drop in air pressure around the aircraft (see Prandtl–Glauert singularity).[24]

The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density:

  c = √(p/ρ)

This was later proven wrong, and the French mathematician Laplace corrected the formula by deducing that the phenomenon of sound travelling is not isothermal, as believed by Newton, but adiabatic. He added another factor to the equation—gamma—and multiplied √(p/ρ) by √γ, thus coming up with the equation c = √(γp/ρ). Since K = γp, the final equation came up to be c = √(K/ρ), which is also known as the Newton–Laplace equation. In this equation, K is the elastic bulk modulus, c is the velocity of sound, and ρ is the density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density.
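Laplace's correction can be checked numerically. The sketch below uses assumed sea-level values for air; it compares Newton's isothermal formula with the adiabatic Newton–Laplace formula.

```python
import math

# Newton's isothermal estimate versus the Newton–Laplace correction,
# evaluated for air at roughly 20 °C and sea-level pressure
# (illustrative values, not from the article).
p = 101325.0     # ambient pressure, Pa
rho = 1.204      # air density, kg/m^3
gamma = 1.4      # adiabatic index of air

c_newton = math.sqrt(p / rho)            # isothermal: noticeably too low
c_laplace = math.sqrt(gamma * p / rho)   # adiabatic: matches measurement
print(round(c_newton), round(c_laplace))  # ≈ 290 and ≈ 343 m/s
```

The roughly 15% discrepancy in Newton's value is exactly the factor √γ that Laplace's adiabatic treatment supplies.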

Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In 20 °C (68 °F) air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph) using the formula v [m/s] = 331 + 0.6 T [°C]. The speed of sound is also slightly sensitive to the sound amplitude (a second-order anharmonic effect), which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array). If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.
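The linear temperature approximation quoted above is easy to express directly:

```python
# Sketch of the approximation quoted above: v = 331 + 0.6·T, with T in °C.
def speed_of_sound_air(temp_c):
    """Approximate speed of sound in dry air, m/s."""
    return 331.0 + 0.6 * temp_c

print(speed_of_sound_air(20))   # 343.0 m/s at 20 °C, as stated in the text
print(speed_of_sound_air(0))    # 331.0 m/s at the freezing point
```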

In fresh water the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph). Sound moves the fastest in solid atomic hydrogen at about 36,000 m/s (129,600 km/h; 80,530 mph).[25][26]

Sound pressure level

Sound measurements

  Characteristic          Symbols
  Sound pressure          p, SPL, L_PA
  Particle velocity       v, SVL
  Particle displacement   δ
  Sound intensity         I, SIL
  Sound power             P, SWL, L_WA
  Sound energy            W
  Sound energy density    w
  Sound exposure          E, SEL
  Acoustic impedance      Z
  Audio frequency         AF
  Transmission loss       TL

Sound pressure is the difference, in a given medium, between average local pressure and the pressure in the sound wave. The square of this difference (i.e., the square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and the square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or Lp is defined as

  Lp = 20 log₁₀(p/p₀) dB

where p is the root-mean-square sound pressure and p₀ is a reference sound pressure. Commonly used reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 μPa in air and 1 μPa in water. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level.
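A worked example of this definition, using the standard airborne reference pressure of 20 μPa:

```python
import math

# Sound pressure level: Lp = 20·log10(p_rms / p_ref) dB, with the
# standard airborne reference p_ref = 20 µPa.
def spl_db(p_rms, p_ref=20e-6):
    return 20 * math.log10(p_rms / p_ref)

print(round(spl_db(1.0)))      # 1 Pa RMS -> ~94 dB SPL, as in the text
print(spl_db(20e-6))           # the reference pressure itself -> 0 dB
```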

Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.
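The A-weighting curve mentioned above has a standard closed form. The sketch below uses the commonly rounded constants from IEC 61672; treat the exact constants and the +2.00 dB normalisation as assumptions of this illustration rather than a definitive implementation.

```python
import math

# A-weighting gain in dB at frequency f (Hz), using the commonly rounded
# pole frequencies from IEC 61672. The +2.00 dB offset normalises the
# curve so that A(1 kHz) = 0 dB by definition.
def a_weighting_db(f):
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2))
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 1))   # ≈ 0.0 dB at 1 kHz
print(round(a_weighting_db(100), 1))    # low frequencies strongly attenuated
```

Adding this gain to an unweighted sound pressure level at each frequency yields the dBA figure: low-frequency content, which the ear hears poorly, contributes much less to the weighted total.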

Perception

In physiology and psychology, the term sound refers to the perceptual experience produced by acoustic stimulation, distinguishing it from the physical definition used in acoustics. The field of psychoacoustics and the broader discipline of psychophysics study how organisms detect and interpret such stimuli. Webster's dictionary reflects this dual usage by defining sound both as "the sensation of hearing" and as the "vibrational energy which occasions such a sensation."[27] This distinction explains why the question "if a tree falls in a forest and no one is around to hear it, does it make a sound?" can yield different answers depending on whether the physical or perceptual definition is applied.

The physiological reception of sound in organisms with auditory systems is limited to a finite range of frequencies. In humans, sensitivity to pitch typically spans from about 20 Hz to 20 kHz,[28]:382[f] with the upper limit decreasing with age.[28]:249 Below about 20 Hz, periodic acoustic stimuli may be perceived not as pitch but as discrete pulses or slow amplitude fluctuations.[29] The term sound is sometimes restricted to vibrations within the human hearing range,[30] though other species exhibit markedly different auditory limits. For example, domestic dogs can detect frequencies above 20 kHz.[31]

As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.

Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see below).

Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in context of the surrounding environment.

There are, historically, six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location.[32] Some of these terms have a standardised definition (for instance in the ANSI Acoustical Terminology ANSI/ASA S1.1-2013). More recent approaches have also considered temporal envelope and temporal fine structure as perceptually relevant analyses.[33][34][35]

Pitch

Pitch perception. During the listening process, each sound is analysed for a repeating pattern (orange arrows) and the results forwarded to the auditory cortex as a single pitch of a certain height (octave) and chroma (note name).

Pitch is perceived as how "low" or "high" a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics.[36][37] Every sound is placed on a pitch continuum from low to high.

For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content.

Duration

Duration perception. When a new sound is noticed (green arrows), a sound onset message is sent to the auditory cortex. When the repeating pattern is missed, a sound offset message is sent.

Duration is perceived as how "long" or "short" a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased.[38] Sometimes this is not directly related to the physical duration of a sound. For example, in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth.[39] This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it were continuous.

Loudness

Loudness perception involves the integration of sound energy over a brief time window (around 200 ms), during which greater basilar‑membrane displacement and increased nerve‑firing rates contribute to a stronger loudness signal.

Loudness is perceived as how "loud" or "soft" a sound is, and reflects the overall pattern of auditory‑nerve activity produced by a sound. In general, louder sounds create greater displacement of the basilar membrane, which stimulates more auditory‑nerve fibres and results in a stronger neural representation of loudness.[40]

Perceived loudness also depends on how sound energy is distributed over time. When a sound is very brief, the auditory system does not fully integrate its energy, so it is heard as softer than a longer sound presented at the same physical intensity. This process, known as temporal summation, operates over a window of roughly 200 ms.[41] Beyond this duration, increasing the length of the sound no longer increases its perceived loudness.

The spectral complexity of a sound can also influence loudness perception. Complex tones, which activate a broader range of auditory‑nerve fibres, are often judged as louder than simple tones (such as sine waves) even when matched for physical amplitude.[42]

Timbre

Timbre perception, showing how a sound changes over time. Despite a similar waveform, differences over time are evident.

Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe!"). This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame.[21][22][23] The way a sound changes over time provides most of the information for timbre identification. Even though a small section of the wave form from each instrument looks very similar, differences in changes over time between the clarinet and the piano are evident in both loudness and harmonic content. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano.

Texture

Sonic texture relates to the number of sound sources and the interaction between them.[43][44] The word texture, in this context, relates to the cognitive separation of auditory objects.[45] In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe; a sound which might be referred to as cacophony.

Spatial location

Spatial location represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment.[45][46] In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification.

Frequency

Ultrasound

Approximate frequency ranges corresponding to ultrasound, with rough guide of some applications

Ultrasound is sound waves with frequencies higher than 20,000 Hz. Ultrasound is not different from audible sound in its physical properties, but cannot be heard by humans. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.

Medical ultrasound is commonly used for diagnostics and treatment.

Infrasound

Infrasound is sound waves with frequencies lower than 20 Hz. Although such low-frequency sounds are too low for humans to hear as a pitch, they are perceived as discrete pulses (like the 'popping' sound of an idling motorcycle). Whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music.[47]

Notes

  1. In acoustics, the vibrating membrane of a drum functions as an acoustic membrane in the general sense (i.e. a membrane whose vibration produces or transmits sound). This is distinct from the engineered acoustic‑metamaterial membranes described in the article Acoustic membrane.
  2. Other major standards organisations, including ISO, IEC, DIN, and the SI system, primarily define acoustical quantities (such as sound pressure, sound power, and sound intensity) or specify terminology related to oscillatory behaviour, but do not provide a general conceptual definition of sound as a propagating mechanical disturbance. ISO 80000‑8 defines quantities and units,[3] ISO 18405 defines underwater‑acoustics terminology,[4] IEC 60050‑801 defines sound in terms of particle motion in an elastic medium,[5] DIN 1320 defines terminology for acoustics and acoustical quantities,[6] and the SI Brochure defines only "audible sound" as a frequency range.[7]
  3. In physics and acoustics literature, a distinction is made between local vibration (a time‑dependent disturbance at a point) and a wave (a disturbance that propagates through space over time). The disturbance moves through the medium, while the particles of the medium oscillate about an equilibrium position.[8] Several authors note that defining sound as an "oscillation" can be interpreted in ways that obscure the essential role of propagation, and may contribute to conceptual conflation between oscillations and waves.[10] Although sound waves are generated by mechanical vibrations (e.g., a loudspeaker diaphragm) and can induce vibrations in objects (e.g., a microphone diaphragm), the wave itself is the propagating disturbance in the medium, not the local vibration that produces or results from it. Some authors emphasise this distinction to avoid misidentifying sound with the vibrations that cause or result from it.[8][9]
  4. Terms such as "oscillation" and "oscillatory" are often associated with periodicity and cyclical change in physics contexts. Oscillation is commonly defined as a repetitive variation about some central value,[11] even though sound waves may be non‑periodic, transient, or broadband, such as noise and short pulses.[8][9]
  5. The same theoretical effect is also discussed in the context of phonons; see Phonon#Predicted properties.
  6. The commonly cited 20 Hz–20 kHz range typically applies only to young humans with healthy hearing. Hearing limits vary among individuals (e.g., with age or health) and differ widely across biological taxa.

References
