1. Hi all - I have asked many people but have yet to find an explanation that satisfies me, so I have to ask some physics people.

How can all the different sounds picked up by a microphone - say from a band or orchestra - be transmitted via a single electrical signal?

In my layman's understanding, each different sound will be represented by a different analogue signal that the many hairs in my ears can pick up - but I still do not see how a single source input (a sheet of paper connected to a coil and magnet that vibrates back and forth) can collect and transmit so many different frequencies.

2.

3. Mathematically, any complex waveform can be constructed by superimposing sine waves of various frequencies and amplitudes. Our ears and brains are capable of extracting the frequency information from that waveform.

http://en.wikipedia.org/wiki/Frequency_domain
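A quick numerical sketch of this idea (pure Python; the frequencies and sample rate are illustrative choices, not from the thread): two sine waves are summed into one combined signal, and a naive discrete Fourier transform then recovers which frequencies are present - roughly the analysis the ear performs.

```python
# Two sines summed into a single waveform; a naive DFT recovers them.
import math

RATE = 1000          # samples per second (illustrative)
N = 1000             # one second of signal
F1, F2 = 50, 120     # component frequencies in Hz (illustrative)

# Single combined waveform: the superposition of two sines
signal = [math.sin(2 * math.pi * F1 * n / RATE)
          + 0.5 * math.sin(2 * math.pi * F2 * n / RATE)
          for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin of x, normalized by the length.

    For a pure sine of amplitude A exactly on bin k, this returns A/2.
    """
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im) / len(x)

# The two mixed-in frequencies stand out; an absent one reads near zero.
print(round(dft_magnitude(signal, F1), 2))   # 0.5  (amplitude-1.0 sine)
print(round(dft_magnitude(signal, F2), 2))   # 0.25 (amplitude-0.5 sine)
print(round(dft_magnitude(signal, 300), 2))  # 0.0  (never present)
```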

4. Originally Posted by Harold14370
Mathematically, any complex waveform can be constructed by superimposing sine waves of various frequencies and amplitudes. Our ears and brains are capable of extracting the frequency information from that waveform.

http://en.wikipedia.org/wiki/Frequency_domain
Thanks - this is where I am coming from - the question is how does the single movement of the microphone transmit such superimposed waveforms?

5. Originally Posted by fatman57
Thanks - this is where I am coming from - the question is how does the single movement of the microphone transmit such superimposed waveforms?
Sound is a pressure wave in air. So what you sense when you listen to music is small changes in the air pressure at your ear drum as a function of time.

A microphone has a diaphragm, like your ear drum, connected to a piezo-resistive or piezoelectric device that produces an electric signal that varies in time (ideally) just like the pressure-time signal (sound) that is applied to it.

That signal can then be processed, transmitted, and amplified electronically and converted back to sound by speakers. Speakers use magnets and coils that receive a time-varying electrical signal and wiggle a diaphragm to produce sound -- the reverse process of a microphone.

The ability to analyze signals in terms of sinusoids makes the mathematical analysis easier, but is actually peripheral to the physical process. It is, however, interesting mathematics -- known as "Fourier analysis" or "harmonic analysis". The techniques include Fourier series and Fourier transforms and the related Laplace transform.

6. The amplitudes of the waveforms for the various frequencies are additive, as shown below, just as the amplitudes of crisscrossing waves on the ocean are additive. Note that waveform #2 is shown at half of its correct amplitude.

The combined waveform retains the frequency information of its component frequencies and, when this combined sonic energy enters your ear, each hair reacts only to its particular frequency.

Alexander Graham Bell and Thomas Watson were working with the same scientific situation for Bell's "harmonic telegraph" when, through a serendipitous mishap, Bell realized that his invention of the telephone was much easier than he had previously imagined. I'll leave it to Bell's assistant, Thomas Watson, to explain the rest of the story to you in his own words.

(PS -- The clunk at the start of this sound recording (apparently for synchronization purposes) and Watson's reference to various pieces of equipment means that this was the sound track of an attempt to produce talking motion pictures in 1914.)

7. thanks, that does help. My problem comes from the idea expressed by jrmonroe that different hairs pick up different frequencies, which seems contradicted by what DrRocket expressed about how the eardrum membrane works.

If I were to pluck a string on my acoustic guitar in a room, the sound I hear is not only what comes directly from the string but also what is reflected around the room and off other objects - the same with a microphone, where I can hear reverb and other effects on a recording.

Would it be safe to conclude that recordings do not play every sound that reaches the microphone but rather, as DrRocket says, whatever sound wave happens to hit it at a particular moment - so I hear reverb in one frame of reference because that was the strongest sound wave to reach the microphone at that point in time, and a sound with less reverb in the next frame of reference for the very same reason?

So a microphone will not pick up every sound, but rather a collection of waves over a time period that seems to accurately convey what it would sound like to a human ear standing there (an ear has the same limitation in that it cannot hear everything, but since there are more hairs of different lengths it might hear more accurately, because it is a much more powerful machine)?

[excuse the long reply, hopefully it isn't too complicated!]

8. Originally Posted by fatman57
thanks, that does help. My problem comes from the idea expressed by jrmonroe that different hairs pick up different frequencies, which seems contradicted by what DrRocket expressed about how the eardrum membrane works.
We did not really contradict one another. jrmonroe simply went into more detail on the physiology of the ear. I confined my explanation to the external physics.

http://en.wikipedia.org/wiki/Ear

Originally Posted by fatman57
If I were to pluck a string on my acoustic guitar in a room, the sound I hear is not only what comes directly from the string but also what is reflected around the room and off other objects - the same with a microphone, where I can hear reverb and other effects on a recording.

Would it be safe to conclude that recordings do not play every sound that reaches the microphone but rather, as DrRocket says, whatever sound wave happens to hit it at a particular moment - so I hear reverb in one frame of reference because that was the strongest sound wave to reach the microphone at that point in time, and a sound with less reverb in the next frame of reference for the very same reason?
I think you are using "frame of reference" in an unconventional way.

The microphone reacts to the pressure-time function at the location of the microphone. The function at some other spatial point could be different due to reflections of sound in the room.

Originally Posted by fatman57
So a microphone will not pick up every sound, but rather a collection of waves over a time period that seems to accurately convey what it would sound like to a human ear standing there (an ear has the same limitation in that it cannot hear everything, but since there are more hairs of different lengths it might hear more accurately, because it is a much more powerful machine)?
A microphone, like the ear, is a mechanical device, with components having mass and inertia. It does not react equally to all frequencies, and therefore has limited range and introduces some distortion.


9. sounds good to me - please excuse the pun!

I appreciate that a microphone 'does not react equally to all frequencies' but was referring to the way the sound is received, which would indicate that even fewer frequencies are recorded than might be perceived by a human ear.

10. There's an additional complication if the recording is digitized. Rather than go into all the details myself, I'll just point out where you can read more: http://en.wikipedia.org/wiki/Nyquist_frequency

Ok, maybe that page is a bit overwhelming. The point is that digitizing a waveform will lose information about frequencies above half the sampling rate.

All the other problems mentioned still apply in any case.
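A tiny numerical sketch of that information loss (pure Python; the 900 Hz / 1000 Hz numbers are illustrative choices): a sine above half the sampling rate produces the very same sample values as a lower-frequency sine, so the original frequency cannot be recovered from the samples.

```python
# Aliasing demo: a 900 Hz sine sampled at 1000 Hz gives samples that
# match a 100 Hz sine exactly (up to sign), since
#   sin(2*pi*900*n/1000) = sin(2*pi*n - 2*pi*100*n/1000)
#                        = -sin(2*pi*100*n/1000)   for integer n.
import math

RATE = 1000  # samples per second (illustrative)
high = [math.sin(2 * math.pi * 900 * n / RATE) for n in range(RATE)]
low  = [math.sin(2 * math.pi * 100 * n / RATE) for n in range(RATE)]

# high[n] is exactly -low[n], so high + low cancels to rounding error.
max_diff = max(abs(h + l) for h, l in zip(high, low))
print(max_diff)  # ~0, up to floating-point rounding
```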

11. Originally Posted by MagiMaster
There's an additional complication if the recording is digitized. Rather than go into all the details myself, I'll just point out where you can read more: http://en.wikipedia.org/wiki/Nyquist_frequency

Ok, maybe that page is a bit overwhelming. The point is that digitizing a waveform will lose information about frequencies above half the sampling rate.

All the other problems mentioned still apply in any case.
It is more complicated than that. If you sample but don't first filter to limit the bandwidth, you have no idea what the sampled data means.

Also, there is a specific algorithm, involving a series of "sinc" functions, required to reconstruct the signal from the sampled data. If you sample a signal that is not band-limited, reconstruct using that algorithm, and just pretend that it is band-limited at half your sample rate, the maximum error is twice the integral of the absolute value of the Fourier transform of the signal outside the filtered band (which could be enormous). If you just "connect the dots", the error, even for a band-limited signal, can be arbitrarily large.

You will have a very difficult time finding this in the literature. I know of no source for the "connect the dots" result, and only one obscure one for the error estimate on non-band-limited signals.

Bottom line: An anti-aliasing filter is essential if you want the data to make sense.
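For concreteness, here is a minimal sketch of the sinc-series reconstruction described above (pure Python; the sample rate, frequency, and evaluation point are my own illustrative choices). A properly band-limited, properly sampled sine can be rebuilt between its sample points, with only a small error from truncating the (in principle infinite) sum:

```python
# Whittaker-Shannon reconstruction: rebuild a band-limited signal
# between its samples as a sum of sinc functions.
import math

RATE = 100   # samples per second (illustrative)
F = 10       # signal frequency in Hz, well below RATE/2
N = 400      # 4 seconds of samples

samples = [math.sin(2 * math.pi * F * n / RATE) for n in range(N)]

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    """Sinc-series interpolation at time t (seconds), truncated to the record."""
    return sum(s * sinc(t * RATE - n) for n, s in enumerate(samples))

# Evaluate between sample points, away from the record's edges
# (truncating the infinite sinc sum causes small edge errors).
t = 2.005
err = abs(reconstruct(t) - math.sin(2 * math.pi * F * t))
print(err)  # small: truncation error only
```

Trying the same reconstruction on samples of a signal above RATE/2 would produce a confidently wrong answer, which is the point about filtering first.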

12. A remarkably informative and neat picture in jrmonroe's post. I wish I knew how to post things like that.

13. What is often not mentioned is that not only does the bandwidth have to be limited, but the dynamic range also has to be limited by adding noise (dither).
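A toy illustration of why adding noise before quantizing helps (pure Python; the step size, signal level, and seed are my own illustrative choices): a signal smaller than one quantizer step vanishes entirely without dither, while with dither the quantized output still carries the signal, traded against a noise floor.

```python
# Dither demo: quantize a sine whose amplitude is half the quantizer step,
# with and without uniform dither noise added first.
import math
import random

random.seed(0)   # reproducible noise for the demonstration
STEP = 0.1       # quantizer step size (illustrative)

def quantize(x):
    """Round x to the nearest quantizer level."""
    return STEP * round(x / STEP)

# A sine whose amplitude is only half a quantizer step
signal = [0.05 * math.sin(2 * math.pi * n / 50) for n in range(200)]

plain = [quantize(s) for s in signal]
dithered = [quantize(s + random.uniform(-STEP / 2, STEP / 2)) for s in signal]

# Without dither, every sample rounds to zero: the signal is gone.
plain_peak = max(abs(v) for v in plain)
print(plain_peak)  # 0.0

# With dither, the output still correlates with the signal (plus noise).
corr = sum(d * s for d, s in zip(dithered, signal))
print(corr)  # positive
```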
