
Thread: audio signal composition

  #1 audio signal composition
    Forum Bachelors Degree
    Join Date
    Jul 2009
    Posts
    404
    Hi all - I have asked many people but have yet to find an explanation that satisfies me, so I have to ask some physics people...

    How can all the different sounds picked up by a microphone - say, from a band or orchestra - be carried and transmitted via a single electrical signal?

    In my layman's understanding, each different sound is represented by a different analogue signal that the many hairs in my ears can pick up - but I still do not see how a single input device (a sheet of paper connected to a coil and magnet that vibrates back and forth) can collect and transmit so many different frequencies...




  #2
    Suspended
    Join Date
    Apr 2007
    Location
    Pennsylvania
    Posts
    8,795
    Mathematically, any complex waveform can be constructed by superimposing sine waves of various frequencies and amplitudes. Our ears and brains are capable of extracting the frequency information from that waveform.

    http://en.wikipedia.org/wiki/Frequency_domain
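
    If you want to see this in action, here is a minimal sketch in Python (numpy assumed; the three frequencies and amplitudes are just an arbitrary example): one single-valued waveform goes in, and all three frequencies come back out.

    Code:
    import numpy as np

    # Build a "complex" waveform by superimposing three sine waves
    # of different frequencies and amplitudes (a toy chord).
    fs = 8000                        # sample rate in Hz (assumed value)
    t = np.arange(0, 1.0, 1.0 / fs)  # one second of time samples
    wave = (1.00 * np.sin(2 * np.pi * 440 * t) +
            0.50 * np.sin(2 * np.pi * 660 * t) +
            0.25 * np.sin(2 * np.pi * 880 * t))

    # The FFT recovers the frequency content from that single waveform,
    # much as the ear extracts frequency information.
    amplitudes = np.abs(np.fft.rfft(wave)) * 2 / len(t)
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    print(freqs[amplitudes > 0.1])   # -> [440. 660. 880.]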



  #3
    Forum Bachelors Degree
    Join Date
    Jul 2009
    Posts
    404
    Quote Originally Posted by Harold14370
    Mathematically, any complex waveform can be constructed by superimposing sine waves of various frequencies and amplitudes. Our ears and brains are capable of extracting the frequency information from that waveform.

    http://en.wikipedia.org/wiki/Frequency_domain
    Thanks - this is where I am coming from - the question is: how does the single movement of the microphone transmit such superimposed waveforms?

  #4
    DrRocket
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by fatman57
    Thanks - this is where I am coming from - the question is: how does the single movement of the microphone transmit such superimposed waveforms?
    Sound is a pressure wave in air. So what you sense when you listen to music is small changes in the air pressure at your ear drum as a function of time.

    A microphone has a diaphragm, like your ear drum, connected to a piezo-resistive or piezoelectric device that produces an electric signal that varies in time (ideally) just like the pressure-time signal (sound) that is applied to it.

    That signal can then be processed, transmitted, and amplified electronically and converted back to sound by speakers. Speakers use magnets and coils that receive a time-varying electrical signal and wiggle a diaphragm to produce sound -- the reverse process of a microphone.

    The ability to analyze signals in terms of sinusoids makes the mathematical analysis easier, but is actually peripheral to the physical process. It is, however, interesting mathematics -- known as "Fourier analysis" or "harmonic analysis". The techniques include Fourier series and Fourier transforms and the related Laplace transform.
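
    To make the Fourier series idea concrete, here is a toy sketch in Python (numpy assumed, all values illustrative): summing a handful of sine waves already reproduces a square wave rather closely.

    Code:
    import numpy as np

    # Fourier series of a square wave: (4/pi) * sum over odd k of
    # sin(2*pi*k*f*t) / k. A partial sum of 20 terms is already close.
    f = 100.0                         # fundamental frequency in Hz (assumed)
    t = np.linspace(0.0, 0.02, 2000)  # two periods
    approx = np.zeros_like(t)
    for k in range(1, 40, 2):         # odd harmonics 1, 3, 5, ..., 39
        approx += (4 / np.pi) * np.sin(2 * np.pi * k * f * t) / k

    ideal = np.sign(np.sin(2 * np.pi * f * t))
    mid = (t > 0.001) & (t < 0.004)   # interior of one plateau
    print(np.max(np.abs(approx - ideal)[mid]))  # small ripple away from jumps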

  #5
    Forum Professor jrmonroe
    Join Date
    Mar 2010
    Posts
    1,444
    The amplitudes of the waveforms for the various frequencies are additive, as shown below, just as the amplitudes of crisscrossing waves on the ocean are additive. Note that waveform #2 is shown at half of its correct amplitude.

    [image: component sine waves of different frequencies and their combined waveform]
    The combined waveform retains the frequency information of its component frequencies and, when this combined sonic energy enters your ear, each hair reacts only to its particular frequency.
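
    As a rough illustration of "each hair reacts only to its particular frequency", here is a toy filter-bank sketch in Python (numpy and scipy assumed; the tones and band edges are made up, and real cochlear filtering is far more subtle).

    Code:
    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 8000
    t = np.arange(0, 0.5, 1.0 / fs)
    # One combined waveform carrying two tones at once.
    mix = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 1200 * t)

    def band_energy(sig, lo, hi):
        # Energy passing a narrow band -- a crude stand-in for one hair cell.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return np.sum(lfilter(b, a, sig) ** 2)

    for lo, hi in [(250, 350), (600, 700), (1100, 1300)]:
        print(lo, hi, round(band_energy(mix, lo, hi)))
    # The 300 Hz and 1200 Hz "hair cells" respond; the 650 Hz one stays quiet.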

    Alexander Graham Bell and Thomas Watson were working with this same physics for Bell's "harmonic telegraph" when, through a serendipitous mishap, Bell realized that inventing the telephone would be much easier than he had previously imagined. I'll leave it to Bell's assistant, Thomas Watson, to explain the rest of the story to you in his own words.

    (PS -- The clunk at the start of this sound recording, apparently for synchronization purposes, and Watson's references to various pieces of equipment suggest that this was the soundtrack of an attempt to produce talking motion pictures in 1914.)
    Grief is the price we pay for love. (CM Parkes) Our postillion has been struck by lightning. (Unknown) War is always the choice of the chosen who will not have to fight. (Bono) The years tell much what the days never knew. (RW Emerson) Reality is not always probable, or likely. (JL Borges)

  #6
    Forum Bachelors Degree
    Join Date
    Jul 2009
    Posts
    404
    thanks, that does help... my problem comes from the idea expressed by jrmonroe that different hairs pick up different frequencies, which seems contradicted by what DrRocket said about how the eardrum membrane works...

    If I were to pluck a string on my acoustic guitar in a room, the sound I hear is not only what comes directly from the string but also what is reflected around the room and off other objects... the same goes for a microphone, where I can hear reverb and other effects on a recording...

    would it be safe to conclude that recordings do not capture every sound that reaches the microphone but rather, as DrRocket says, whatever sound wave happens to hit it at a particular moment in time - so I hear reverb in one frame of reference because that was the strongest sound wave to reach the microphone at that point in time, and a sound with less reverb in the next frame of reference for the very same reason?

    So a microphone will not pick up every sound, but rather a collection of waves over a time period that seems to convey accurately what a human ear standing there would hear (an ear has the same limitation in that it cannot hear everything, but since it has many hairs of different lengths it might hear more accurately, because it is a much more powerful machine)?

    [excuse the long reply, hopefully it isn't too complicated!]

  #7
    DrRocket
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by fatman57
    thanks, that does help... my problem comes from the idea expressed by jrmonroe that different hairs pick up different frequencies, which seems contradicted by what DrRocket said about how the eardrum membrane works...
    We did not really contradict one another. jrmonroe simply went into more detail on the physiology of the ear. I confined my explanation to the external physics.

    http://en.wikipedia.org/wiki/Ear

    Quote Originally Posted by fatman57
    If I were to pluck a string on my acoustic guitar in a room, the sound I hear is not only what comes directly from the string but also what is reflected around the room and off other objects... the same goes for a microphone, where I can hear reverb and other effects on a recording...

    would it be safe to conclude that recordings do not capture every sound that reaches the microphone but rather, as DrRocket says, whatever sound wave happens to hit it at a particular moment in time - so I hear reverb in one frame of reference because that was the strongest sound wave to reach the microphone at that point in time, and a sound with less reverb in the next frame of reference for the very same reason?
    I think you are using "frame of reference" in an unconventional way.

    The microphone reacts to the pressure-time function at the location of the microphone. The function at some other spatial point could be different due to reflections of sound in the room.
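
    A toy sketch of that point in Python (numpy assumed; the delays and gains are invented): at the microphone's location, the direct sound and its reflections simply add, giving one pressure value per instant.

    Code:
    import numpy as np

    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    # Direct sound: a decaying 440 Hz "pluck" (toy source signal).
    direct = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

    def delayed(sig, seconds, gain):
        # A reflection: the same sound, arriving later and quieter.
        n = int(seconds * fs)
        return gain * np.concatenate([np.zeros(n), sig[:-n]])

    # Pressure at the microphone = direct sound + reflections, summed.
    at_mic = direct + delayed(direct, 0.030, 0.5) + delayed(direct, 0.055, 0.3)
    print(at_mic.shape)  # still one single-valued function of time: (8000,)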

    Quote Originally Posted by fatman57
    So a microphone will not pick up every sound, but rather a collection of waves over a time period that seems to convey accurately what a human ear standing there would hear (an ear has the same limitation in that it cannot hear everything, but since it has many hairs of different lengths it might hear more accurately, because it is a much more powerful machine)?
    A microphone, like the ear, is a mechanical device, with components having mass and inertia. It does not react equally to all frequencies, and therefore has limited range and introduces some distortion.
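
    A toy sketch of a non-flat frequency response in Python (numpy and scipy assumed; the 10 kHz roll-off is an invented figure, not any real microphone's spec):

    Code:
    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 48000
    t = np.arange(0, 0.1, 1.0 / fs)
    # Diaphragm mass and inertia modeled crudely as a low-pass filter.
    b, a = butter(2, 10000 / (fs / 2))

    for f in (1000, 15000):
        tone = np.sin(2 * np.pi * f * t)
        out = lfilter(b, a, tone)
        print(f, round(np.max(np.abs(out[len(out) // 2:])), 2))
    # The 1 kHz tone passes almost unchanged; the 15 kHz tone is attenuated.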


  #8
    Forum Bachelors Degree
    Join Date
    Jul 2009
    Posts
    404
    sounds good to me - please excuse the pun!

    I appreciate that a microphone 'does not react equally to all frequencies', but I was referring to the way the sound is received, which would suggest that even fewer frequencies are recorded than might be perceived by a human ear.

  #9
    Forum Radioactive Isotope MagiMaster
    Join Date
    Jul 2006
    Posts
    3,440
    There's an additional complication if the recording is digitized. Rather than go into all the details myself, I'll just point out where you can read more: http://en.wikipedia.org/wiki/Nyquist_frequency

    Ok, maybe that page is a bit overwhelming. The point is that digitizing a waveform will lose information about frequencies above half the sampling rate.

    All the other problems mentioned still apply in any case.
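
    A small Python demonstration of that (numpy assumed): a tone above half the sampling rate yields exactly the same samples as a lower tone, so the two are indistinguishable after digitizing.

    Code:
    import numpy as np

    fs = 1000                          # sample rate; Nyquist limit is 500 Hz
    t = np.arange(0, 1.0, 1.0 / fs)

    # A 700 Hz tone sampled at 1000 Hz aliases to |700 - 1000| = 300 Hz.
    high = np.sin(2 * np.pi * 700 * t)
    alias = np.sin(2 * np.pi * 300 * t)
    print(np.allclose(high, -alias))   # True: same samples, opposite sign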

  #10
    DrRocket
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by MagiMaster
    There's an additional complication if the recording is digitized. Rather than go into all the details myself, I'll just point out where you can read more: http://en.wikipedia.org/wiki/Nyquist_frequency

    Ok, maybe that page is a bit overwhelming. The point is that digitizing a waveform will lose information about frequencies above half the sampling rate.

    All the other problems mentioned still apply in any case.
    It is more complicated than that. If you sample but don't first filter to limit the bandwidth, you have no idea what the sampled data means.

    Also, there is a specific algorithm, involving a series of "sinc" functions, required to reconstruct the signal from the sampled data. If you sample a signal that is not band-limited, reconstruct using that algorithm, and just pretend that the signal is band-limited at half your sample rate, the maximum error is twice the integral of the absolute value of the Fourier transform of the signal outside the filtered band (which could be enormous). If you just "connect the dots", the error, even for a band-limited signal, can be arbitrarily large.

    You will have a very difficult time finding this in the literature. I know of no source for the "connect the dots" result, and only one obscure one for the error estimate on non-band-limited signals.

    Bottom line: An anti-aliasing filter is essential if you want the data to make sense.
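
    For the curious, here is a minimal sketch of the sinc reconstruction in Python (numpy assumed; truncated to finitely many samples, so it is only approximate):

    Code:
    import numpy as np

    fs = 100                        # sample rate for a band-limited test tone
    n = np.arange(-50, 50)          # sample indices
    samples = np.sin(2 * np.pi * 13 * n / fs)  # 13 Hz < fs/2, so recoverable

    def reconstruct(ts):
        # Whittaker-Shannon: each sample contributes a shifted sinc;
        # their sum recovers the signal between the sample points.
        return sum(s * np.sinc(fs * ts - k) for k, s in zip(n, samples))

    ts = 0.123                      # an instant between sample points
    print(reconstruct(ts), np.sin(2 * np.pi * 13 * ts))  # nearly equal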

  #11
    Suspended
    Join Date
    Jul 2007
    Location
    Manchester, England
    Posts
    107
    A remarkably informative and neat picture in jrmonroe's post. I wish I knew how to post things like that.

  #12
    Forum Masters Degree Twit of wit
    Join Date
    Jan 2010
    Posts
    553
    What is often not mentioned is that not only does the bandwidth have to be limited, but the dynamic range also has to be limited by adding noise (dither).
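
    A toy sketch of that in Python (numpy assumed; a crude uniform dither rather than the TPDF dither real converters use): without dither the quantization error is correlated with the signal, i.e. distortion; with dither it becomes signal-independent, noise-like hiss.

    Code:
    import numpy as np

    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    quiet = 0.3 * np.sin(2 * np.pi * 440 * t)  # low-level signal
    step = 0.25                                # coarse quantizer step

    # Plain quantization: the error is a deterministic function of the signal.
    plain = step * np.round(quiet / step)

    # Dithered quantization: add noise before rounding.
    rng = np.random.default_rng(0)
    dither = rng.uniform(-step / 2, step / 2, t.shape)
    dithered = step * np.round((quiet + dither) / step)

    corr = lambda e: np.corrcoef(quiet, e)[0, 1]
    print(corr(plain - quiet), corr(dithered - quiet))  # nonzero vs. near zero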
