For hardware engineer types re: sound controllers and/or codec chips ??

Les hlhowell at pacbell.net
Mon Sep 7 00:02:20 UTC 2009


On Sat, 2009-09-05 at 13:36 -0400, William Case wrote:
> Hi;
> 
> I am chasing down the creation or production of sound on my computer.
> Everything is fitting into place after being at it, on and off, for a
> couple of years.  However, there is one hardware answer I don't seem
> able to chase down.
> 
> Where is the sound data kept immediately on arrival at the sound card?
> 
> Whether analog or digital; whatever the source; sound arrives at the
> sound card or at on-board chip(s).  Whether the sound is in 'chunks',
> 'segments' or 'packets',  the sound data has to be stored somewhere on
> the sound card, before being coded or decoded, or before being moved to
> the DMA.
> 
> I expect that the memory requirements are small, perhaps only a few
> bytes, but none-the-less, the sound card has to (I would think) store
> the data somewhere before processing it and putting it in a DMA buffer.
> 
> Is my assumption about temporary, perhaps 1-2 ticks, storage accurate?
> 
> Where is this data stored?
> 
> Does the sound card itself have some small capacity for memory?  
> -- SRAM or DRAM?
> 
> If so, is this storage a property or function of the sound controller or
> the codec chip?
> 
> Is there a way to tell the size or nature of this memory from the
> specifications, or, the hardware definitions in lshw, or,
> cat /proc/asound/card*/pcm*/info?
> 
> Is, for some reason, this information proprietary to manufacturers?
> 
> If it is proprietary, can you give me a best guess?
> 
> In the end, this is not a terribly important issue, other than without
> an explanation, the understanding of the logic chain for hardware and
> software used by sound is broken.
> 
> 
Hi, Bill,
	Sound, like most everything else electronic, is becoming more and more
integrated.  Chip processes today are quite a bit different from what
they were even 20 months ago, so anything I might say is probably
different from the actual truth anyway.

	But here is an attempt at a high level overview...

	Sound is digitized by an A/D converter.  That is, it comes in, is
buffered to isolate the source from the actions of the A/D, is selected
or mixed with other sources (either before the A/D, or afterwards
mathematically), and is then encoded.  The encoding depends upon the
standard(s) chosen.  In the past the data was simply passed to the CPU
and the encoding or decoding took place on the main processor.  Today
many of the sound chips and advanced sound systems (USB, plug-in board,
or chipset) have encoder/decoder-capable processors built in, and many
of them support reprogramming to meet new standards, so that as formats
evolve (say, from MPEG-2 audio to MP3 or MPEG-4), the software can be
updated to manage that change.  This relieves the main processor of
that task, so it can concentrate on other things.

	At the same time, video boards are also becoming more powerful, with
array processing built in to do all kinds of 3D graphics work, and more
powerful utilities are being developed all the time (facial
recognition, tracking, movement detection, environment measurements,
etc.).  Again, the on-board processing relieves the main processor of
much of the processing burden.

	However the data is handled, it is eventually sent to the main
processor for storage or transmission, and that part may be handled by
DMA, although using DMA for an audio process is somewhat overkill,
since DMA is in essence a high-speed process.  It depends upon the
demands put on the hardware whether slow-speed processes like audio are
handled by DMA or by polled buffering.  In any event, the mixing of
multiple channels, the processing for Dolby or other encodings like
THX, or even the simulated projected sound used by some speakers with
phasing capabilities, means that the processing overhead is increasing,
and the demand for integrated processing increases with it.  Also,
board real estate is more expensive than IC real estate, so economies
of scale dictate that the chips become more and more powerful.
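
To tie this back to your original question about where the samples sit:
the codec's own FIFO is tiny and not normally visible from Linux, but
the DMA ring buffer the driver sets up in host memory is, and ALSA will
report its size once the hardware parameters have been chosen.  Here is
a minimal sketch using the ALSA C library (libasound); it only shows
the driver-side buffer, not the chip's internal storage, and the
format/rate values are just assumptions for the example (build with
gcc -o bufsize bufsize.c -lasound):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t buffer_frames, period_frames;
    unsigned int rate = 48000;
    int dir = 0;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* negotiate an assumed format/rate so the driver can pick sizes */
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir);
    snd_pcm_hw_params(pcm, hw);

    /* the ring buffer DMA'd to the card, and the period (interrupt) size */
    snd_pcm_hw_params_get_buffer_size(hw, &buffer_frames);
    snd_pcm_hw_params_get_period_size(hw, &period_frames, &dir);
    printf("DMA buffer: %lu frames, period: %lu frames\n",
           (unsigned long)buffer_frames, (unsigned long)period_frames);

    snd_pcm_close(pcm);
    return 0;
}

On most chipsets that buffer lives in main memory, and its size varies
with the driver and the requested latency, which squares with your
guess that only a small amount of storage is involved.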

The enemy of good sound is noise, and PCs, with their switching power
supplies, digital I/O, and something called ground bounce (where the
return path gets periodically overloaded with current, shrinking the
difference between the power rail and ground), all contribute noise.
So lots of systems now use USB or some other means to isolate the audio
system from the processing ground.

	And then a new problem crops up.  If the audio system does the
processing, how can it handle the processing noise it makes itself?
There are lots of methods, but one is to make the processing synchronous
with the sampling.  This means the noise occurs at the same time as the
ADC or DAC is switching to the next sample, so the noise is not
captured.  The other techniques involved here all have to do with
making the noise "common mode": the audio path is balanced against
ground and handled differentially, so when ground noise goes into such
an amplifier it is canceled by its own image of the opposite polarity
(this is a simplified explanation).

	So logically, the main processor will set up a process on the system
that sends and receives encoded digital streams and tells the audio
circuitry what is coming, how it is coded, how large the blocks are,
and so forth for sound output; the block size and transfer method
depend upon the current expected standard and the chipset in use.  The
audio circuitry will then process the digital input into sound output
for the speakers, etc.
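
As a rough sketch of what that setup looks like from the software side
(using the ALSA userspace API; the format, rate, and block size here
are assumptions for illustration, and the actual negotiation with the
chipset happens inside the driver):

#include <alsa/asoundlib.h>

/* play roughly one second of stereo 16-bit silence at 48 kHz */
int play_silence(void)
{
    snd_pcm_t *pcm;
    short block[48 * 2] = {0};          /* 48 frames = 1 ms of stereo */
    int i;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return -1;

    /* one call declares format, channels, rate, and a 50 ms latency */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 48000, 1, 50000);

    for (i = 0; i < 1000; i++)
        snd_pcm_writei(pcm, block, 48); /* driver DMAs these to the card */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}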

The corresponding operation for audio in will set up the processor on
the audio board with the signals to process, the method to use, and the
size of the data blocks to transfer.  The audio input from the
microphones or other analog inputs is then sampled into digital data,
appropriately encoded, and passed back to the computer, where a waiting
process will dispose of the data in the appropriate manner for the
application.
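
The capture side is nearly the mirror image in code; again this is only
a sketch with the same assumed parameters:

#include <alsa/asoundlib.h>

/* read one block of interleaved stereo 16-bit frames from the default input */
int record_block(short *buf, snd_pcm_uframes_t frames)
{
    snd_pcm_t *pcm;
    snd_pcm_sframes_t got;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return -1;

    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 48000, 1, 50000);

    got = snd_pcm_readi(pcm, buf, frames);  /* blocks until the data arrives */

    snd_pcm_close(pcm);
    return (int)got;
}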

I hope that helps.

Regards,
Les H



