US20100324711A1 - Frequency normalization of audio signals - Google Patents

Frequency normalization of audio signals

Info

Publication number
US20100324711A1
US20100324711A1 (application US12/853,147)
Authority
US
United States
Prior art keywords
frequency response
frequency
signal
analog
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/853,147
Inventor
Anthony D. Janke
Ryan J. Perkofski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockford Corp
Original Assignee
Rockford Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockford Corp filed Critical Rockford Corp
Priority to US12/853,147
Publication of US20100324711A1
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response

Definitions

  • the present invention generally relates to the field of audio technology, and more particularly to a method and system for normalizing the frequency response of an audio source.
  • a stock car audio system refers to exactly what the manufacturer specified when constructing the car. These original factory components are referred to as Original Equipment Manufacturer (OEM) components.
  • a custom car audio installation involves changing and/or adding “after-market” components, including anything from the upgrade of the radio/cd player to a full-blown customization of a car based around delivering exceptional sound quality or volume from audio equipment.
  • High-end audio systems typically include component speakers comprising a matched tweeter, mid-range, and woofer set. These component sets are available in two-speaker and three-speaker combinations and include a crossover, which limits the frequency range that each component speaker must handle. In addition, one or more subwoofers are provided for low-frequency music information. Amplifiers boost the music signals to drive the speakers.
  • the most common and familiar piece of audio equipment is the radio/tape player/CD player, which is generically described as a head-unit.
  • car audio head-units have generally been self-contained units. The controls to operate these head-units were placed directly on the head-units. Further, these head-units typically included self-contained modular audio components such as the radio, cassette player, or CD player. As such, the head-unit has proved a highly popular component that a consumer could remove and upgrade with an after-market item that had greater functionality and quality. With the removal and replacement of the OEM head-unit, a consumer could then upgrade the car amplifier and speakers.
  • the human ear is capable of hearing frequencies from 20 Hz to 20 kHz.
  • a device capable of handling frequencies from 20 Hz to 20 kHz is referred to as a full range device.
  • An audio signal that possesses frequency information ranging from 20 Hz to 20 kHz is referred to as having full range frequency information.
  • These different frequencies are combined to create sound. For example, a bird chirping may create frequencies around 10,000 Hz, while the human voice is around 3,000 Hz. The sound of a door slamming may lie closer to 200 Hz. These are only some examples of different sounds and their frequencies.
  • Music is comprised of many frequencies. Ideally, the perfect reproduction of sound would have a full range flat frequency response.
  • a flat frequency response is one where all of the frequencies have the same amplitude or level. While a flat frequency response is desirable, it is possible to manipulate the frequency response of the sound signal to create a unique sound field for a specific vehicle. For example, many manufacturers will use the OEM head unit to attenuate low frequencies because the speakers cannot reproduce those signals accurately without sound distortion.
  • after-market head-units can offer a consumer a music frequency response without frequency or amplitude conditioning. No modifications or filtering are performed on the music signal information with an after-market head-unit, thereby allowing the reproduction of the signal as the artist intended. Consequently, replacement of the OEM head-unit is one of the most common ways to upgrade a car audio system.
  • a line-level converter converts a high-voltage level signal to a low voltage level signal.
  • the line level converter is placed between an OEM amplifier and an after-market amplifier and after-market speakers.
  • the line-level converter receives the high voltage signal from the OEM amplifier that would originally get transmitted to an OEM speaker, reduces it to a low voltage level line signal, and feeds it to an after-market amplifier.
  • the after-market amplifier then amplifies the signal and transmits it to after-market speakers. This conversion is an adequate solution to this problem if the audio content leaving the OEM head-unit has the same frequency response as that which an after-market head-unit would provide.
  • the OEM head-units do not provide a flat frequency response, but rather typically provide a frequency response that is inferior to the flat frequency response an after-market amplifier and speakers are capable of supporting.
  • The conditioned signal from an OEM head-unit that is highly integrated with the car electrical system remains an ongoing problem when it is not possible to remove the OEM head-unit. It is therefore highly desirable to develop an audio system that can produce a flat audio response from a factory OEM head-unit that conditions a signal.
  • a system and method is provided to produce a flatter frequency response from an audio source that has a non-flat frequency response and, as such, has missing spectral content.
  • the system and method achieves a flatter frequency response by characterizing the frequency response of the audio source based upon a reference input signal.
  • This reference input signal is used to establish a reference frequency response, which is stored in a memory and used to select equalizer settings.
  • the system restores missing spectral content by way of summing multiple input signals from the audio source.
  • the system then normalizes the frequency response based on characterizations of the signal by utilizing equalizer settings from memory.
  • FIG. 1 illustrates a block diagram of an audio system.
  • FIG. 2 illustrates a block circuit diagram of a frequency normalization unit.
  • FIG. 3 illustrates a software block diagram of a frequency normalization unit.
  • FIG. 4 illustrates a process for normalizing an ideal signal.
  • FIG. 5 illustrates a process for normalizing a typical signal.
  • FIG. 6 illustrates a process for creating a flatter frequency response from a normalized frequency input.
  • FIG. 1 illustrates a block diagram of an audio system.
  • the block diagram of FIG. 1 includes an OEM head-unit 10 , a factory amplifier 12 , a frequency normalization unit 14 , an after-market amplifier 16 , and an after-market speaker 18 .
  • OEM head-unit 10 , an OEM amplifier 12 , a frequency normalization unit 14 , an after-market amplifier 16 , and an after-market speaker 18 form audio system 20 .
  • Audio system 20 includes frequency normalization unit 14 to enable the addition of after-market amplifier 16 and after-market speaker 18 to OEM head-unit 10 and OEM amplifier 12.
  • OEM head-unit 10 typically conditions an audio signal in order to achieve a specific desired effect and compensate for other deficiencies of the low-cost OEM speakers. This conditioning can serve many purposes, including reducing low-frequency spectral content, tuning the audio system to the acoustic signature of the vehicle, or producing various other sonic responses. OEM head-unit 10 reduces the low spectral content of the music signal so that the low bass sounds do not damage the OEM speakers. As such, OEM head-unit 10 does not produce a signal with a full range flat frequency response. Frequency normalization unit 14 is therefore provided to produce a flatter frequency response and allow OEM head-unit 10 to remain a part of audio system 20.
  • OEM head-unit 10 produces a low line-level voltage signal.
  • OEM amplifier 12 steps up the power of this low line-level voltage signal to a high voltage signal.
  • OEM head-unit 10 and OEM amplifier 12 may be contained in the same unit.
  • Frequency normalization unit 14 receives this high voltage signal, converts it back to a low line-level voltage signal, and normalizes the response of the audio signal to that of a full range flatter frequency response.
  • Frequency normalization unit 14 may also accept low line-level voltage signals as well. Frequency normalization unit 14 reverses the signal conditioning that is performed by OEM head-unit 10 .
  • After-market amplifier 16 receives the low line-level voltage signal from frequency normalization unit 14 and steps it back up to a high voltage signal that then drives after-market speaker 18 .
  • Through the use of frequency normalization unit 14, it is possible to continue to use OEM head-unit 10 and OEM amplifier 12 with audio system 20 and still achieve a full range flatter frequency response that is used by after-market amplifier 16 to drive after-market speakers 18.
  • FIG. 2 illustrates a block circuit diagram of frequency normalization unit 14 .
  • Frequency normalization unit 14 includes an Analog-to-Digital Converter 22 (ADC), a Digital Signal Processor 24 (DSP), a Digital-to-Analog Converter 26 (DAC), a microcontroller 28 , and a communication Integrated Circuit 30 (IC).
  • Frequency normalization unit 14 also includes memory 32 , a power supply 34 , differential input 36 , analog preamplifier 38 , and auxiliary input 40 .
  • Frequency normalization unit 14 takes the analog audio signal from OEM amplifier 12 and re-digitizes it, processes the signal in digital form, and produces an analog signal that has a full range flatter frequency response.
  • OEM head-unit 10 primarily reads music information from a digital source such as a compact disk, a stored MP3 file, digital radio, or other digital source. However, OEM head-unit 10 may also acquire analog music information from an analog audio source such as an analog radio signal. Regardless of the source of the information, OEM head-unit 10 converts all digital signals into an analog signal for amplification by OEM amplifier 12 in order to drive speakers. In order to process the signal information from OEM head-unit 10 and normalize it to a flatter frequency response for playing on after-market speakers 18, the signal information is re-digitized by frequency normalization unit 14 and manipulated by a Digital Signal Processing (DSP) technique.
  • Digital Signal Processing is a technique that converts signals from real world sources (usually in analog form), such as OEM amplifier 12 , into digital data that can then be analyzed. Analysis is performed in digital form because once a signal has been reduced to numbers, its components can be isolated, analyzed and rearranged more easily than in analog form.
  • the input signal to frequency normalization unit 14 is an analog signal.
  • Differential input 36 is the input section of frequency normalization unit 14 .
  • Differential input 36 accepts six separate input channels, labeled 1-6. These six separate input channels are the individual audio channels of multi-channel audio system 20 .
  • Multi-channel audio refers to the use of multiple sound sources to create a richer music experience.
  • the six audio channels correspond to the following speaker units: front left speaker, front center speaker, front right speaker, left surround speaker, right surround speaker, and the subwoofer.
  • the use of a six channel audio system is merely exemplary. Other audio configurations that use fewer than six, or more than six channels can be implemented with frequency normalization unit 14 .
  • Differential input 36 is an input circuit that actively responds to the differences between channels 1-6. This is desirable in this application as a ground reference is not required. OEM head-units rarely contain a power supply, and because of this the voltage rails required to reproduce the musical content range from ground potential (0 volts) up to the battery voltage (12.6/14.4 volts). This leads to a DC offset of typically 6 volts. Differential input 36 outputs the difference in voltage between the two input leads. The differential input section allows the audio content (AC) into the device while simultaneously blocking the DC offset. Another benefit of differential signals is that noise common to both leads is cancelled by this arrangement.
  • ADC 22 samples the analog signal that is output from differential input section 36 and converts it to a series of digital values to represent the signal at a sampling rate f(s).
  • the primary input to ADC 22 is differential input section 36 .
  • ADC 22 may also accept input from auxiliary input 40 .
  • Auxiliary input 40 is used to receive an analog audio signal from a source different from OEM head-unit 10 , such as an external MP3 player, an external satellite radio device, or other audio consumer electronic device.
  • ADC 22 digitizes the analog audio signal into a digital signal according to the Inter-IC Sound (I2S) standard.
  • the I2S standard is a protocol for transmitting two channels of digital audio data over a single serial connection.
  • I2S comprises a serial bus (path) design for digital audio devices and technologies such as compact disc (CD) players, digital sound processors, and digital TV (DTV) sound.
  • the I2S design handles audio data separately from clock signals. By separating the data and clock signals, time-related errors that cause deviation in or displacement of some aspect of the pulses in a high-frequency digital signal do not occur, thereby eliminating the need for various deviation correction devices.
  • An I2S bus design includes three serial bus lines: a line with two time-division multiplexing (TDM) data channels, a word select line, and a clock line.
  • DSP 24 processes the audio signal.
  • DSP 24 is a special-purpose CPU (Central Processing Unit) that provides ultra-fast instruction sequences, such as shift and add, and multiply and add, which are commonly used in math-intensive signal processing applications.
  • DSP 24 is not the same as a typical microprocessor. Microprocessors are typically general purpose devices that run large blocks of software. They are not often called upon for real-time computation and they work at a slower pace, choosing a course of action, then waiting to finish the present job before responding to the next user command.
  • DSP 24 is often used as a type of embedded controller or processor that is built into another piece of equipment, such as frequency normalization unit 14 , and is dedicated to a single group of tasks, such as analysis of a music signal to facilitate its conversion to a signal possessing a flatter frequency response.
  • DSP 24 is classified by its dynamic range, the spread of numbers that must be processed in the course of an application. This number is a function of the processor's data width (the number of bits it manipulates) and the type of arithmetic it performs (fixed or floating point). For example, a 32-bit processor has a wider dynamic range than a 24-bit processor, which has a wider range than a 16-bit processor. Floating-point chips have wider ranges than fixed-point devices.
  • Each type of processor is suited for a particular range of applications.
  • Sixteen-bit fixed-point DSPs are used for voice-grade systems such as phones, since they work with a relatively narrow range of sound frequencies.
  • Hi-fidelity stereo sound has a wider range, calling for a 16-bit ADC (Analog/Digital Converter), and a 24-bit fixed point DSP.
  • Image processing, 3-D graphics and scientific simulations have a much wider dynamic range and require a 32-bit floating-point processor.
  • DSP 24 architectural features are designed to perform discrete mathematical operations as quickly as possible, preferably within a single instruction cycle.
  • DSP 24 is preferably configured to provide single-cycle computation for multiplication with accumulation, arbitrary amounts of shifting, and standard arithmetic and logical operations.
  • Extended sums-of-products, common in DSP algorithms, may be supported in multiply-accumulate units.
  • Extended precision in the multiplier's accumulator can provide extra bits for protection against overflow in successive additions to ensure that no loss of data or range occurs. In extended sums-of-products calculations, two operations are needed on each cycle to support the calculation.
  • DSP 24 is preferably able to sustain two-operand data throughput, whether the data is stored on-chip or off.
  • the digital data is turned back into an analog signal by DAC 26 , with improved quality and a flatter frequency response.
  • This analysis process is handled quickly in real-time. For instance, stereo equipment handles sound signals of up to 20 kilohertz (20,000 cycles per second), requiring DSP 24 to perform hundreds of millions of operations per second.
  • the audio signal is transmitted between DSP 24 and DAC 26 according to the I2S protocol.
  • DAC 26 transforms a digital word representing an analog value such as a voltage into an output corresponding to that analog signal.
  • the fineness of the signal is represented by resolution in bits.
  • Important device specifications to consider when searching for digital-to-analog converters include number of output channels, resolution, maximum or reference voltage, bandwidth, accuracy, and output impedance.
  • the analog output voltage at this stage contains discrete steps, each step representing one sample. These steps can be smoothed out by way of a low-pass filter.
  • Microcontroller 28 coordinates the operation of ADC 22, DSP 24, and DAC 26. Audio signal information gained through the analysis of the audio signal by DSP 24 is stored in memory 32 by microcontroller 28. This audio signal information stored in memory 32 is utilized by frequency normalization unit 14 to condition the audio signal to have a flatter frequency response. Microcontroller 28 preferably communicates with ADC 22, DSP 24, DAC 26, and memory 32 through an SPI protocol. SPI is a full-duplex protocol that functions on a master-slave paradigm and is ideally suited to data stream applications. SPI requires four signals: clock (SCLK), master output/slave input (MOSI), master input/slave output (MISO), and slave select (SS).
  • SCLK is generated by the master device and is used for synchronization.
  • MOSI and MISO are the data lines.
  • the SPI protocol utilizes bi-directional communication within its bus structure. This data bus is shared with all devices connected to the network. Each device only responds to the data bus when its slave select (SS) line is pulled low. The remainder of the time the data on the bus is simply ignored.
  • Each device has its own SS line.
  • the master pulls low on a slave's SS line to select a device for communication.
  • SPI is a very simple communication protocol. It does not have a specific high-level protocol which means that there is almost no overhead. Data can be shifted at very high rates in full duplex. This makes it very simple and efficient in a single master single slave scenario. Because each slave needs its own SS, the number of traces required is n+3, where n is the number of SPI devices.
  • Frequency normalization unit 14 may be provided with a wireless capability.
  • Communication IC 30 may be coupled to microcontroller 28 .
  • Communication IC 30 is wireless enabled, thereby allowing a controller to program microcontroller 28 from a remote device such as a PDA, or other wireless device.
  • a preferred device for communication IC 30 is a BLUETOOTH® enabled device.
  • BLUETOOTH is an industrial specification for wireless Personal Area Networks (PANs).
  • BLUETOOTH provides a way to connect and exchange information between devices like personal digital assistants (PDAs), mobile phones, laptops, PCs, and other consumer devices that are enabled with a BLUETOOTH communications chip via a secure, low-cost, globally available short range radio frequency.
  • BLUETOOTH allows these devices to talk to each other when they come in range, even if they are not in the same room, as long as they are within 10 meters (32 feet) of each other.
  • a UART, or Universal Asynchronous Receiver-Transmitter, is a device that controls the interface between microcontroller 28 and IC 30. Specifically, it provides microcontroller 28 with an interface so that it can “talk” to and exchange data with IC 30. As part of this interface, the UART converts the bytes it receives from microcontroller 28 along parallel circuits into a single serial bit stream for outbound transmission. On inbound transmission, the UART converts the serial bit stream into the bytes that microcontroller 28 handles. The UART may also add a parity bit to outbound transmissions, and check and then discard the parity bit of incoming bytes. The UART may also add start and stop delineators to outbound transmissions and strip them from inbound transmissions. BLUETOOTH devices and modules are increasingly being made available with an embedded stack and a standard UART port.
  • DAC 26 is coupled to analog preamplifier 38 .
  • Preamplifier 38 is an electronic amplifier designed to prepare an electrical signal for further amplification by after-market amplifier 16 .
  • Preamplifier 38 amplifies the low level signal from DAC 26 and applies an anti-alias filter to the audio signal.
  • An anti-alias filter is a low-pass filter designed to remove all content above 20,000 Hz.
  • After-market amplifier 16 is a power amplifier that boosts the power of the audio signal from this low line level signal to a high power signal to drive after-market speakers 18 .
  • FIG. 3 illustrates a software block diagram of frequency normalization unit 14 .
  • DSP code section 42 manages the manner in which DSP 24 processes and analyzes the audio signal.
  • Firmware 44 is responsible for controlling the hardware, such as ADC 22 , DSP 24 , DAC 26 , IC 30 , and microcontroller 28 .
  • This role includes communicating with the control software 46 , initializing ADC 22 and DAC 26 as well as programming DSP 24 .
  • DSP 24 has a responsibility to process the digitized audio.
  • Control software 46 has two functions. The primary role is a User Interface (UI). This UI allows the user to configure DSP 24 to produce a desired audio signal normalization.
  • the other role that control software 46 provides is the artificial intelligence as to the correction factors used to normalize the incoming audio to a flatter frequency response.
  • the control software configures the hardware based on the desired mode of operation.
  • the first mode of operation is to analyze the frequency response of the audio signal carried on each of the six input channels from OEM head-unit 10 .
  • DSP 24 is configured to accept input from each of the six input channels and analyze the frequency response of each channel.
  • DSP 24 may be configured to accept input from fewer than six channels, or more than six channels, depending upon the number of channels that form multi-channel audio system 20 . This data is sent back to the control software where decisions can be made.
  • the control software determines if the audio content is full band (20 Hz to 20 kHz) for the five speaker channels and full band (10 Hz-200 Hz) for the subwoofer. If any of the six channels have a frequency response that does not encompass the full range of the channel band, the software will inform the user.
  • a channel will not have a frequency response that encompasses the full range of the channel band if OEM head-unit 10 conditions the audio signal.
  • OEM head-unit 10 may condition the audio signal to limit the range of the audio frequency response to enable the original equipment manufacturer to use speakers that are incapable of handling a full range audio signal for various reasons.
  • a particular auto manufacturer may decide that the music within a particular vehicle has a better sound quality if they condition the audio signal and limit the frequency response of one or more of the channels in accordance with the perceived acoustics of the vehicle interior.
  • the frequency response of a particular channel that is lost to frequency conditioning by the OEM head-unit may be recovered by summing the frequency responses of the various channels together.
  • FIG. 4 illustrates a process for normalizing an ideal signal.
  • OEM head-unit 10 may condition the frequency response of each channel depending upon the type of speaker that the original system may have contained prior to the upgrade with after-market amplifier 16 and after-market speakers 18 .
  • the OEM speaker system may have included tweeters that played high frequencies, midrange speakers that played middle range to high range frequencies, bass speakers that play low frequencies, and a subwoofer that plays very low frequencies.
  • OEM head-unit 10 may have passed the audio signal through a high pass filter to cut off all of the frequency content below the range of the tweeter.
  • OEM head-unit may have passed the audio signal through band pass and low pass filters for the midrange, and bass speakers/subwoofer respectively.
  • channels that carry an audio signal that is conditioned by one of these filters do not possess full range frequency information.
  • FIG. 4 illustrates the process of combining these various channels possessing conditioned frequency information to form a channel possessing full range frequency information.
  • Frequency graph 48 illustrates an ideal frequency response of a channel that carries an audio signal processed through a high pass filter.
  • Frequency graph 50 illustrates an ideal frequency response of a channel that carries an audio signal processed through a band pass filter.
  • Frequency graph 52 illustrates an ideal frequency response of a channel that carries an audio signal processed through a low pass filter. Combining the frequency information possessed by graphs 48 , 50 , and 52 together enables the recreation of a frequency graph 54 that possesses full range frequency information. When summing frequency information, all of the left speaker channels are summed separately from all of the right speaker channels in order to preserve the stereo quality of the music.
  • Graphs 48 , 50 , 52 , and 54 each show frequency information with the x axis representing frequency and the y axis representing power.
  • FIG. 5 illustrates a process for normalizing a typical signal with non-ideal frequency characteristics. As discussed with respect to FIG. 4 , it is possible to recover the full range of frequency information of an audio signal by summing the frequency response of channels that possess overlapping frequency information.
  • Frequency graph 56 illustrates a common frequency response of a channel that carries an audio signal processed through a high pass filter.
  • Frequency graph 58 illustrates a common frequency response of a channel that carries an audio signal processed through a band pass filter.
  • Frequency graph 60 illustrates a common frequency response of a channel that carries an audio signal processed through a low pass filter.
  • Summing frequency graphs 56 , 58 , and 60 produces frequency graph 62 that possesses frequency information that is full range.
  • Graphs 56 , 58 , 60 , and 62 each show frequency information with the x axis representing frequency and the y axis representing power.
  • graph 54 in FIG. 4 possesses a flatter frequency response, as this is an ideal case.
  • in graph 62, the frequency response is not flat; the power varies with frequency.
  • the next step in the process is to normalize the frequency output as illustrated by arrow 64 and FIG. 6 .
  • the frequency normalization process is accomplished by using the data acquired during the initial stage where the frequency information was analyzed.
  • Outputs 1 through 5 use a 31-band equalizer for the correction process.
  • Each band corresponds to 1/3 octave.
  • Channel 6 is a dedicated subwoofer channel so the frequency response is preferably 20 to 200 Hz.
  • a 10-band EQ is used for correction.
  • FIG. 6 illustrates a process for creating a flatter frequency response from a normalized frequency input.
  • Graph 66 is a detailed example showing the frequency response along with the equalization to normalize it. Below graph 66 is an equalizer 68 .
  • a desired power reference level 70 is chosen before any correction to normalize the frequency response can take place. This reference level 70 establishes the output level.
  • the reference level 70, in one exemplary manner, is calculated by averaging the valid frequency response data.
  • Valid data includes data points that are found in the pass-band only. For example, if channel 1 contains content from 20 Hz up to 1 kHz, the data above 1 kHz will not be used to calculate the reference voltage level.
  • the position of each control on the correction equalizer 68 can be adjusted to compensate for the incoming response.
  • FIG. 6 illustrates an example of a response 66 and corresponding equalizer level adjustments 72 used to normalize that response. The result of this normalization is the flatter frequency response shown by graph 74 in FIG. 5; a numerical sketch of this correction step follows below.
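
The equalization step described above can be made concrete with a short numerical sketch. The example below is illustrative only and fills in details the description leaves open: it assumes 31 one-third-octave band centers spanning roughly 20 Hz to 20 kHz, a measured response expressed in dB per band, and a reference level computed by averaging only the valid (pass-band) bands, as described for reference level 70. The function names, the example response, and the Python/NumPy implementation are hypothetical.

```python
import numpy as np

def third_octave_centers(n_bands=31, f_start=20.0):
    """Nominal 1/3-octave band centers: 20 Hz * 2**(k/3) for k = 0..30."""
    return f_start * 2.0 ** (np.arange(n_bands) / 3.0)

def eq_correction(measured_db, valid):
    """Per-band equalizer settings that flatten a measured response.

    measured_db : measured level in each band (dB)
    valid       : boolean mask of pass-band bands; data outside the
                  pass-band is ignored when computing the reference,
                  as described for reference level 70
    Returns (reference level, per-band boost/cut in dB).
    """
    measured_db = np.asarray(measured_db, dtype=float)
    valid = np.asarray(valid, dtype=bool)
    reference_db = measured_db[valid].mean()
    correction_db = np.where(valid, reference_db - measured_db, 0.0)
    return reference_db, correction_db

# Hypothetical measured response: flat at -10 dB except for rolled-off lows.
measured = np.full(31, -10.0)
measured[:5] -= np.array([15.0, 12.0, 9.0, 6.0, 3.0])
valid = np.ones(31, dtype=bool)
ref, corr = eq_correction(measured, valid)
for f, c in zip(third_octave_centers()[:6], corr[:6]):
    print(f"{f:7.1f} Hz: {c:+5.1f} dB")
```

The dedicated subwoofer channel would use the same logic over 10 bands covering roughly 20 to 200 Hz.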

Abstract

A system and method is provided to produce a flatter frequency response from an audio source that has a non-flat frequency response and, as such, has missing spectral content. The system and method achieves a flatter frequency response by characterizing the frequency response of the audio source based upon a reference input signal. This reference input signal is used to establish a reference frequency response, which is stored in a memory and used to select equalizer settings. The system restores missing spectral content by way of summing multiple input signals from the audio source. The system then normalizes the frequency response based on characterizations of the signal by utilizing equalizer settings from memory.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to the field of audio technology, and more particularly to a method and system for normalizing the frequency response of an audio source.
  • BACKGROUND OF THE INVENTION
  • In 1929, Paul Galvin, the head of Galvin Manufacturing Corporation, invented the first car radio. As the first car radios were not available from car makers, consumers had to purchase the radios separately. Lacking modern solid state electronics, these early radios were quite bulky as they were formed using vacuum tubes.
  • Since the introduction of this first car radio over 75 years ago, car audio systems have undergone a significant evolution. The movement to add more than just a basic radio to a car largely originated on the west coast of the United States in the late 1970's. Several early manufacturers and audio enthusiasts, such as the founder of Rockford Fosgate, Jim Fosgate, led this movement and began building audio amplifiers to run on the standard voltage in automotive electrical systems.
  • Today, unlike 75 years ago, auto manufacturers routinely include audio systems as standard equipment with modern vehicles. A stock car audio system refers to exactly what the manufacturer specified when constructing the car. These original factory components are referred to as Original Equipment Manufacturer (OEM) components. A custom car audio installation involves changing and/or adding “after-market” components, including anything from the upgrade of the radio/cd player to a full-blown customization of a car based around delivering exceptional sound quality or volume from audio equipment.
  • High-end audio systems typically include component speakers comprising a matched tweeter, mid-range, and woofer set. These component sets are available in two-speaker and three-speaker combinations and include a crossover, which limits the frequency range that each component speaker must handle. In addition, one or more subwoofers are provided for low-frequency music information. Amplifiers boost the music signals to drive the speakers.
  • The most common and familiar piece of audio equipment is the radio/tape player/CD player, which is generically described as a head-unit. Since their creation, car audio head-units have generally been self-contained units. The controls to operate these head-units were placed directly on the head-units. Further, these head-units typically included self-contained modular audio components such as the radio, cassette player, or CD player. As such, the head-unit has proved a highly popular component that a consumer could remove and upgrade with an after-market item that had greater functionality and quality. With the removal and replacement of the OEM head-unit, a consumer could then upgrade the car amplifier and speakers.
  • The human ear is capable of hearing frequencies from 20 Hz to 20 kHz. A device capable of handling frequencies from 20 Hz to 20 kHz is referred to as a full range device. An audio signal that possesses frequency information ranging from 20 Hz to 20 kHz is referred to as having full range frequency information. These different frequencies are combined to create sound. For example, a bird chirping may create frequencies around 10,000 Hz, while the human voice is around 3,000 Hz. The sound of a door slamming may lie closer to 200 Hz. These are only some examples of different sounds and their frequencies.
  • Music is comprised of many frequencies. Ideally, the perfect reproduction of sound would have a full range flat frequency response. A flat frequency response is one where all of the frequencies have the same amplitude or level. While a flat frequency response is desirable, it is possible to manipulate the frequency response of the sound signal to create a unique sound field for a specific vehicle. For example, many manufacturers will use the OEM head unit to attenuate low frequencies because the speakers cannot reproduce those signals accurately without sound distortion.
  • However, after-market head-units can offer a consumer a music frequency response without frequency or amplitude conditioning. No modifications or filtering are performed on the music signal information with an after-market head-unit, thereby allowing the reproduction of the signal as the artist intended. Consequently, replacement of the OEM head-unit is one of the most common ways to upgrade a car audio system.
  • The advancement of consumer electronics has enabled auto manufacturers to greatly enhance the features offered to consumers in automobiles. These features range from head-unit controls on the steering wheel and factory alarms, all the way to voice recognition, navigation, and integrated video systems. Many, if not all of these features are integrated into the OEM head-unit. The high level of integration of the OEM head-unit with the rest of the car electrical system in modern vehicles presents a problem for a consumer who wishes to install high fidelity after-market components. If the OEM head-unit is removed, these additional features are either lost or require expensive adapters to function. Sometimes, it is actually not possible to remove the OEM head-unit and still allow the car to function as designed. Further, replacing OEM amplifiers and speakers with high quality after-market items generally requires replacement of the OEM head-unit. As such, the introduction of highly integrated OEM head-units by auto manufacturers presents a significant problem.
  • One currently known method of addressing this problem is through the use of a device called a line-level converter. These devices convert a high-voltage level signal to a low voltage level signal. The line level converter is placed between an OEM amplifier and an after-market amplifier and after-market speakers. The line-level converter receives the high voltage signal from the OEM amplifier that would originally get transmitted to an OEM speaker, reduces it to a low voltage level line signal, and feeds it to an after-market amplifier. The after-market amplifier then amplifies the signal and transmits it to after-market speakers. This conversion is an adequate solution to this problem if the audio content leaving the OEM head-unit has the same frequency response as that which an after-market head-unit would provide. However, the OEM head-units typically do not provide a flat frequency response, but rather provide a frequency response that is inferior to the flat frequency response an after-market amplifier and speakers are capable of supporting. The conditioned signal from an OEM head-unit that is highly integrated with the car electrical system remains an ongoing problem when it is not possible to remove the OEM head-unit. It is therefore highly desirable to develop an audio system that can produce a flat audio response from a factory OEM head-unit that conditions a signal.
  • SUMMARY OF THE INVENTION
  • According to a preferred embodiment of the invention, a system and method is provided to produce a flatter frequency response from an audio source that has a non-flat frequency response and, as such, has missing spectral content. The system and method achieves a flatter frequency response by characterizing the frequency response of the audio source based upon a reference input signal. This reference input signal is used to establish a reference frequency response, which is stored in a memory and used to select equalizer settings. The system restores missing spectral content by way of summing multiple input signals from the audio source. The system then normalizes the frequency response based on characterizations of the signal by utilizing equalizer settings from memory.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a block diagram of an audio system.
  • FIG. 2 illustrates a block circuit diagram of a frequency normalization unit.
  • FIG. 3 illustrates a software block diagram of a frequency normalization unit.
  • FIG. 4 illustrates a process for normalizing an ideal signal.
  • FIG. 5 illustrates a process for normalizing a typical signal.
  • FIG. 6 illustrates a process for creating a flatter frequency response from a normalized frequency input.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Referring to the Figures by characters of reference, FIG. 1 illustrates a block diagram of an audio system. The block diagram of FIG. 1 includes an OEM head-unit 10, a factory (OEM) amplifier 12, a frequency normalization unit 14, an after-market amplifier 16, and an after-market speaker 18. Together, OEM head-unit 10, OEM amplifier 12, frequency normalization unit 14, after-market amplifier 16, and after-market speaker 18 form audio system 20. Audio system 20 includes frequency normalization unit 14 to enable the addition of after-market amplifier 16 and after-market speaker 18 to OEM head-unit 10 and OEM amplifier 12.
  • OEM head-unit 10 typically conditions an audio signal in order to achieve a specific desired effect and compensate for other deficiencies of the low-cost OEM speakers. This conditioning can serve many purposes, including reducing low-frequency spectral content, tuning the audio system to the acoustic signature of the vehicle, or producing various other sonic responses. OEM head-unit 10 reduces the low spectral content of the music signal so that the low bass sounds do not damage the OEM speakers. As such, OEM head-unit 10 does not produce a signal with a full range flat frequency response. Frequency normalization unit 14 is therefore provided to produce a flatter frequency response and allow OEM head-unit 10 to remain a part of audio system 20.
  • OEM head-unit 10 produces a low line-level voltage signal. OEM amplifier 12 steps up the power of this low line-level voltage signal to a high voltage signal. OEM head-unit 10 and OEM amplifier 12 may be contained in the same unit. Frequency normalization unit 14 receives this high voltage signal, converts it back to a low line-level voltage signal, and normalizes the response of the audio signal to that of a full range flatter frequency response. Frequency normalization unit 14 may also accept low line-level voltage signals as well. Frequency normalization unit 14 reverses the signal conditioning that is performed by OEM head-unit 10.
  • After-market amplifier 16 receives the low line-level voltage signal from frequency normalization unit 14 and steps it back up to a high voltage signal that then drives after-market speaker 18. Through the use of frequency normalization unit 14, it is possible to continue to use OEM head-unit 10 and OEM amplifier 12 with audio system 20 and still achieve a full range flatter frequency response that is used by after-market amplifier 16 to drive after-market speakers 18.
  • FIG. 2 illustrates a block circuit diagram of frequency normalization unit 14. Frequency normalization unit 14 includes an Analog-to-Digital Converter 22 (ADC), a Digital Signal Processor 24 (DSP), a Digital-to-Analog Converter 26 (DAC), a microcontroller 28, and a communication Integrated Circuit 30 (IC). Frequency normalization unit 14 also includes memory 32, a power supply 34, differential input 36, analog preamplifier 38, and auxiliary input 40. Frequency normalization unit 14 takes the analog audio signal from OEM amplifier 12 and re-digitizes it, processes the signal in digital form, and produces an analog signal that has a full range flatter frequency response.
  • Since the introduction of the compact disc in the early 1980s, digital technology has become the standard for the recording and storage of high-fidelity audio. Digital signals are robust. Digital signals can be transmitted and copied without distortion. Digital signals can be played back without degrading the carrier. OEM head-unit 10 primarily reads music information from a digital source such as a compact disk, a stored MP3 file, digital radio, or other digital source. However, OEM head-unit 10 may also acquire analog music information from an analog audio source such as an analog radio signal. Regardless of the source of the information, OEM head-unit 10 converts all digital signals into an analog signal for amplification by OEM amplifier 12 in order to drive speakers. In order to process the signal information from OEM head-unit 10 and normalize it to a flatter frequency response for playing on after-market speakers 18, the signal information is re-digitized by frequency normalization unit 14 and manipulated by a Digital Signal Processing (DSP) technique.
  • Digital Signal Processing is a technique that converts signals from real world sources (usually in analog form), such as OEM amplifier 12, into digital data that can then be analyzed. Analysis is performed in digital form because once a signal has been reduced to numbers, its components can be isolated, analyzed and rearranged more easily than in analog form.
  • The input signal to frequency normalization unit 14 is an analog signal. Differential input 36 is the input section of frequency normalization unit 14. Differential input 36 accepts six separate input channels, labeled 1-6. These six separate input channels are the individual audio channels of multi-channel audio system 20. Multi-channel audio refers to the use of multiple sound sources to create a richer music experience. In a six channel audio system, the six audio channels correspond to the following speaker units: front left speaker, front center speaker, front right speaker, left surround speaker, right surround speaker, and the subwoofer. The use of a six channel audio system is merely exemplary. Other audio configurations that use fewer than six, or more than six, channels can be implemented with frequency normalization unit 14. Differential input 36 is an input circuit that actively responds to the differences between channels 1-6. This is desirable in this application as a ground reference is not required. OEM head-units rarely contain a power supply, and because of this the voltage rails required to reproduce the musical content range from ground potential (0 volts) up to the battery voltage (12.6/14.4 volts). This leads to a DC offset of typically 6 volts. Differential input 36 outputs the difference in voltage between the two input leads. The differential input section allows the audio content (AC) into the device while simultaneously blocking the DC offset. Another benefit of differential signals is that noise common to both leads is cancelled by this arrangement.
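
The benefit of the differential input can be seen in a small numerical model. The sketch below is an illustration rather than the actual circuit of differential input 36: it assumes a 6 volt DC offset and some common-mode noise present on both leads, with the audio carried as the difference between the leads.

```python
import numpy as np

fs = 48_000                      # assumed sample rate, Hz
t = np.arange(fs) / fs           # one second of samples
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)           # the wanted AC content

dc_offset = 6.0                                        # ~half the battery rail
common_noise = 0.05 * np.sin(2 * np.pi * 120.0 * t)   # noise on both leads

# The two input leads carry the audio differentially, plus the same
# DC offset and the same common-mode noise.
lead_pos = dc_offset + common_noise + audio / 2
lead_neg = dc_offset + common_noise - audio / 2

recovered = lead_pos - lead_neg   # what differential input 36 outputs

print("max error vs. original audio:", np.max(np.abs(recovered - audio)))
# -> essentially zero: the DC offset and the common-mode noise cancel.
```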
  • ADC 22 samples the analog signal that is output from differential input section 36 and converts it to a series of digital values to represent the signal at a sampling rate f(s). The primary input to ADC 22 is differential input section 36. However, ADC 22 may also accept input from auxiliary input 40. Auxiliary input 40 is used to receive an analog audio signal from a source different from OEM head-unit 10, such as an external MP3 player, an external satellite radio device, or other audio consumer electronic device.
  • ADC 22 digitizes the analog audio signal into a digital signal according to the Inter-IC Sound (I2S) standard. The I2S standard is a protocol for transmitting two channels of digital audio data over a single serial connection. I2S comprises a serial bus (path) design for digital audio devices and technologies such as compact disc (CD) players, digital sound processors, and digital TV (DTV) sound. The I2S design handles audio data separately from clock signals. By separating the data and clock signals, time-related errors that cause deviation in or displacement of some aspect of the pulses in a high-frequency digital signal do not occur, thereby eliminating the need for various deviation correction devices. An I2S bus design includes three serial bus lines: a line with two time-division multiplexing (TDM) data channels, a word select line, and a clock line.
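
A toy serializer helps make the I2S framing concrete. The sketch below is not production code and simplifies the real bus: word select is low for the left word and high for the right word, data is shifted MSB first, and the one-clock delay of data after each word-select transition is omitted.

```python
def i2s_serialize(left, right, bits=16):
    """Toy serializer for one I2S stereo frame.

    Emits (word_select, data_bit) pairs: word select low for the left
    word, high for the right word, data sent MSB first.  The real bus
    also delays data one clock after each word-select change; that
    detail is omitted here to keep the sketch short.
    """
    stream = []
    for ws, sample in ((0, left), (1, right)):
        word = sample & ((1 << bits) - 1)      # truncate to the word width
        for i in reversed(range(bits)):
            stream.append((ws, (word >> i) & 1))
    return stream

# One frame carrying the 16-bit samples 0x1234 (left) and 0xABCD (right).
frame = i2s_serialize(0x1234, 0xABCD)
print(len(frame), frame[:4])   # 32 clocks, starting with the left MSB
```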
  • Once the audio signal is converted to an I2S format, DSP 24 processes the audio signal. DSP 24 is a special-purpose CPU (Central Processing Unit) that provides ultra-fast instruction sequences, such as shift and add, and multiply and add, which are commonly used in math-intensive signal processing applications. DSP 24 is not the same as a typical microprocessor. Microprocessors are typically general purpose devices that run large blocks of software. They are not often called upon for real-time computation and they work at a slower pace, choosing a course of action, then waiting to finish the present job before responding to the next user command. DSP 24, on the other hand, is often used as a type of embedded controller or processor that is built into another piece of equipment, such as frequency normalization unit 14, and is dedicated to a single group of tasks, such as analysis of a music signal to facilitate its conversion to a signal possessing a flatter frequency response.
  • DSP 24 is classified by its dynamic range, the spread of numbers that must be processed in the course of an application. This number is a function of the processor's data width (the number of bits it manipulates) and the type of arithmetic it performs (fixed or floating point). For example, a 32-bit processor has a wider dynamic range than a 24-bit processor, which has a wider range than a 16-bit processor. Floating-point chips have wider ranges than fixed-point devices.
  • Each type of processor is suited for a particular range of applications. Sixteen-bit fixed-point DSPs are used for voice-grade systems such as phones, since they work with a relatively narrow range of sound frequencies. Hi-fidelity stereo sound has a wider range, calling for a 16-bit ADC (Analog/Digital Converter), and a 24-bit fixed point DSP. Image processing, 3-D graphics and scientific simulations have a much wider dynamic range and require a 32-bit floating-point processor.
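
The relationship between data width and dynamic range can be quantified with the common rule of thumb that an ideal N-bit quantizer yields roughly 6.02 N + 1.76 dB of dynamic range; the short calculation below is a standard illustration and is not taken from the specification.

```python
def ideal_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal fixed-point word/converter."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: about {ideal_dynamic_range_db(bits):6.1f} dB")
# 16-bit: ~98 dB, 24-bit: ~146 dB, 32-bit: ~194 dB
```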
  • DSP 24 architectural features are designed to perform discrete mathematical operations as quickly as possible, preferably within a single instruction cycle. DSP 24 is preferably configured to provide single-cycle computation for multiplication with accumulation, arbitrary amounts of shifting, and standard arithmetic and logical operations. Extended sums-of-products, common in DSP algorithms, may be supported in multiply-accumulate units. Extended precision in the multiplier's accumulator can provide extra bits for protection against overflow in successive additions to ensure that no loss of data or range occurs. In extended sums-of-products calculations, two operations are needed on each cycle to support the calculation. DSP 24 is preferably able to sustain two-operand data throughput, whether the data is stored on-chip or off.
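
A multiply-accumulate loop with an extended accumulator is the core of the sums-of-products described above. The sketch below models it with plain Python integers standing in for fixed-point words; the 24-bit data width and 8 guard bits are typical values chosen for illustration, not figures from the specification.

```python
def fir_mac(samples, coeffs, data_bits=24, guard_bits=8):
    """Sum of products with an extended (wider-than-data) accumulator.

    Products of two 24-bit fixed-point words need up to 48 bits; the
    extra guard bits protect the running sum from overflowing while
    successive products are added, as described above.
    """
    acc_bits = 2 * data_bits + guard_bits          # 56-bit accumulator
    acc_max = (1 << (acc_bits - 1)) - 1
    acc = 0
    for x, c in zip(samples, coeffs):
        acc += x * c                               # one MAC per tap
        assert -acc_max - 1 <= acc <= acc_max, "accumulator overflow"
    return acc

# A toy 4-tap filter on full-scale 24-bit samples: no overflow occurs
# even though each individual product already needs nearly 48 bits.
full_scale = (1 << 23) - 1
print(fir_mac([full_scale] * 4, [full_scale] * 4))
```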
  • Eventually, when the DSP 24 has finished its processing of the audio signal, the digital data is turned back into an analog signal by DAC 26, with improved quality and a flatter frequency response. This analysis process is handled quickly in real-time. For instance, stereo equipment handles sound signals of up to 20 kilohertz (20,000 cycles per second), requiring DSP 24 to perform hundreds of millions of operations per second. The audio signal is transmitted between DSP 24 and DAC 26 according to the I2S protocol.
  • DAC 26 transforms a digital word representing an analog value such as a voltage into an output corresponding to that analog signal. The fineness of the signal is represented by resolution in bits. Important device specifications to consider when searching for digital-to-analog converters include number of output channels, resolution, maximum or reference voltage, bandwidth, accuracy, and output impedance. The analog output voltage at this stage contains discrete steps, each step representing one sample. These steps can be smoothed out by way of a low-pass filter.
  • Microcontroller 28 coordinates the operation of ADC 22, DSP 24, and DAC 26. Audio signal information gained through the analysis of the audio signal by DSP 24 is stored in memory 32 by microcontroller 28. This audio signal information stored in memory 32 is utilized by frequency normalization unit 14 to condition the audio signal to have a flatter frequency response. Microcontroller 28 preferably communicates with ADC 22, DSP 24, DAC 26, and memory 32 through an SPI protocol. SPI is a full-duplex protocol that functions on a master-slave paradigm and is ideally suited to data stream applications. SPI requires four signals: clock (SCLK), master output/slave input (MOSI), master input/slave output (MISO), and slave select (SS).
  • Three signals are shared by all devices on the SPI bus: SCLK, MOSI and MISO. SCLK is generated by the master device and is used for synchronization. MOSI and MISO are the data lines. The SPI protocol utilizes bi-directional communication within its bus structure. This data bus is shared with all devices connected to the network. Each device only responds to the data bus when its slave select (SS) line is pulled low. The remainder of the time the data on the bus is simply ignored.
  • Each device has its own SS line. The master pulls low on a slave's SS line to select a device for communication. SPI is a very simple communication protocol. It does not have a specific high-level protocol which means that there is almost no overhead. Data can be shifted at very high rates in full duplex. This makes it very simple and efficient in a single master single slave scenario. Because each slave needs its own SS, the number of traces required is n+3, where n is the number of SPI devices.
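
The SPI exchange described above can be sketched as a bit-banged master transfer. This is a simulation rather than firmware from the device: the pin operations are supplied as callables, SPI mode 0 is assumed (data changes while the clock is low and is sampled on the rising edge), and bits are shifted MSB first.

```python
def spi_transfer(byte_out, set_sclk, set_mosi, read_miso, set_ss):
    """Full-duplex transfer of one byte to the slave whose SS is driven.

    Mode 0, MSB first: MOSI is updated while SCLK is low, both ends
    sample on the rising edge, and the slave is selected by pulling
    its SS line low; all other slaves ignore the bus.
    """
    byte_in = 0
    set_ss(0)                              # select the slave (active low)
    for i in reversed(range(8)):
        set_sclk(0)
        set_mosi((byte_out >> i) & 1)      # present the next outgoing bit
        set_sclk(1)                        # rising edge: both ends sample
        byte_in = (byte_in << 1) | read_miso()
    set_sclk(0)
    set_ss(1)                              # deselect; bus traffic now ignored
    return byte_in

# Loopback demo: tie MISO to MOSI so the byte comes back unchanged.
state = {"mosi": 0}
echoed = spi_transfer(
    0xA5,
    set_sclk=lambda v: None,
    set_mosi=lambda v: state.update(mosi=v),
    read_miso=lambda: state["mosi"],
    set_ss=lambda v: None,
)
print(hex(echoed))                         # 0xa5
print("traces for 4 slaves:", 4 + 3)       # shared SCLK/MOSI/MISO + one SS each
```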
  • Frequency normalization unit 14 may be provided with a wireless capability. Communication IC 30 may be coupled to microcontroller 28. Communication IC 30 is wireless enabled, thereby allowing a controller to program microcontroller 28 from a remote device such as a PDA or other wireless device. A preferred device for communication IC 30 is a BLUETOOTH® enabled device. BLUETOOTH is an industrial specification for wireless Personal Area Networks (PANs). BLUETOOTH provides a way to connect and exchange information between devices like personal digital assistants (PDAs), mobile phones, laptops, PCs, and other consumer devices that are enabled with a BLUETOOTH communications chip via a secure, low-cost, globally available short range radio frequency. BLUETOOTH allows these devices to talk to each other when they come in range, even if they are not in the same room, as long as they are within 10 meters (32 feet) of each other.
  • IC 30 couples to microcontroller 28 via a UART port. A UART, or Universal Asynchronous Receiver-Transmitter, is a device that controls the interface between microcontroller 28 and IC 30. Specifically, it provides microcontroller 28 with an interface so that it can “talk” to and exchange data with IC 30. As part of this interface, the UART converts the bytes it receives from microcontroller 28 along parallel circuits into a single serial bit stream for outbound transmission. On inbound transmission, the UART converts the serial bit stream into the bytes that microcontroller 28 handles. The UART may also add a parity bit to outbound transmissions, and check and then discard the parity bit of incoming bytes. The UART may also add start and stop delineators to outbound transmissions and strip them from inbound transmissions. BLUETOOTH devices and modules are increasingly being made available with an embedded stack and a standard UART port.
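
The framing described above can be shown explicitly. The sketch below is a toy model rather than the device's firmware: one byte is framed with a start bit, eight data bits sent LSB first, an even parity bit, and a stop bit, and the framing is then stripped and checked on the receiving side.

```python
def uart_frame(byte):
    """Start bit (0), 8 data bits LSB first, even parity, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    parity = sum(data_bits) % 2            # even parity
    return [0] + data_bits + [parity, 1]

def uart_unframe(bits):
    """Strip the framing, check parity, and return the payload byte."""
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    data_bits, parity = bits[1:9], bits[9]
    assert sum(data_bits) % 2 == parity, "parity error"
    return sum(b << i for i, b in enumerate(data_bits))

frame = uart_frame(0x42)
print(frame)                       # 11 bits on the wire for one 8-bit byte
print(hex(uart_unframe(frame)))    # 0x42
```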
  • DAC 26 is coupled to analog preamplifier 38. Preamplifier 38 is an electronic amplifier designed to prepare an electrical signal for further amplification by after-market amplifier 16. Preamplifier 38 amplifies the low level signal from DAC 26 and applies an anti-alias filter to the audio signal. An anti-alias filter is a low-pass filter designed to remove all content above 20,000 Hz. After-market amplifier 16 is a power amplifier that boosts the power of the audio signal from this low line level signal to a high power signal to drive after-market speakers 18.
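
A low-pass stage like the one described can be sketched with a standard filter-design library. The parameters below are assumptions made for illustration (a 4th-order Butterworth response, a 192 kHz sample rate for the modelled signal, and zero-phase filtering), not the actual filter used in preamplifier 38.

```python
import numpy as np
from scipy import signal

fs = 192_000          # assumed sample rate used to model the analog signal
cutoff = 20_000       # pass the audio band, remove content above 20 kHz

# 4th-order Butterworth low-pass expressed as second-order sections.
sos = signal.butter(4, cutoff, btype="low", fs=fs, output="sos")

t = np.arange(fs) / fs
audible = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz tone: should be kept
ultrasonic = np.sin(2 * np.pi * 60_000 * t)    # 60 kHz tone: should be removed

filtered = signal.sosfiltfilt(sos, audible + ultrasonic)   # zero-phase
mid = slice(fs // 4, 3 * fs // 4)              # ignore the signal edges
print("residual out-of-band content:",
      float(np.round(np.std(filtered[mid] - audible[mid]), 5)))
```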
  • FIG. 3 illustrates a software block diagram of frequency normalization unit 14. There are three basic sections to the software of frequency normalization unit 14: DSP code section 42, device firmware code section 44, and control software 46 located on a PDA or other device. DSP code section 42 manages the manner in which DSP 24 processes and analyzes the audio signal. Firmware 44 is responsible for controlling the hardware, such as ADC 22, DSP 24, DAC 26, IC 30, and microcontroller 28. This role includes communicating with the control software 46, initializing ADC 22 and DAC 26 as well as programming DSP 24. DSP 24 has a responsibility to process the digitized audio. Control software 46 has two functions. The primary role is a User Interface (UI). This UI allows the user to configure DSP 24 to produce a desired audio signal normalization. The other role that control software 46 provides is the artificial intelligence as to the correction factors used to normalize the incoming audio to a flatter frequency response.
  • The control software configures the hardware based on the desired mode of operation. The first mode of operation is to analyze the frequency response of the audio signal carried on each of the six input channels from OEM head-unit 10. In this mode, DSP 24 is configured to accept input from each of the six input channels and analyze the frequency response of each channel. DSP 24 may be configured to accept input from fewer than six channels, or more than six channels, depending upon the number of channels that form multi-channel audio system 20. This data is sent back to the control software, where decisions can be made. The control software determines whether the audio content is full band (20 Hz to 20 kHz) for the five speaker channels and full band (10 Hz to 200 Hz) for the subwoofer. If any of the six channels has a frequency response that does not encompass the full range of the channel band, the software will inform the user.
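A minimal sketch of this full-band check is shown below; the band layout, threshold, and example response are assumptions used only to illustrate how a channel whose content stops short of the full band would be flagged.

```c
#include <stdio.h>

#define NUM_BANDS 10

/* Hypothetical analysis bands (Hz) and a measured response for one channel.
 * Zero magnitude above 1 kHz mimics a head-unit that low-passed this output. */
static const double band_hz[NUM_BANDS]   = {   20,   50,  100,  250,   500,
                                             1000, 2000, 5000, 10000, 20000 };
static const double magnitude[NUM_BANDS] = {  0.8,  0.9,  1.0,  1.0,   0.9,
                                              0.7,  0.0,  0.0,   0.0,   0.0 };

int main(void)
{
    const double threshold = 0.1;   /* assumed "content present" level */
    double lo = 0.0, hi = 0.0;

    for (int i = 0; i < NUM_BANDS; i++) {
        if (magnitude[i] > threshold) {
            if (lo == 0.0) lo = band_hz[i];   /* first band with content */
            hi = band_hz[i];                  /* last band with content  */
        }
    }

    printf("content detected from %.0f Hz to %.0f Hz\n", lo, hi);
    if (lo > 20.0 || hi < 20000.0)
        printf("channel does not cover the full 20 Hz - 20 kHz band; "
               "inform the user and consider summing channels\n");
    else
        printf("channel carries full-band content\n");
    return 0;
}
```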
  • A channel will not have a frequency response that encompasses the full range of the channel band if OEM head-unit 10 conditions the audio signal. OEM head-unit 10 may condition the audio signal to limit the range of the audio frequency response, enabling the original equipment manufacturer to use speakers that, for various reasons, are incapable of handling a full-range audio signal. Also, a particular auto manufacturer may decide that the music within a particular vehicle has a better sound quality if the audio signal is conditioned and the frequency response of one or more of the channels is limited in accordance with the perceived acoustics of the vehicle interior. The frequency response of a particular channel that is lost to frequency conditioning by the OEM head-unit may be recovered by summing the frequency responses of the various channels together.
  • FIG. 4 illustrates a process for normalizing an ideal signal. In a multi-channel speaker system, OEM head-unit 10 may condition the frequency response of each channel depending upon the type of speaker that the original system may have contained prior to the upgrade with after-market amplifier 16 and after-market speakers 18. For instance, the OEM speaker system may have included tweeters that played high frequencies, midrange speakers that played middle-range to high-range frequencies, bass speakers that played low frequencies, and a subwoofer that played very low frequencies. For the channels connected to the tweeters, OEM head-unit 10 may have passed the audio signal through a high pass filter to cut off all of the frequency content below the range of the tweeter. Similarly, OEM head-unit 10 may have passed the audio signal through band pass and low pass filters for the midrange speakers and the bass speakers/subwoofer, respectively. As such, channels that carry an audio signal conditioned by one of these filters do not possess full range frequency information. One can recover the frequency information stripped by a low, high, or band pass filter by summing the frequency responses of all of the channels, as some channels will possess high frequency information, some will possess mid frequency information, and some will possess low frequency information. FIG. 4 illustrates the process of combining these various channels possessing conditioned frequency information to form a channel possessing full range frequency information. Frequency graph 48 illustrates an ideal frequency response of a channel that carries an audio signal processed through a high pass filter. Frequency graph 50 illustrates an ideal frequency response of a channel that carries an audio signal processed through a band pass filter. Frequency graph 52 illustrates an ideal frequency response of a channel that carries an audio signal processed through a low pass filter. Combining the frequency information possessed by graphs 48, 50, and 52 enables the recreation of a frequency graph 54 that possesses full range frequency information. When summing frequency information, all of the left speaker channels are summed separately from all of the right speaker channels in order to preserve the stereo quality of the music. Graphs 48, 50, 52, and 54 each show frequency information with the x axis representing frequency and the y axis representing power.
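The summation can be sketched as follows, using made-up per-band values for the high-pass, band-pass, and low-pass conditioned left-side channels; the right side would be summed in a separate pass, as described above, to preserve the stereo image.

```c
#include <stdio.h>

#define NUM_BANDS 8
#define NUM_WAYS  3   /* tweeter (high pass), midrange (band pass), woofer (low pass) */

/* Idealized per-band power for the three left-side channels (one row per channel).
 * The values are illustrative only. */
static const double left[NUM_WAYS][NUM_BANDS] = {
    /* 20   50   200  500   1k   4k   10k  20k  Hz */
    { 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0 },   /* high-pass channel */
    { 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0 },   /* band-pass channel */
    { 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0 },   /* low-pass channel  */
};

int main(void)
{
    double summed[NUM_BANDS] = { 0.0 };

    /* Sum the left-side channels only; the right side is summed separately
     * so the stereo quality of the music is preserved. */
    for (int b = 0; b < NUM_BANDS; b++)
        for (int w = 0; w < NUM_WAYS; w++)
            summed[b] += left[w][b];

    printf("summed left-side response per band:");
    for (int b = 0; b < NUM_BANDS; b++)
        printf(" %.1f", summed[b]);
    printf("\n");
    return 0;
}
```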
  • FIG. 5 illustrates a process for normalizing a typical signal with non-ideal frequency characteristics. As discussed with respect to FIG. 4, it is possible to recover the full range of frequency information of an audio signal by summing the frequency responses of channels that possess overlapping frequency information. Frequency graph 56 illustrates a common frequency response of a channel that carries an audio signal processed through a high pass filter. Frequency graph 58 illustrates a common frequency response of a channel that carries an audio signal processed through a band pass filter. Frequency graph 60 illustrates a common frequency response of a channel that carries an audio signal processed through a low pass filter. Summing frequency graphs 56, 58, and 60 produces frequency graph 62, which possesses full range frequency information. Graphs 56, 58, 60, and 62 each show frequency information with the x axis representing frequency and the y axis representing power. When summed, graph 54 in FIG. 4 possesses a flatter frequency response, as it represents an ideal case. However, as shown by graph 62, the frequency response here is not flat; the power varies with frequency. As such, in order to produce a flatter frequency response, the next step in the process is to normalize the frequency output, as illustrated by arrow 64 and FIG. 6.
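The need for normalization can be quantified with a short sketch: using illustrative per-band values for a summed, non-ideal response, it computes the peak-to-valley ripple in dB, which a perfectly flat response would show as 0 dB.

```c
#include <stdio.h>
#include <math.h>

#define NUM_BANDS 8

/* A summed but non-ideal response: overlapping filter skirts leave the
 * combined power uneven across the band (values are illustrative only). */
static const double summed[NUM_BANDS] = { 0.6, 0.9, 1.4, 1.1, 0.7, 1.3, 1.0, 0.5 };

int main(void)
{
    double lo = summed[0], hi = summed[0];
    for (int b = 1; b < NUM_BANDS; b++) {
        if (summed[b] < lo) lo = summed[b];
        if (summed[b] > hi) hi = summed[b];
    }
    /* Peak-to-valley ripple in dB (power ratio); 0 dB would mean a flat response. */
    printf("ripple: %.1f dB -> normalization needed\n", 10.0 * log10(hi / lo));
    return 0;
}
```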
  • The frequency normalization process is accomplished by using the data acquired during the initial stage, where the frequency information was analyzed. Outputs 1 through 5 use a 31-band equalizer for the correction process. Each band corresponds to ⅓ octave. Channel 6 is a dedicated subwoofer channel, so its frequency response is preferably 20 Hz to 200 Hz. For this channel, a 10-band EQ is used for correction. FIG. 6 illustrates a process for creating a flatter frequency response from a normalized frequency input. Graph 66 is a detailed example showing the frequency response along with the equalization used to normalize it. Below graph 66 is an equalizer 68. Before any correction to normalize the frequency response can take place, a desired power reference level 70 is chosen. This reference level 70 establishes the output level. Reference level 70, in one exemplary manner, is calculated by averaging the valid frequency response data. Valid data includes only data points that are found in the pass-band. For example, if channel 1 contains content from 20 Hz up to 1 kHz, the data above 1 kHz will not be used to calculate the reference level. After a reference level is established, the position of each control on correction equalizer 68 can be adjusted to compensate for the incoming response. FIG. 6 illustrates an example of a response 66 and corresponding equalizer level adjustments 72 to normalize said response to a flatter response. The result of this normalization is a flatter frequency response, shown by graph 74 in FIG. 5.
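The reference-level and correction steps can be sketched as follows. The measured values and pass-band mask are illustrative only; the correction is expressed as the per-band gain in dB needed to pull each valid band toward the reference, analogous to setting the sliders of equalizer 68.

```c
#include <stdio.h>
#include <math.h>

#define NUM_BANDS 10   /* e.g. the subwoofer channel's 10-band EQ */

/* Measured per-band power for one channel; bands above the channel's
 * pass-band are marked invalid and excluded from the reference level. */
static const double measured[NUM_BANDS] = { 0.5, 0.8, 1.2, 1.4, 1.1,
                                            0.9, 0.7, 0.0, 0.0, 0.0 };
static const int    valid[NUM_BANDS]    = { 1, 1, 1, 1, 1, 1, 1, 0, 0, 0 };

int main(void)
{
    /* Step 1: reference level = average of the valid (pass-band) data points. */
    double sum = 0.0;
    int count = 0;
    for (int b = 0; b < NUM_BANDS; b++) {
        if (valid[b]) { sum += measured[b]; count++; }
    }
    double reference = sum / count;
    printf("reference level: %.3f\n", reference);

    /* Step 2: per-band correction gain (dB) that pulls each valid band
     * toward the reference, i.e. the slider position of the EQ band. */
    for (int b = 0; b < NUM_BANDS; b++) {
        if (!valid[b]) continue;
        double gain_db = 10.0 * log10(reference / measured[b]);
        printf("band %2d: measured %.2f -> correction %+5.1f dB\n",
               b, measured[b], gain_db);
    }
    return 0;
}
```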
  • Although a preferred embodiment of the present invention has been described in detail, it will be apparent to those of skill in the art that the invention may be embodied in a variety of specific forms and that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention. The described embodiments are only illustrative and not restrictive and the scope of the invention is, therefore, indicated by the following claims.

Claims (20)

1. A method for manipulating a frequency response of an audio source, comprising:
characterizing said frequency response of said audio source from an input signal, said input signal being carried on multiple channels;
storing said frequency response in a memory;
summing said multiple channels to restore missing frequency information to said input signal; and
producing an output signal having full range frequency information.
2. The method of claim 1, further comprising normalizing said frequency response to have a flatter frequency response.
3. The method of claim 1, further comprising configuring an equalizer setting from said memory to vary said output signal.
4. (canceled)
5. (canceled)
6. A method for processing an audio signal comprising:
applying an analog input signal carried on multiple inputs to a differential input;
converting said analog input signal to a digital signal;
characterizing a frequency response of said digital signal;
determining if said digital signal possesses full frequency range information;
summing said multiple inputs to recover lost frequency information when said digital signal does not possess full frequency range information;
producing an analog output signal possessing full range frequency information; and
after summing, normalizing said frequency response to have a flatter frequency response.
7. The method of claim 6, further comprising storing said frequency response in a memory.
8. The method of claim 7, further comprising configuring an equalizer setting from said memory to vary said output signal.
9. The method of claim 6, further comprising processing said analog output signal to possess a flatter frequency response.
10. The method of claim 6, further comprising providing an instruction to sum said multiple inputs from a control device.
11. The method of claim 8, further comprising establishing a reference frequency response level for providing said analog output signal with a flatter frequency response.
12. The method of claim 6, further comprising processing said analog output signal with an analog preamplifier.
13. The method of claim 6, further comprising accepting additional audio inputs from an auxiliary input.
14. A frequency normalization unit coupled between a head unit and an amplifier, comprising:
an input section that receives an analog signal from said head unit that is carried on multiple input channels;
an analog to digital converter that converts said analog signal to a digital signal;
a digital signal processor coupled to said analog to digital converter that characterizes a frequency response of said digital signal;
a microcontroller coupled to said digital signal processor, said microcontroller directs said digital signal processor to sum said multiple input channels to recover lost frequency information, thereby producing an audio signal with a full range frequency response having a non-flat frequency response, and said microcontroller corrects the audio signal in concert with a power reference level to create a restored output with a flat frequency response; and
a digital to analog converter coupled to said digital signal processor that converts said audio signal to an audio analog signal.
15. The frequency normalization unit of claim 14, wherein said microcontroller provides instructions to said digital signal processor to process said digital signal to have a flatter frequency response.
16. The frequency normalization unit of claim 14, further comprising a memory that stores said frequency response.
17. The frequency normalization unit of claim 14, further comprising a wireless communication unit coupled to said microcontroller.
18. The frequency normalization unit of claim 14, further comprising a preamplifier coupled to said digital to analog converter.
19. The frequency normalization unit of claim 14, further comprising an auxiliary input coupled to said analog to digital converter.
20. A method for restoring audio frequencies to an audio source output, comprising:
restoring missing frequencies received on multiple source channels wherein the frequencies are restored by summing to produce a full frequency range summed output comprising a non-flat frequency response and the summed output is corrected in concert with a power reference level to deliver a restored output with a flat frequency response.
US12/853,147 2005-05-24 2010-08-09 Frequency normalization of audio signals Abandoned US20100324711A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/853,147 US20100324711A1 (en) 2005-05-24 2010-08-09 Frequency normalization of audio signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/136,274 US7778718B2 (en) 2005-05-24 2005-05-24 Frequency normalization of audio signals
US12/853,147 US20100324711A1 (en) 2005-05-24 2010-08-09 Frequency normalization of audio signals

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/136,274 Continuation US7778718B2 (en) 2005-05-24 2005-05-24 Frequency normalization of audio signals

Publications (1)

Publication Number Publication Date
US20100324711A1 true US20100324711A1 (en) 2010-12-23

Family

ID=37464512

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/136,274 Active 2027-07-18 US7778718B2 (en) 2005-05-24 2005-05-24 Frequency normalization of audio signals
US12/853,147 Abandoned US20100324711A1 (en) 2005-05-24 2010-08-09 Frequency normalization of audio signals

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/136,274 Active 2027-07-18 US7778718B2 (en) 2005-05-24 2005-05-24 Frequency normalization of audio signals

Country Status (1)

Country Link
US (2) US7778718B2 (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7254243B2 (en) * 2004-08-10 2007-08-07 Anthony Bongiovi Processing of an audio signal for presentation in a high noise environment
US10848118B2 (en) 2004-08-10 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US9281794B1 (en) 2004-08-10 2016-03-08 Bongiovi Acoustics Llc. System and method for digital signal processing
US10158337B2 (en) 2004-08-10 2018-12-18 Bongiovi Acoustics Llc System and method for digital signal processing
US8284955B2 (en) 2006-02-07 2012-10-09 Bongiovi Acoustics Llc System and method for digital signal processing
US8565449B2 (en) * 2006-02-07 2013-10-22 Bongiovi Acoustics Llc. System and method for digital signal processing
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US9413321B2 (en) 2004-08-10 2016-08-09 Bongiovi Acoustics Llc System and method for digital signal processing
US9195433B2 (en) 2006-02-07 2015-11-24 Bongiovi Acoustics Llc In-line signal processor
US8229136B2 (en) * 2006-02-07 2012-07-24 Anthony Bongiovi System and method for digital signal processing
US9348904B2 (en) 2006-02-07 2016-05-24 Bongiovi Acoustics Llc. System and method for digital signal processing
US8705765B2 (en) * 2006-02-07 2014-04-22 Bongiovi Acoustics Llc. Ringtone enhancement systems and methods
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10069471B2 (en) 2006-02-07 2018-09-04 Bongiovi Acoustics Llc System and method for digital signal processing
US10701505B2 (en) 2006-02-07 2020-06-30 Bongiovi Acoustics Llc. System, method, and apparatus for generating and digitally processing a head related audio transfer function
US9615189B2 (en) 2014-08-08 2017-04-04 Bongiovi Acoustics Llc Artificial ear apparatus and associated methods for generating a head related audio transfer function
US11202161B2 (en) 2006-02-07 2021-12-14 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
SE531023C2 (en) * 2007-02-08 2008-11-18 Paer Gunnars Risberg Listening System
US8249260B2 (en) * 2007-04-13 2012-08-21 Qualcomm Incorporated Method and apparatus for audio path filter tuning
DE112009005145T5 (en) * 2009-09-14 2012-06-14 Hewlett-Packard Development Company, L.P. Electronic audio device
US20140233744A1 (en) * 2011-09-26 2014-08-21 Actiwave Ab Audio processing and enhancement system
US9344828B2 (en) 2012-12-21 2016-05-17 Bongiovi Acoustics Llc. System and method for digital signal processing
US9336791B2 (en) * 2013-01-24 2016-05-10 Google Inc. Rearrangement and rate allocation for compressing multichannel audio
US20150005661A1 (en) * 2013-02-22 2015-01-01 Max Sound Corporation Method and process for reducing tinnitus
US20140289624A1 (en) * 2013-03-22 2014-09-25 Hyundai Mobis Co.,Ltd. Multimedia system and method for interfacing between multimedia unit and audio head unit
TW201445878A (en) * 2013-05-20 2014-12-01 Chi Mei Comm Systems Inc Audio processing system and method
US9398394B2 (en) 2013-06-12 2016-07-19 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9264004B2 (en) 2013-06-12 2016-02-16 Bongiovi Acoustics Llc System and method for narrow bandwidth digital signal processing
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
US9397629B2 (en) 2013-10-22 2016-07-19 Bongiovi Acoustics Llc System and method for digital signal processing
US10820883B2 (en) 2014-04-16 2020-11-03 Bongiovi Acoustics Llc Noise reduction assembly for auscultation of a body
US10639000B2 (en) 2014-04-16 2020-05-05 Bongiovi Acoustics Llc Device for wide-band auscultation
US9615813B2 (en) 2014-04-16 2017-04-11 Bongiovi Acoustics Llc. Device for wide-band auscultation
US9564146B2 (en) 2014-08-01 2017-02-07 Bongiovi Acoustics Llc System and method for digital signal processing in deep diving environment
US9638672B2 (en) 2015-03-06 2017-05-02 Bongiovi Acoustics Llc System and method for acquiring acoustic information from a resonating body
US9736588B2 (en) * 2015-07-23 2017-08-15 Automotive Data Solutions, Inc. Digital signal router for vehicle replacement sound system
WO2017087495A1 (en) 2015-11-16 2017-05-26 Bongiovi Acoustics Llc Surface acoustic transducer
US9621994B1 (en) 2015-11-16 2017-04-11 Bongiovi Acoustics Llc Surface acoustic transducer
US20190005974A1 (en) * 2017-06-28 2019-01-03 Qualcomm Incorporated Alignment of bi-directional multi-stream multi-rate i2s audio transmitted between integrated circuits
CN112236812A (en) 2018-04-11 2021-01-15 邦吉欧维声学有限公司 Audio-enhanced hearing protection system
WO2020028833A1 (en) 2018-08-02 2020-02-06 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10848818B2 (en) * 2018-09-03 2020-11-24 Vanco International, Llc Sensing based audio signal injection
WO2020220140A1 (en) * 2019-05-02 2020-11-05 Lucid Inc. Device, method, and medium for integrating auditory beat stimulation into music
US11570548B1 (en) * 2020-05-13 2023-01-31 Stillwater Designs & Audio, Inc. System and method for augmenting vehicle sound system
CN111966949B (en) * 2020-10-20 2021-02-09 河北帝业科技有限责任公司 Comprehensive information management system based on information display platform

Patent Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4038496A (en) * 1975-05-15 1977-07-26 Audiometric Teleprocessing, Inc. Portable bekesy type diagnostic audiometer
US4182930A (en) * 1978-03-10 1980-01-08 Dbx Inc. Detection and monitoring device
US4698842A (en) * 1985-07-11 1987-10-06 Electronic Engineering And Manufacturing, Inc. Audio processing system for restoring bass frequencies
US4748669A (en) * 1986-03-27 1988-05-31 Hughes Aircraft Company Stereo enhancement system
US4739514A (en) * 1986-12-22 1988-04-19 Bose Corporation Automatic dynamic equalizing
US5027403A (en) * 1988-11-21 1991-06-25 Bose Corporation Video sound
US5208866A (en) * 1989-12-05 1993-05-04 Pioneer Electronic Corporation On-board vehicle automatic sound volume adjusting apparatus
US5339362A (en) * 1992-01-07 1994-08-16 Rockford Corporation Automotive audio system
US5363057A (en) * 1992-01-30 1994-11-08 Mitsubishi Denki Kabushiki Kaisha Control device for power amplifier
US5515446A (en) * 1992-10-02 1996-05-07 Velmer; George Electronic audio accurate reproduction system and method
US5319713A (en) * 1992-11-12 1994-06-07 Rocktron Corporation Multi dimensional sound circuit
US5617480A (en) * 1993-02-25 1997-04-01 Ford Motor Company DSP-based vehicle equalization design system
US5581621A (en) * 1993-04-19 1996-12-03 Clarion Co., Ltd. Automatic adjustment system and automatic adjustment method for audio devices
US6760451B1 (en) * 1993-08-03 2004-07-06 Peter Graham Craven Compensating filters
US5610986A (en) * 1994-03-07 1997-03-11 Miles; Michael T. Linear-matrix audio-imaging system and image analyzer
US5617478A (en) * 1994-04-11 1997-04-01 Matsushita Electric Industrial Co., Ltd. Sound reproduction system and a sound reproduction method
US5557680A (en) * 1995-04-19 1996-09-17 Janes; Thomas A. Loudspeaker system for producing multiple sound images within a listening area from dual source locations
US5636244A (en) * 1995-07-26 1997-06-03 Motorola, Inc. Method and apparatus for initializing equalizer coefficents using peridioc training sequences
US6718039B1 (en) * 1995-07-28 2004-04-06 Srs Labs, Inc. Acoustic correction apparatus
US6104750A (en) * 1996-08-15 2000-08-15 Lsi Logic Corporation Method and apparatus for transmission line equalization
US6470087B1 (en) * 1996-10-08 2002-10-22 Samsung Electronics Co., Ltd. Device for reproducing multi-channel audio by using two speakers and method therefor
US5790481A (en) * 1996-11-21 1998-08-04 Meitner; Edmund Retrofitable CD player system
US6341166B1 (en) * 1997-03-12 2002-01-22 Lsi Logic Corporation Automatic correction of power spectral balance in audio source material
US6252968B1 (en) * 1997-09-23 2001-06-26 International Business Machines Corp. Acoustic quality enhancement via feedback and equalization for mobile multimedia systems
US6584204B1 (en) * 1997-12-11 2003-06-24 The Regents Of The University Of California Loudspeaker system with feedback control for improved bandwidth and distortion reduction
US6115476A (en) * 1998-06-30 2000-09-05 Intel Corporation Active digital audio/video signal modification to correct for playback system deficiencies
US7031474B1 (en) * 1999-10-04 2006-04-18 Srs Labs, Inc. Acoustic correction apparatus
US20010031060A1 (en) * 2000-01-07 2001-10-18 Carver Robert W. Compact speaker system
US6662018B1 (en) * 2000-06-28 2003-12-09 Northrop Grumman Corporation Analog power control system for a multi-carrier transmitter
US7181402B2 (en) * 2000-08-24 2007-02-20 Infineon Technologies Ag Method and apparatus for synthetic widening of the bandwidth of voice signals
US6898470B1 (en) * 2000-11-07 2005-05-24 Cirrus Logic, Inc. Digital tone controls and systems using the same
US7027497B2 (en) * 2000-12-19 2006-04-11 Ntt Docomo, Inc. Adaptive equalization method and adaptive equalizer
US7593533B2 (en) * 2001-02-09 2009-09-22 Thx Ltd. Sound system and method of sound reproduction
US20020154783A1 (en) * 2001-02-09 2002-10-24 Lucasfilm Ltd. Sound system and method of sound reproduction
US20060088175A1 (en) * 2001-05-07 2006-04-27 Harman International Industries, Incorporated Sound processing system using spatial imaging techniques
US20050018860A1 (en) * 2001-05-07 2005-01-27 Harman International Industries, Incorporated: Sound processing system for configuration of audio signals in a vehicle
US20030161479A1 (en) * 2001-05-30 2003-08-28 Sony Corporation Audio post processing in DVD, DTV and other audio visual products
US7058144B2 (en) * 2001-08-07 2006-06-06 Conexant, Inc. Intelligent control system and method for compensation application in a wireless communications system
US20040013272A1 (en) * 2001-09-07 2004-01-22 Reams Robert W System and method for processing audio data
US7359671B2 (en) * 2001-10-30 2008-04-15 Unwired Technology Llc Multiple channel wireless communication system
US20030179891A1 (en) * 2002-03-25 2003-09-25 Rabinowitz William M. Automatic audio system equalizing
US20040091120A1 (en) * 2002-11-12 2004-05-13 Kantor Kenneth L. Method and apparatus for improving corrective audio equalization
US20050063201A1 (en) * 2003-01-16 2005-03-24 Yukio Yamazaki Switching power supply device
US20050094829A1 (en) * 2003-11-03 2005-05-05 Cordell Robert R. System and method for achieving extended low-frequency response in a loudspeaker system
US20050135631A1 (en) * 2003-11-19 2005-06-23 Hajime Yoshino Automatic sound field correcting device and computer program therefor
US20050195993A1 (en) * 2004-03-04 2005-09-08 Lg Electronics Inc. Method and apparatus of compensating for speaker distortion in an audio apparatus
US7302062B2 (en) * 2004-03-19 2007-11-27 Harman Becker Automotive Systems Gmbh Audio enhancement system
US20060013295A1 (en) * 2004-07-09 2006-01-19 Maarten Kuijk Multistage tuning-tolerant equalizer filter
US20060034472A1 (en) * 2004-08-11 2006-02-16 Seyfollah Bazarjani Integrated audio codec with silicon audio transducer
US7167515B2 (en) * 2004-10-27 2007-01-23 Jl Audio, Inc. Method and system for equalization of a replacement load
US20060147057A1 (en) * 2004-12-30 2006-07-06 Harman International Industries, Incorporated Equalization system to improve the quality of bass sounds within a listening area
US20050094822A1 (en) * 2005-01-08 2005-05-05 Robert Swartz Listener specific audio reproduction system
US20060245485A1 (en) * 2005-04-28 2006-11-02 Intel Corporation Continuous-time equalizer
US20070025559A1 (en) * 2005-07-29 2007-02-01 Harman International Industries Incorporated Audio tuning system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116475A1 (en) * 2007-10-02 2009-05-07 Openpeak, Inc. System and method for inter-processor communication
US8755309B2 (en) * 2007-10-02 2014-06-17 Id8 Group R2 Studios, Inc. System and method for inter-processor communication
CN106063291A (en) * 2013-11-06 2016-10-26 珍尼雷克公司 Method and device for storing equalization settings in active loudspeaker

Also Published As

Publication number Publication date
US20060271215A1 (en) 2006-11-30
US7778718B2 (en) 2010-08-17

Similar Documents

Publication Publication Date Title
US7778718B2 (en) Frequency normalization of audio signals
CN103886866B (en) System and method for Digital Signal Processing
CN103888103B (en) system and method for digital signal processing
US7852239B2 (en) Method and system for processing multi-rate audio from a plurality of audio processing sources
US7949419B2 (en) Method and system for controlling gain during multipath multi-rate audio processing
CN103210668B (en) For upwards mixed method and the system of multi-channel audio regeneration
GB2416653A (en) Distributed sound enhancement
US20100057474A1 (en) Method and system for digital gain processing in a hardware audio codec for audio transmission
CA2357200C (en) Listening device
JPH10307592A (en) Data distributing system for on-vehicle audio device
US6931139B1 (en) Computer audio system
EP2421283B1 (en) Extraction of channels from multichannel signals utilizing stimulus
JP3889546B2 (en) Level adjustment circuit
US7986796B2 (en) Apparatus to generate multi-channel audio signals and method thereof
US10558424B1 (en) Speaker device with equalization control
US6529787B2 (en) Multimedia computer speaker system with bridge-coupled subwoofer
CN108737936A (en) The volume control of personal sound area
CN104490402B (en) PCI active noise control card
US20150365061A1 (en) System and method for modifying an audio signal
CN103518384A (en) Speaker for reproducing surround sound
US6711270B2 (en) Audio reproducing apparatus
US10506340B2 (en) Digital signal processor for audio, in-vehicle audio system and electronic apparatus including the same
TWI567731B (en) System and method for digital signal processing
KR20040075358A (en) Multichannel echo canceller system using active audio matrix coefficients
CN101146374A (en) Low voice managing method and audio system with low voice management

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION