The present invention relates to a method of synthesizing a mono audio signal based on an available coded multichannel audio signal. It is assumed that the coded multichannel audio signal includes individual parameter values for each channel of the multichannel audio signal for at least a portion of an audio frequency band. In order to reduce the processing load in synthesizing the mono audio signal, it is proposed that the parameter values of the multiple channels are combined in the parameter domain for at least a portion of the audio frequency band. The combined parameter values are then used to synthesize the mono audio signal. The present invention relates equally to corresponding audio decoders, corresponding encoding systems and corresponding software program products.
Description Translated from Korean
Synthesizing a mono audio signal based on an encoded multichannel audio signal
The present invention relates to a method of synthesizing a mono audio signal based on an available coded multichannel audio signal, wherein the coded multichannel audio signal includes individual parameter values for each channel of the multichannel audio signal for at least a portion of an audio frequency band. The present invention relates equally to corresponding audio decoders, corresponding encoding systems and corresponding software program products.
Audio coding systems are well known in the art. They are used in particular for transmitting or storing audio signals.

An audio coding system used for transmitting audio signals comprises an encoder at the transmitting end and a decoder at the receiving end. The transmitting end and the receiving end may be, for example, mobile terminals. The audio signal to be transmitted is provided to the encoder. The encoder is responsible for adapting the incoming audio data rate to a bitrate level at which the bandwidth conditions in the transmission channel are not violated. Ideally, the encoder discards only irrelevant information from the audio signal in this encoding process. The encoded audio signal is then transmitted by the transmitting end of the audio coding system and received at the receiving end of the audio coding system. The decoder at the receiving end reverses the encoding process to obtain a decoded audio signal with little or no audible degradation.

When the audio coding system is used for storing audio data, the encoded audio data provided by the encoder is stored in a storage unit, and the decoder decodes the audio data retrieved from this storage unit, for example for presentation by a media player. In this alternative, the aim is for the encoder to achieve the lowest possible bitrate in order to save storage space.
Depending on the allowed bitrate, different coding schemes can be applied to the audio signal.
In most cases, the lower and higher frequency bands of an audio signal are correlated with each other. Audio codec bandwidth extension algorithms therefore typically first split the bandwidth of the audio signal to be encoded into two frequency bands. The low frequency band is then processed independently by a so-called core codec, while the high frequency band is processed using knowledge of the signals and coding parameters of the low frequency band. Using parameters from the low frequency band coding for the high frequency band significantly reduces the bitrate resulting from the high band coding.
FIG. 1 shows a typical split band encoding and decoding system. The system comprises an audio encoder 10 and an audio decoder 20. The audio encoder 10 comprises a two band analysis filterbank 11, a low band encoder 12 and a high band encoder 13. The audio decoder 20 comprises a low band decoder 21, a high band decoder 22 and a two band synthesis filterbank 23. The low band encoder 12 and decoder 21 may be, for example, an Adaptive Multi-Rate Wideband (AMR-WB) standard encoder and decoder, while the high band encoder 13 and decoder 22 may employ an independent coding algorithm, a bandwidth extension algorithm, or a combination of both. By way of example, it is assumed that the presented system uses the extended AMR-WB (AMR-WB+) codec as the split band coding algorithm.

An input audio signal 1 is first processed by the two band analysis filterbank 11, which splits the audio frequency band into a low frequency band and a high frequency band. For illustration, FIG. 2 shows an example of the frequency response of the two band filterbank for the case of AMR-WB+. The 12 kHz audio band is split into a 0 kHz to 6.4 kHz band L and a 6.4 kHz to 12 kHz band H. In the two band analysis filterbank 11, the resulting frequency bands are moreover critically down-sampled: the low frequency band is down-sampled to 12.8 kHz and the high frequency band is re-sampled to 11.2 kHz.
The low frequency band and the high frequency band are then encoded independently of each other by the low band encoder 12 and the high band encoder 13, respectively.
The low band encoder 12 therefore contains complete source signal coding algorithms. These algorithms comprise an algebraic code excited linear prediction (ACELP) type algorithm and a transform based algorithm. The algorithm actually used is selected based on the signal characteristics of the respective input audio signal. The ACELP algorithm is typically selected for encoding speech signals and transients, while the transform based algorithm is typically selected for encoding music and tone-like signals, in order to better handle the frequency resolution.

In the AMR-WB+ codec, the high band encoder 13 uses linear predictive coding (LPC) to model the spectral envelope of the high frequency band signal. The high frequency band can thus be described by LPC synthesis filter coefficients, which define the spectral characteristics of the synthesized signal, and by gain coefficients for the excitation signal, which control the amplitude of the synthesized high frequency band audio signal. The high band excitation signal is copied from the low band encoder 12. Only the LPC coefficients and the gain coefficients are provided for transmission.
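As an illustration only, and not the bit-exact AMR-WB+ procedure, the kind of per-channel high band parameters described above can be sketched as follows: an LPC envelope obtained with the autocorrelation method and a gain coefficient, taken here simply as the RMS of the prediction residual, whereas in the actual codec the gain is matched against the excitation copied from the low band. The frame length, the LPC order and the helper name highband_parameters are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def highband_parameters(frame, lpc_order=8):
    """Illustrative sketch: return (lpc, gain) for one high-band frame."""
    frame = np.asarray(frame, dtype=float)
    # Autocorrelation method: solve the normal equations R a = r.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    r[0] *= 1.0 + 1e-5                        # mild regularization
    a = solve_toeplitz(r[:lpc_order], r[1:lpc_order + 1])
    lpc = np.concatenate(([1.0], -a))         # A(z) = 1 - sum_k a_k z^{-k}
    residual = lfilter(lpc, [1.0], frame)     # prediction residual
    gain = float(np.sqrt(np.mean(residual ** 2) + 1e-12))
    return lpc, gain
```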
The outputs of the low band encoder 12 and the high band encoder 13 are multiplexed into a single bit stream 2.

The multiplexed bit stream 2 is transmitted to the audio decoder 20, for example via a communication channel, where the low frequency band and the high frequency band are decoded separately.
In the low band decoder 21, the processing of the low band encoder 12 is reversed in order to synthesize the low frequency band audio signal.

In the high band decoder 22, an excitation signal is generated by re-sampling the low frequency band excitation provided by the low band decoder 21 to the sampling rate used in the high frequency band. That is, the low frequency band excitation signal is reused for decoding the high frequency band by transposing it to the high frequency band. Alternatively, a random excitation signal could be generated for the reconstruction of the high frequency band signal. The high frequency band signal is then reconstructed by filtering the scaled excitation signal through the high band LPC model defined by the LPC coefficients.
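The per-channel high band reconstruction just described can be summarized in the following hedged sketch. It is a simplification under stated assumptions: the 7/8 resampling factor corresponds to the 12.8 kHz to 11.2 kHz rate change mentioned above, a single gain per frame is assumed (AMR-WB+ actually uses several sub-frame gains), and the helper name synthesize_highband is not taken from the codec specification.

```python
import numpy as np
from scipy.signal import lfilter, resample_poly

def synthesize_highband(lowband_excitation, gain, lpc):
    """Sketch: reconstruct one channel's high-band frame from the low-band excitation."""
    # 12.8 kHz -> 11.2 kHz corresponds to a rational resampling factor of 7/8.
    excitation = resample_poly(np.asarray(lowband_excitation, dtype=float), up=7, down=8)
    scaled = gain * excitation                 # apply the transmitted gain
    return lfilter([1.0], lpc, scaled)         # shape with the 1/A(z) synthesis filter
```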
In the two band synthesis filterbank 23, the decoded low frequency band and high frequency band signals are up-sampled to the original sampling frequency and combined into the synthesized output audio signal 3.
The input audio signal 1 to be encoded may be a mono audio signal or a multichannel audio signal comprising at least a first and a second channel signal. An example of a multichannel audio signal is a stereo audio signal, which is composed of a left channel signal and a right channel signal.
For stereo operation of the AMR-WB+ codec, the input audio signal is split into a low frequency band signal and a high frequency band signal in the two band analysis filterbank 11, as before. The low band encoder 12 generates a mono signal by combining the left and right channel signals in the low frequency band. This mono signal is encoded as described above. In addition, the low band encoder 12 uses parametric coding to encode the differences of the left and right channel signals relative to the mono signal. The high band encoder 13 encodes the left channel and the right channel separately, by determining individual LPC coefficients and gain coefficients for each channel.

If the input audio signal 1 is a multichannel audio signal but the device presenting the synthesized audio signal 3 does not support multichannel audio output, the incoming multichannel bit stream 2 has to be converted into a mono audio signal by the audio decoder 20. In the low frequency band, converting the multichannel signal into a mono signal is straightforward, because the low band decoder 21 can simply skip the stereo parameters in the received bit stream and decode only the mono part. For the high frequency band, however, more processing is required, since no separate mono signal portion is available for the high frequency band in the bit stream.
Conventionally, the stereo bit stream for the high frequency band is decoded separately for the left and right channel signals, and the mono signal is then generated by combining the left and right channel signals in a down-mixing process. This approach is illustrated in FIG. 3.
FIG. 3 schematically shows details of the high band decoder 22 of FIG. 1 for a mono audio signal output. To this end, the high band decoder comprises a left channel processor 30 and a right channel processor 33. The left channel processor 30 comprises a mixer 31 connected to an LPC synthesis filter 32. The right channel processor 33 equally comprises a mixer 34 connected to an LPC synthesis filter 35. The outputs of both LPC synthesis filters 32, 35 are connected to a further mixer 36.

The low frequency band excitation signal provided by the low band decoder 21 is supplied to both mixers 31 and 34. The mixer 31 applies the gain coefficients for the left channel to the low frequency band excitation signal. The left channel high band signal is then reconstructed by the LPC synthesis filter 32, by filtering the scaled excitation signal through the high band LPC model defined by the LPC coefficients for the left channel. The mixer 34 applies the gain coefficients for the right channel to the low frequency band excitation signal. The right channel high band signal is then reconstructed by the LPC synthesis filter 35, by filtering the scaled excitation signal through the high band LPC model defined by the LPC coefficients for the right channel.

The reconstructed left channel high frequency band signal and the reconstructed right channel high frequency band signal are then combined by the mixer 36 into a mono high frequency band signal by calculating their average in the time domain.
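The conventional down-mixing of FIG. 3 can be summarized with the following sketch, which reuses the hypothetical synthesize_highband helper sketched earlier; it makes the cost visible: two full synthesis filter runs are needed even though only one mono output is wanted.

```python
def conventional_highband_downmix(lowband_excitation, gain_l, lpc_l, gain_r, lpc_r):
    """FIG. 3 approach: synthesize both channels, then average in the time domain."""
    left = synthesize_highband(lowband_excitation, gain_l, lpc_l)    # blocks 31/32
    right = synthesize_highband(lowband_excitation, gain_r, lpc_r)   # blocks 34/35
    return 0.5 * (left + right)                                      # mixer 36
```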
This is in principle a simple and effective approach. However, even though only a single channel signal is needed, it requires a separate synthesis of the multiple channels.

Furthermore, if the multichannel audio input signal 1 is unbalanced in such a way that most of the energy of the multichannel audio signal is present in one of the channels, directly mixing the multiple channels by calculating their average will result in an attenuation of the combined signal. In the extreme case, one of the channels is completely silent, which results in a level of the combined signal that is only half the level of the original active input channel.
It is an object of the present invention to reduce the processing load required for synthesizing a mono audio signal based on an encoded multichannel audio signal.
A method of synthesizing a mono audio signal based on an available coded multichannel audio signal is proposed, wherein the coded multichannel audio signal includes individual parameter values for each channel of the multichannel audio signal for at least a portion of an audio frequency band. The proposed method comprises combining the parameter values of the multiple channels in the parameter domain for at least a portion of the audio frequency band. The proposed method further comprises synthesizing a mono audio signal for this portion of the audio frequency band using the combined parameter values.
Moreover, an audio decoder for synthesizing a mono audio signal based on an available coded multichannel audio signal is proposed. The coded multichannel audio signal includes individual parameter values for each channel of the multichannel audio signal for at least a portion of the frequency band of an original multichannel audio signal. The proposed audio decoder comprises at least one parameter selector suitable for combining the parameter values of the multiple channels in the parameter domain for at least a portion of the frequency band of the multichannel audio signal. The proposed audio decoder further comprises an audio signal synthesizer suitable for synthesizing a mono audio signal for at least a portion of the frequency band of the multichannel audio signal based on the combined parameter values provided by the parameter selector.

Moreover, an encoding system is proposed which comprises, in addition to the proposed decoder, an audio encoder providing an encoded multichannel audio signal.

Finally, a software program product is proposed, in which software code for synthesizing a mono audio signal based on an available encoded multichannel audio signal is stored. The encoded multichannel audio signal includes individual parameter values for each channel of the multichannel audio signal for at least a portion of the frequency band of an original multichannel audio signal. The proposed software code implements the steps of the proposed method when executed in an audio decoder.

The encoded multichannel audio signal may be, in particular, an encoded stereo audio signal, but is not limited thereto.
The invention proceeds from the consideration that a separate decoding of the available multiple channels can be avoided if the parameter values available for the multiple channels are already combined in the parameter domain before the decoding carried out to obtain a mono audio signal. The combined parameter values can then be used for a single channel decoding.

It is an advantage of the invention that it allows the processing load and the complexity of the decoder to be reduced. If the multiple channels are stereo channels processed in a split band system, for example, about half of the processing load required for the high frequency band synthesis filtering can be saved, compared to performing a separate high frequency band synthesis filtering for both channels and mixing the resulting left and right channel signals.
In one embodiment of the invention, the parameters comprise gain coefficients for each of the multiple channels and linear prediction coefficients for each of the multiple channels.
Combining the parameter values can be implemented in a static manner, for example by always calculating the average of the parameter values available for all channels. Advantageously, however, the combining of the parameter values is controlled, for at least one parameter, based on information about the respective activity in the multiple channels. This allows a mono audio signal to be achieved whose spectral characteristics and signal level are as close as possible to the spectral characteristics and the signal level of the respective active channel, and thereby an improved audio quality of the synthesized mono audio signal.

If the activity in a first channel is significantly higher than in a second channel, the first channel can be assumed to be an active channel, while the second channel can be assumed to be a silent channel that makes essentially no audible contribution to the original audio signal. If a silent channel is present, the parameter values of at least one parameter for this channel are advantageously ignored entirely when combining the parameter values. As a result, the synthesized mono signal will resemble the active channel. In all other cases, the parameter values can be combined, for example, by forming an average or a weighted average over all channels. For a weighted average, the weight assigned to a channel increases with its activity relative to the other channel or channels. Other methods may equally be used to realize the combination. Likewise, the parameter values of a silent channel that are not to be discarded may be combined with the parameter values of the active channel by averaging or by some other method, as in the sketch below.
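A minimal sketch of one possible activity-controlled combination follows. It assumes that a per-channel activity measure (for example the frame energy or an averaged gain) is available; the silence threshold and the energy-proportional weighting are illustrative choices and are not prescribed by the description.

```python
import numpy as np

def combine_parameter(values_per_channel, activity_per_channel, silence_fraction=0.01):
    """Weighted average of one parameter over the channels; channels whose
    activity falls below silence_fraction of the total are discarded."""
    activity = np.asarray(activity_per_channel, dtype=float)
    weights = np.where(activity > silence_fraction * activity.sum(), activity, 0.0)
    total = weights.sum()
    if total == 0.0:                          # all channels silent: plain average
        weights = np.full(len(activity), 1.0 / len(activity))
    else:
        weights = weights / total
    return sum(w * np.asarray(v) for w, v in zip(weights, values_per_channel))
```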
Various types of information may constitute the information about the respective activity in the multiple channels. It may be provided, for example, by a gain coefficient for each of the multiple channels, by a combination of short-term gain coefficients for each of the multiple channels, or by linear prediction coefficients for each of the multiple channels. The activity information may equally be provided by an energy level in at least a portion of the frequency band of the multichannel audio signal for each of the multiple channels, or by individual side information about the activity received from the encoder providing the encoded multichannel audio signal.

In order to obtain an encoded multichannel audio signal, an original multichannel audio signal may be split, for example, into a low frequency band signal and a high frequency band signal. The low frequency band signal may then be encoded in a conventional manner. The high frequency band signal may also be encoded in a conventional manner, separately for the multiple channels, which results in the parameter values for each of the multiple channels. At least the encoded high frequency band portion of the entire encoded multichannel audio signal can then be processed according to the invention.

It is to be understood that the multichannel parameter values of the low frequency band portion of the overall signal may equally be processed according to the invention, in order to avoid an imbalance between the low frequency band and the high frequency band, for example an imbalance of the signal level. Alternatively, the parameter values for silent channels in the high frequency band which influence the signal level may on principle not be discarded, while only the parameter values for silent channels which influence the spectral characteristics of the signal may be discarded.
The present invention can be implemented, for example, in an AMR-WB+ based coding system, but is not limited thereto.

Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings.
FIG. 1 is a schematic block diagram of a split band coding system.
FIG. 2 is a diagram of the frequency response of a two band filterbank.
FIG. 3 is a schematic block diagram of a conventional high band decoder for stereo-to-mono conversion.
FIG. 4 is a schematic block diagram of a high band decoder for stereo-to-mono conversion according to a first embodiment of the invention.
FIG. 5 is a diagram illustrating the frequency responses of stereo signals and of the mono signal resulting with the high band decoder of FIG. 4.
FIG. 6 is a schematic block diagram of a high band decoder for stereo-to-mono conversion according to a second embodiment of the invention.
FIG. 7 is a flow chart illustrating the operation in a system using the high band decoder of FIG. 6.
FIG. 8 is a flow chart illustrating a first option for the parameter combination in the flow chart of FIG. 7.
FIG. 9 is a flow chart illustrating a second option for the parameter combination in the flow chart of FIG. 7.
The invention is assumed in the following to be implemented in the system of FIG. 1, which will also be referred to below. A stereo input audio signal 1 is provided to the audio encoder 10 for encoding, while a decoded mono audio signal 3 is to be provided by the audio decoder 20 for presentation.

In order to be able to provide such a mono audio signal 3 with a low processing load, the high band decoder 22 of the system can be implemented according to a first, simple embodiment of the invention.
FIG. 4 is a schematic block diagram of this high band decoder 22. The low band excitation input of the high band decoder 22 is connected to the output of the high band decoder 22 via a mixer 40 and an LPC synthesis filter 41. The high band decoder 22 further comprises a gain average calculator 42 connected to the mixer 40 and an LPC average calculator 43 connected to the LPC synthesis filter 41.

The system operates as follows.
The stereo signal input to the audio encoder 10 is split into a low frequency band and a high frequency band by the two band analysis filterbank 11. The low band encoder 12 encodes the low frequency band audio signal as described above. The AMR-WB+ high band encoder 13 encodes the high band stereo signal separately for the left and right channels. In particular, it determines linear prediction coefficients and gain coefficients for each channel, as described above.
The encoded mono low frequency band signal, the stereo low frequency band parameter values and the stereo high frequency band parameter values are transmitted in a bit stream 2 to the audio decoder 20.

The low band decoder 21 receives the low frequency band portion of the bit stream for decoding. In the decoding, it skips the stereo parameters and decodes only the mono part. The result is a mono low frequency band audio signal.

The high band decoder 22 receives, on the one hand, the high frequency band parameter values from the transmitted bit stream and, on the other hand, the low band excitation signal output by the low band decoder 21.
The high frequency band parameters comprise left channel gain coefficients, right channel gain coefficients, left channel LPC coefficients and right channel LPC coefficients. In the gain average calculator 42, the respective gain coefficients for the left channel and the right channel are averaged, and the averaged gain coefficient is used by the mixer 40 to scale the low band excitation signal. The resulting signal is provided for filtering to the LPC synthesis filter 41.

In the LPC average calculator 43, the respective linear prediction coefficients for the left channel and the right channel are combined. In AMR-WB+, the combination of the LPC coefficients from both channels can be performed, for example, by calculating the average of the received coefficients in the Immittance Spectral Pair (ISP) domain. The averaged coefficients are then used to construct the LPC synthesis filter 41, to which the scaled low band excitation signal is subjected.
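A hedged sketch of this first embodiment is given below. The functions lpc_to_isp and isp_to_lpc stand for the conversions between the LPC and ISP representations, which are defined by AMR-WB+ but not reproduced here; averaging is done in the ISP domain because averaging raw LPC coefficient vectors can yield unstable filters. The helper synthesize_highband is the per-channel routine sketched earlier.

```python
def combine_highband_params_avg(gain_l, gain_r, lpc_l, lpc_r, lpc_to_isp, isp_to_lpc):
    """FIG. 4: plain averaging of the stereo high-band parameters."""
    gain_mono = 0.5 * (gain_l + gain_r)                          # gain average calculator 42
    isp_mono = 0.5 * (lpc_to_isp(lpc_l) + lpc_to_isp(lpc_r))     # LPC average calculator 43
    return gain_mono, isp_to_lpc(isp_mono)

# The mono high band is then synthesized only once, e.g.:
#   gain_m, lpc_m = combine_highband_params_avg(g_l, g_r, a_l, a_r, lpc_to_isp, isp_to_lpc)
#   mono_highband = synthesize_highband(lowband_excitation, gain_m, lpc_m)
```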
The scaled and filtered low band excitation signal forms the desired mono high band audio signal.

The mono low band audio signal and the mono high band audio signal are combined in the two band synthesis filterbank 23, and the resulting synthesized signal 3 is output for presentation.
Compared to a system using the high band decoder of FIG. 3, a system using the high band decoder of FIG. 4 has the advantage that it requires only about half of the processing power for generating the synthesized signal, since the synthesized signal is generated only once.
It should be noted, however, that the above-mentioned problem of a possible attenuation of the combined signal remains in the case of a stereo audio input with an active signal in only one of the channels.
Moreover, for stereo audio input signals with only one active channel, averaging the linear prediction coefficients causes, as an unwanted side effect, a 'flattening' of the spectrum of the resulting combined signal. Instead of having the spectral characteristics of the active channel, the combined signal has somewhat distorted spectral characteristics, due to the combination of the 'real' spectrum of the active channel with the essentially flat or random-like spectrum of the silent channel.

This effect is illustrated in FIG. 5. FIG. 5 shows the amplitude over frequency for three different LPC synthesis filter frequency responses, calculated for a frame of 80 ms. The solid line represents the LPC synthesis filter frequency response of the active channel. The dotted line represents the LPC synthesis filter frequency response of the silent channel. The dashed line represents the LPC synthesis filter frequency response resulting when the LPC models of both channels are averaged in the ISP domain. It can be seen that the averaged LPC filter produces a spectrum that does not closely resemble either of the real spectra. In practice, this may be perceived as a reduced audio quality in the high frequency band.

In order to be able to provide a mono audio signal 3 not only with a low processing load but also in a way that avoids the shortcomings remaining with the high band decoder of FIG. 4, the high band decoder 22 of the system of FIG. 1 can be implemented according to a second embodiment of the invention.
FIG. 6 is a schematic block diagram of such a high band decoder 22. The low band excitation input of the high band decoder 22 is connected to the output of the high band decoder 22 via a mixer 60 and an LPC synthesis filter 61. The high band decoder 22 further comprises a gain selection logic 62 connected to the mixer 60 and an LPC selection logic 63 connected to the LPC synthesis filter 61.
The processing in a system using the high band decoder 22 of FIG. 6 will now be described with reference to FIG. 7. FIG. 7 is a flow chart showing at the top the processing in the audio encoder 10 and at the bottom the processing in the audio decoder 20 of the system. The upper and lower parts are separated by a horizontal dashed line.
The stereo audio signal input 1 to the encoder is split into a low frequency band and a high frequency band by the two band analysis filterbank 11. The low band encoder 12 encodes the low frequency band. The AMR-WB+ high band encoder 13 encodes the high frequency band separately for the left and right channels. In particular, it determines linear prediction coefficients and dedicated gain coefficients for both channels as high frequency band parameters.

The encoded mono low frequency band signal, the stereo low frequency band parameter values and the stereo high frequency band parameter values are transmitted in a bit stream 2 to the audio decoder 20.

The low band decoder 21 receives the low frequency band related portion of the bit stream 2 and decodes this portion. In the decoding, the low band decoder 21 skips the received stereo parameters and decodes only the mono part. The result is a mono low band audio signal.
The high band decoder 22 receives, on the one hand, the left channel gain coefficients, the right channel gain coefficients, the linear prediction coefficients for the left channel and the linear prediction coefficients for the right channel, and, on the other hand, the low band excitation signal output by the low band decoder 21. The left channel gains and the right channel gains are used at the same time as channel activity information. It should be noted that, instead, any other channel activity information indicating the distribution of the activity between the left channel and the right channel in the high frequency band could be provided as an additional parameter by the high band encoder 13.

The channel activity information is evaluated, and the gain coefficients for the left channel and the right channel are combined by the gain selection logic 62 into a single gain coefficient in accordance with this evaluation. The selected gain is then applied by the mixer 60 to the low frequency band excitation signal provided by the low band decoder 21.

Moreover, the LPC coefficients for the left channel and the right channel are combined by the LPC model selection logic 63 into a single set of LPC coefficients in accordance with this evaluation. The combined LPC model is supplied to the LPC synthesis filter 61. The LPC synthesis filter 61 applies the selected LPC model to the scaled low frequency band excitation signal provided by the mixer 60.

The resulting high frequency band audio signal is then combined with the mono low frequency band audio signal in the two band synthesis filterbank 23 into a mono full band audio signal, which may be output for presentation by an application or a device that is not able to handle stereo audio signals.
The proposed evaluation of the channel activity information and the subsequent combination of the parameter values, represented in the flow chart of FIG. 7 by the block with double lines, can be implemented in different ways. Two options will be presented with reference to the flow charts of FIGS. 8 and 9.

In the first option, shown in FIG. 8, the gain coefficients for the left channel are first averaged over the duration of one frame and, likewise, the gain coefficients for the right channel are averaged over the duration of one frame.

The averaged right channel gain is then subtracted from the averaged left channel gain, resulting in a gain difference for each frame.

If the gain difference is smaller than a first threshold, the combined gain coefficients for the frame are set equal to the gain coefficients provided for the right channel. Moreover, the combined LPC models for the frame are set equal to the LPC models provided for the right channel.

If the gain difference is larger than a second threshold, the combined gain coefficients for the frame are set equal to the gain coefficients provided for the left channel. Moreover, the combined LPC models for the frame are set equal to the LPC models provided for the left channel.
In all other cases, the combined gain coefficients for the frame are set equal to the average of the respective gain coefficient for the left channel and the respective gain coefficient for the right channel. The combined LPC models for the frame are set equal to the average of the respective LPC model for the left channel and the respective LPC model for the right channel.

The first threshold and the second threshold are selected depending on the required sensitivity and on the type of application for which the stereo-to-mono conversion is needed. Suitable values are, for example, -20 dB for the first threshold and 20 dB for the second threshold.
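The following is a hedged sketch of this first option. It assumes linear, positive gain values whose frame averages are compared on a dB scale, and it averages the LPC coefficient vectors directly for brevity; in practice the LPC models would be averaged in a stable representation such as the ISP domain, as in the first embodiment. The -20 dB and 20 dB thresholds are the example values given above.

```python
import numpy as np

def combine_params_option1(gains_l, gains_r, lpc_l, lpc_r, thr_low_db=-20.0, thr_high_db=20.0):
    """FIG. 8: per-frame selection or averaging of gains and LPC model."""
    diff_db = 20.0 * np.log10(np.mean(gains_l) / np.mean(gains_r))
    if diff_db < thr_low_db:          # left channel effectively silent
        return np.asarray(gains_r), np.asarray(lpc_r)
    if diff_db > thr_high_db:         # right channel effectively silent
        return np.asarray(gains_l), np.asarray(lpc_l)
    gains = 0.5 * (np.asarray(gains_l) + np.asarray(gains_r))
    lpc = 0.5 * (np.asarray(lpc_l) + np.asarray(lpc_r))   # simplification, see lead-in
    return gains, lpc
```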
Thus, if due to a large difference in the averaged gain coefficients one of the channels can be considered a silent channel while the other channel can be considered the active channel during a respective frame, the LPC models and gain coefficients of the silent channel are ignored for the duration of that frame. This is possible because the silent channel makes no audible contribution to the mixed audio output. This combination of the parameters ensures that the spectral characteristics and the signal level are as close as possible to those of the respective active channel.

It should be noted that, instead of skipping the stereo parameters, the low band decoder could equally form combined parameter values and apply them to the mono portion of the signal, just as described for the high frequency band processing.
In the second option for combining the parameter values, shown in FIG. 9, the gain coefficients for the left channel and the gain coefficients for the right channel are again each averaged over the duration of one frame.

The averaged right channel gain is then subtracted from the averaged left channel gain, resulting in a gain difference for each frame.

If the gain difference is smaller than a first, low threshold, the combined LPC models for the frame are set equal to the LPC models provided for the right channel.

If the gain difference is larger than a second, high threshold, the combined LPC models for the frame are set equal to the LPC models provided for the left channel.

In all other cases, the combined LPC coefficients for the frame are set equal to the average of the respective LPC model for the left channel and the respective LPC model for the right channel.
In all cases, the combined gain coefficients for the frame are set equal to the average of the respective gain coefficient for the left channel and the respective gain coefficient for the right channel.
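A hedged sketch of this second option follows, reusing the example thresholds and the same simplifications as the sketch for the first option: only the LPC model follows the silence decision, while the gains are always averaged so that the level balance with the unmixed low band is preserved.

```python
import numpy as np

def combine_params_option2(gains_l, gains_r, lpc_l, lpc_r, thr_low_db=-20.0, thr_high_db=20.0):
    """FIG. 9: LPC model selected per frame, gains always averaged."""
    diff_db = 20.0 * np.log10(np.mean(gains_l) / np.mean(gains_r))
    if diff_db < thr_low_db:                               # left channel effectively silent
        lpc = np.asarray(lpc_r)
    elif diff_db > thr_high_db:                            # right channel effectively silent
        lpc = np.asarray(lpc_l)
    else:
        lpc = 0.5 * (np.asarray(lpc_l) + np.asarray(lpc_r))
    gains = 0.5 * (np.asarray(gains_l) + np.asarray(gains_r))
    return gains, lpc
```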
The LPC coefficients have a direct influence only on the spectral characteristics of the synthesized signal. Combining only the LPC coefficients therefore results in the desired spectral characteristics, but does not solve the problem of the signal attenuation. If the low frequency band is not mixed according to the invention, however, this has the advantage that the balance between the low frequency band and the high frequency band is preserved. Preserving the signal level in the high frequency band would change the balance between the low frequency band and the high frequency band, possibly resulting in comparatively too strong signals in the high frequency band and thus in a reduced perceived audio quality.

It should be noted that the described embodiments constitute only a few of a wide variety of possible embodiments, which may further be modified in many ways.
Claims (40)

Claims 1-20. (deleted)

21. A method of synthesizing a mono audio signal based on an available coded multichannel audio signal, wherein the coded multichannel audio signal comprises individual parameter values for each channel of the multichannel audio signal for at least a portion of an audio frequency band, the method comprising, for at least the portion of the audio frequency band:
- combining the parameter values of the multiple channels in the parameter domain; and
- synthesizing a mono audio signal using the combined parameter values,
wherein the combining of the parameter values is controlled, for at least one parameter, based on information on the respective activity in the multiple channels.

22. The method of claim 21, wherein the parameters comprise gain coefficients for each of the multiple channels and linear prediction coefficients for each of the multiple channels.

23. The method of claim 21 or 22, wherein the information on the respective activity in the multiple channels comprises at least one of:
- a gain coefficient for each of the multiple channels;
- a combination of the gain coefficients for each of the multiple channels over a short period of time;
- the linear prediction coefficients for each of the multiple channels;
- an energy level in at least the portion of the frequency band of the multichannel audio signal for each of the multiple channels; and
- individual side information on the activity received from an encoding end providing the coded multichannel audio signal.

24. The method of claim 21 or 22, wherein, in case the information on the activity in the multiple channels indicates that the activity in a first one of the multiple channels is significantly lower than in at least one other of the multiple channels, the value of at least one parameter available for the first channel is neglected.

25. The method of claim 24, wherein, in case the information on the activity in the multiple channels indicates that the activity in a first one of the multiple channels is significantly lower than in at least one other of the multiple channels, the values of at least one other parameter available for the multiple channels are averaged.

26. The method of claim 21 or 22, wherein, in case the information on the activity in the multiple channels does not indicate that the activity in one of the multiple channels is significantly lower than in at least one other of the multiple channels, the values of the parameters available for the multiple channels are averaged.

27. The method of claim 21 or 22, wherein the multichannel signal is a stereo signal.

28. The method of claim 21 or 22, further comprising the preceding steps of splitting an original multichannel audio signal into a low-frequency band signal and a high-frequency band signal, encoding the low-frequency band signal, and encoding the high-frequency band signal separately for the multiple channels, resulting in said parameter values for each of the multiple channels, wherein at least the parameter values resulting for the high-frequency band signal are combined for synthesizing the mono audio signal.

29. An audio decoder for synthesizing a mono audio signal based on an available coded multichannel audio signal, wherein the coded multichannel audio signal comprises individual parameter values for each channel of the multichannel audio signal for at least a portion of the frequency band of an original multichannel audio signal, the audio decoder comprising:
- at least one parameter selector combining the parameter values of the multiple channels in the parameter domain for at least the portion of the frequency band of the multichannel audio signal; and
- an audio signal synthesizer synthesizing a mono audio signal for at least the portion of the frequency band of the multichannel audio signal based on the combined parameter values provided by the at least one parameter selector,
wherein the parameter selector combines the parameter values, for at least one parameter, based on information on the respective activity in the multiple channels.

30. The audio decoder of claim 29, wherein the parameters comprise gain coefficients for each of the multiple channels and linear prediction coefficients for each of the multiple channels.

31. The audio decoder of claim 29 or 30, wherein the information on the respective activity in the multiple channels comprises at least one of:
- a gain coefficient for each of the multiple channels;
- a combination of the gain coefficients for each of the multiple channels over a short period of time;
- the linear prediction coefficients for each of the multiple channels;
- an energy level in at least the portion of the frequency band of the multichannel audio signal for each of the multiple channels; and
- individual side information on the activity received from an encoding end providing the coded multichannel audio signal.

32. The audio decoder of claim 29 or 30, wherein, in case the information on the activity in the multiple channels indicates that the activity in a first one of the multiple channels is significantly lower than in at least one other of the multiple channels, the parameter selector neglects in the combining the value of at least one parameter available for the first one of the multiple channels.

33. The audio decoder of claim 32, wherein, in case the information on the activity in the multiple channels indicates that the activity in a first one of the multiple channels is significantly lower than in at least one other of the multiple channels, the parameter selector averages in the combining the values of at least one other parameter available for the multiple channels.

34. The audio decoder of claim 29 or 30, wherein, in case the information on the activity in the multiple channels does not indicate that the activity in one of the multiple channels is significantly lower than in at least one other of the multiple channels, the parameter selector averages in the combining the values of the parameters available for the multiple channels.

35. The audio decoder of claim 29 or 30, wherein the multichannel signal is a stereo signal.

36. A mobile terminal comprising an audio decoder according to claim 29 or 30.

37. An encoding system comprising an audio encoder providing a coded multichannel audio signal and an audio decoder according to claim 29 or 30, wherein the coded multichannel audio signal comprises individual parameter values for each channel of the multichannel audio signal for at least a portion of the frequency band of an original multichannel audio signal.

38. The encoding system of claim 37, wherein the audio encoder comprises an evaluation element determining information on the activity in the multiple channels and providing this information for use by the audio decoder.

39. A computer-readable medium having stored thereon software code for synthesizing a mono audio signal based on an available coded multichannel audio signal, wherein the coded multichannel audio signal comprises individual parameter values for each channel of the multichannel audio signal for at least a portion of the frequency band of an original multichannel audio signal, and wherein the software code, when executed in an audio decoder, realizes the steps of the method according to claim 21 or 22.

40. (deleted)
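To make the parameter-domain combination recited in claims 21 to 26 and 29 to 34 more concrete, a minimal Python sketch follows. It assumes a per-channel parameter set consisting of a gain coefficient and linear prediction coefficients together with a scalar activity measure (for instance a short-term gain or a band energy level); the class and function names and the relative activity threshold are illustrative assumptions, not part of the claimed codec.

```python
# Sketch only: one possible way to combine per-channel parameter values in the
# parameter domain, controlled by channel-activity information. The relative
# threshold of 0.1 is an assumed value, not taken from the patent.
from dataclasses import dataclass
from typing import List


@dataclass
class ChannelParams:
    gain: float          # gain coefficient of this channel
    lpc: List[float]     # linear prediction coefficients of this channel
    activity: float      # activity measure, e.g. short-term gain or band energy


def combine_parameters(channels: List[ChannelParams],
                       rel_threshold: float = 0.1) -> ChannelParams:
    """Combine the channels' parameter values into one set for mono synthesis.

    Channels whose activity is significantly lower than that of the most
    active channel are neglected; the parameter values of the remaining
    channels are averaged (compare claims 24-26 and 32-34).
    """
    max_activity = max(ch.activity for ch in channels)
    active = [ch for ch in channels
              if ch.activity >= rel_threshold * max_activity] or list(channels)

    n = len(active)
    gain = sum(ch.gain for ch in active) / n
    order = len(active[0].lpc)
    lpc = [sum(ch.lpc[k] for ch in active) / n for k in range(order)]
    return ChannelParams(gain=gain, lpc=lpc, activity=max_activity)


if __name__ == "__main__":
    # Stereo example: the right channel is almost silent, so the combined
    # parameter set is effectively that of the left channel.
    left = ChannelParams(gain=0.9, lpc=[-0.5, 0.25], activity=0.8)
    right = ChannelParams(gain=0.05, lpc=[-0.1, 0.05], activity=0.01)
    print(combine_parameters([left, right]))
```

In a practical codec the linear prediction coefficients would more likely be averaged in an LSP or ISF representation to keep the synthesis filter well behaved; the direct averaging above simply mirrors the claim wording.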
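The decoder of claims 29 to 35 is defined structurally by a parameter selector and an audio signal synthesizer. The sketch below, again purely illustrative, arranges these two elements so that only a single synthesis filter run per frame is needed instead of one run per channel followed by a downmix; the all-pole synthesis filter, the excitation signal, the frame length and the threshold are assumptions and not the codec specified in the claims.

```python
# Illustrative decoder skeleton only: a parameter selector feeds one combined
# parameter set per frame into a single synthesis filter.
import numpy as np


class ParameterSelector:
    """Combines per-channel (gain, LPC) values in the parameter domain."""

    def __init__(self, rel_threshold: float = 0.1):   # assumed threshold
        self.rel_threshold = rel_threshold

    def combine(self, gains, lpcs, activities):
        max_act = max(activities)
        idx = [i for i, a in enumerate(activities)
               if a >= self.rel_threshold * max_act]
        gain = float(np.mean([gains[i] for i in idx]))
        lpc = np.mean([lpcs[i] for i in idx], axis=0)
        return gain, lpc


class AudioSignalSynthesizer:
    """Synthesizes a mono frame from one combined parameter set."""

    def synthesize(self, excitation, gain, lpc):
        order = len(lpc)
        y = np.zeros(len(excitation) + order)   # leading zeros act as filter memory
        for n, e in enumerate(excitation):
            past = y[n:n + order][::-1]          # y[n-1], ..., y[n-order]
            # All-pole synthesis: y[n] = gain * e[n] - sum_k lpc[k] * y[n-1-k]
            y[n + order] = gain * e - float(np.dot(lpc, past))
        return y[order:]


if __name__ == "__main__":
    selector = ParameterSelector()
    synthesizer = AudioSignalSynthesizer()

    # Two channels: left active, right nearly silent (stereo case of claim 35).
    gains = [0.9, 0.05]
    lpcs = [np.array([-0.9, 0.2]), np.array([-0.1, 0.05])]
    activities = [0.8, 0.01]

    gain, lpc = selector.combine(gains, lpcs, activities)
    excitation = np.random.default_rng(0).standard_normal(160)  # one 10 ms frame at 16 kHz
    mono_frame = synthesizer.synthesize(excitation, gain, lpc)
    print(mono_frame[:5])
```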
KR1020067017564A, priority date 2004-03-12, filing date 2004-03-12: Synthesizing a mono audio signal based on an encoded multichannel audio signal. Granted as KR100923478B1 (en); status: Expired - Lifetime.

Priority Applications (1)
- KR1020067017564A / KR100923478B1 (en), priority date 2004-03-12, filing date 2004-03-12: Synthesizing a mono audio signal based on an encoded multichannel audio signal

Applications Claiming Priority (1)
- KR1020067017564A / KR100923478B1 (en), priority date 2004-03-12, filing date 2004-03-12: Synthesizing a mono audio signal based on an encoded multichannel audio signal

Related Child Applications (1)
- KR1020087015171A (division), published as KR20080059685A (en), dates 2008-06-20 / 2004-03-12: Method and apparatus for synthesizing mono audio signal based on encoded multichannel audio signal

Publications (2); Family ID: 37707389

Family Applications (1)
- KR1020067017564A / KR100923478B1 (en), priority date 2004-03-12, filing date 2004-03-12, Expired - Lifetime: Synthesizing a mono audio signal based on an encoded multichannel audio signal

Families Citing this family (1) (* cited by examiner, † cited by third party)
- WO2020216459A1 (en) *, priority date 2019-04-23, publication date 2020-10-29, Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.: Apparatus, method or computer program for generating an output downmix representation

Citations (6) (* cited by examiner, † cited by third party)
- EP0372601A1 (en) *, priority date 1988-11-10, publication date 1990-06-13, Koninklijke Philips Electronics N.V.: Coder for incorporating extra information in a digital audio signal having a predetermined format, decoder for extracting such extra information from a digital signal, device for recording a digital signal on a record carrier, comprising such a coder, and record carrier obtained by means of such a device
- EP0402973A1 (en) *, priority date 1989-06-02, publication date 1990-12-19, Koninklijke Philips Electronics N.V.: Digital transmission system, transmitter and receiver for use in the transmission system, and record carrier obtained by means of the transmitter in the form of a recording device
- US5274740A (en) *, priority date 1991-01-08, publication date 1993-12-28, Dolby Laboratories Licensing Corporation: Decoder for variable number of channel presentation of multidimensional sound fields
- US5878080A (en), priority date 1996-02-08, publication date 1999-03-02, U.S. Philips Corporation: N-channel transmission, compatible with 2-channel transmission and 1-channel transmission
- EP1376538A1 (en) *, priority date 2002-06-24, publication date 2004-01-02, Agere Systems Inc.: Hybrid multi-channel/cue coding/decoding of audio signals
- EP1377123A1 (en) *, priority date 2002-06-24, publication date 2004-01-02, Agere Systems Inc.: Equalization for audio mixing

Legal Events
- 2006-08-30: International patent application (event code PA01051R01D); request for examination (PA0201)
- 2006-11-29: Laying open of application (PG1501)
- 2007-09-28: Notification of reason for refusal (E902 / PE09021S01D)
- 2008-03-26: Decision to refuse application (E601 / PE06012S01D)
- 2008-06-20: Divisional application for the international application filed (A107 / PA01041R01D); request for trial against the decision of refusal (J201 / PJ02012R01D), appeal identifier 2008101005866
- 2009-07-27: Trial decision on the appeal against the decision of refusal requested 2008-06-20 (J301 / PJ13011S01D), appeal identifier 2008101005866, effective date 2009-07-27
- 2009-07-28: Examination by remand of revocation (S901 / PS0901); notice of trial decision, remand of revocation (PS07011S01I)
- 2009-07-29: Decision to grant registration after remand of revocation (GRNO / PS07012S01D)
- 2009-10-19: Written decision to grant; registration of establishment (GRNT / PR07011E01D); registration fee paid 2009-10-20 covering annual years 1 to 3 (PR1002)
- 2009-10-27: Publication of registration (PG1601)
- 2012 to 2023: Annual fee payments for years 4 to 15 (FPAY / PR1001), paid 2012-09-24 (year 4), 2013-09-26 (year 5), 2014-09-23 (year 6), 2015-09-18 (year 7), 2016-09-21 (year 8), 2017-09-19 (year 9), 2018-09-18 (year 10), 2019-09-17 (year 11), 2020-09-28 (year 12), 2021-09-15 (year 13), 2022-09-15 (year 14), 2023-09-18 (year 15)
- 2024-03-13: Expiration of term (PC1801); termination date 2024-09-12, termination category: expiration of duration