Overlap–save method - Wikipedia

From Wikipedia, the free encyclopedia

In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x[n] and a finite impulse response (FIR) filter h[n]:

$y[n] = x[n]*h[n]\ \triangleq\ \sum_{m=-\infty}^{\infty} h[m]\cdot x[n-m] = \sum_{m=1}^{M} h[m]\cdot x[n-m],$     (Eq.1)

where h[m] = 0 for m outside the region [1, M]. This article uses common abstract notations, such as y(t) = x(t) ∗ h(t) or y(t) = H{x(t)}, in which it is understood that the functions should be thought of in their totality, rather than at specific instants t (see Convolution § Notation).

Fig 1: A sequence of four plots depicts one cycle of the overlap–save convolution algorithm. The 1st plot is a long sequence of data to be processed with a lowpass FIR filter. The 2nd plot is one segment of the data to be processed in piecewise fashion. The 3rd plot is the filtered segment, with the usable portion colored red. The 4th plot shows the filtered segment appended to the output stream.[A] The FIR filter is a boxcar lowpass with M=16 samples, the length of the segments is L=100 samples and the overlap is 15 samples.

The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. That requires longer input segments that overlap the next input segment. The overlapped data gets "saved" and used a second time.[1] First we describe that process with just conventional convolution for each output segment. Then we describe how to replace that convolution with a more efficient method.

Consider a segment that begins at n = kL + M, for any integer k, and define:

$x_k[n]\ \triangleq\ \begin{cases} x[n+kL], & 1\leq n\leq L+M-1 \\ 0, & \text{otherwise}. \end{cases}$

$y_k[n]\ \triangleq\ x_k[n]*h[n] = \sum_{m=1}^{M} h[m]\cdot x_k[n-m].$

Then, for kL + M + 1 ≤ n ≤ kL + L + M, and equivalently M + 1 ≤ n − kL ≤ L + M, we can write:

$y[n] = \sum_{m=1}^{M} h[m]\cdot x_k[n-kL-m]\ \ \triangleq\ \ y_k[n-kL].$

With the substitution j = n − kL, the task is reduced to computing yk[j] for M + 1 ≤ j ≤ L + M. These steps are illustrated in the first three traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1 ≤ j ≤ L.[B]
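The segmentation identity above can be checked numerically. The following sketch (an illustration with parameters chosen here, not from the article) uses 0-based indexing, so segment k is x_k[j] = x[j + kL] for 0 ≤ j ≤ L+M−2, and its linear convolution with h reproduces the full output on indices kL+M−1 ... kL+L+M−2:

```python
import numpy as np

# Illustration of the segmentation identity y[n] = y_k[n - kL]
# (0-indexed here; M, L, and the test signal are chosen arbitrarily).
rng = np.random.default_rng(0)
M, L = 16, 100
h = rng.standard_normal(M)
x = rng.standard_normal(1000)
y = np.convolve(x, h)              # full convolution, for reference

k = 3                              # any segment that fits inside x
x_k = x[k*L : k*L + L + M - 1]     # input segment of length L + M - 1
y_k = np.convolve(x_k, h)          # segment convolution

# Samples M-1 ... L+M-2 of the segment convolution are the desired outputs:
assert np.allclose(y_k[M-1 : L+M-1], y[k*L + M - 1 : k*L + L + M - 1])
```

The first M−1 samples of each segment convolution depend on data before the segment, which is why they are recomputed from the overlapped portion of the next segment instead.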

If we periodically extend xk[n] with period N  ≥  L + M − 1, according to:

$x_{k,N}[n]\ \triangleq\ \sum_{\ell=-\infty}^{\infty} x_k[n-\ell N],$

the convolutions (xk,N) ∗ h and xk ∗ h are equivalent in the region M + 1 ≤ n ≤ L + M. It is therefore sufficient to compute the N-point circular (or cyclic) convolution of xk[n] with h[n] in the region [1, N]. The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:

$y_k[n]\ =\ \text{IDFT}_N\big(\,\text{DFT}_N(x_k[n])\cdot \text{DFT}_N(h[n])\,\big),$     (Eq.2)

where DFT_N and IDFT_N refer to the discrete Fourier transform and its inverse, evaluated over N discrete points. L is customarily chosen such that N = L + M − 1 is an integer power of 2, and the transforms are implemented with the FFT algorithm, for efficiency.
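Eq.2 can be verified numerically. In this sketch (parameters chosen here for illustration, with 0-based indexing), the DFT-based circular convolution agrees with the linear convolution of the segment everywhere except the first M−1 samples, which suffer wrap-around from the periodic extension and are the ones discarded:

```python
import numpy as np

# Numerical check of Eq.2 with arbitrary illustrative parameters.
rng = np.random.default_rng(0)
M, L = 16, 100
N = L + M - 1                    # the minimum allowed period
h = rng.standard_normal(M)
x_k = rng.standard_normal(N)     # one input segment

linear = np.convolve(x_k, h)     # linear convolution, length N + M - 1
y_k = np.fft.ifft(np.fft.fft(x_k) * np.fft.fft(h, N)).real   # Eq.2

# The first M-1 outputs are corrupted by the periodic extension ...
assert not np.allclose(y_k[:M-1], linear[:M-1])
# ... while the remaining N-M+1 outputs match the linear convolution.
assert np.allclose(y_k[M-1:], linear[M-1:N])
```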

Pseudocode

(Overlap–save algorithm for linear convolution)
h = FIR_impulse_response
M = length(h)
overlap = M − 1
N = 8 × overlap    (see next section for a better choice)
step_size = N − overlap
H = DFT(h, N)
position = 0

while position + N ≤ length(x)
    yt = IDFT(DFT(x(position+(1:N))) × H)
    y(position+(1:step_size)) = yt(M : N)    (discard the first M − 1 values of yt)
    position = position + step_size
end
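The pseudocode translates almost line-for-line into NumPy. The following sketch (an illustration, not reference code from the article; 0-indexed, and leaving the final partial block unprocessed, as the pseudocode does) can be checked against direct convolution. Note that output sample y[j] corresponds to sample j + M − 1 of the full linear convolution, because the algorithm never emits the first M − 1 transient outputs:

```python
import numpy as np

def overlap_save(x, h, N=None):
    """NumPy transcription of the pseudocode above (0-indexed).

    Returns the filtered stream y and the number of valid samples;
    y[j] corresponds to sample j + M - 1 of the full linear convolution.
    """
    M = len(h)
    overlap = M - 1
    if N is None:
        N = 8 * overlap                  # see the next section for a better choice
    step_size = N - overlap
    H = np.fft.fft(h, N)                 # h is zero-padded to length N
    y = np.zeros(len(x))
    position = 0
    while position + N <= len(x):        # final partial block is not processed
        yt = np.fft.ifft(np.fft.fft(x[position:position + N]) * H).real
        y[position:position + step_size] = yt[overlap:]   # discard M-1 values
        position += step_size
    return y, position

# Check against direct linear convolution on the fully computed region.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = rng.standard_normal(16)
y, valid = overlap_save(x, h)
assert np.allclose(y[:valid], np.convolve(x, h)[15:15 + valid])
```

Taking the real part after the inverse FFT assumes real-valued x and h; for complex signals the `.real` would simply be dropped.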
Efficiency considerations

Fig 2: A graph of the values of N (an integer power of 2) that minimize the cost function $\tfrac{N(\log_2 N+1)}{N-M+1}$

When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N(log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT.[E] Each iteration produces N − M + 1 output samples, so the number of complex multiplications per output sample is about:

$\frac{N(\log_2(N)+1)}{N-M+1}.$     (Eq.3)

For example, when M = 201 and N = 1024, Eq.3 equals 13.67, whereas direct evaluation of Eq.1 would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, Eq.3 has a minimum with respect to N. Figure 2 is a graph of the values of N that minimize Eq.3 for a range of filter lengths (M).
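Eq.3 and the minimization in Figure 2 are easy to reproduce (the search range below is chosen here for illustration):

```python
from math import log2

# Eq.3: complex multiplications per output sample, assuming a radix-2 FFT.
def cost_per_sample(N, M):
    return N * (log2(N) + 1) / (N - M + 1)

M = 201
print(round(cost_per_sample(1024, M), 2))    # 13.67, matching the text

# The power-of-two N minimizing Eq.3 for this M (cf. Figure 2);
# the search starts at 2**8 = 256, the smallest power of two exceeding M.
best_N = min((2**k for k in range(8, 21)), key=lambda N: cost_per_sample(N, M))
print(best_N)                                # 2048
```

So for this filter length, N = 1024 is close to, but not exactly at, the optimum.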

Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length $N_x$ samples. The total number of complex multiplications would be:

$N_x\cdot (\log_2(N_x)+1).$

Comparatively, the number of complex multiplications required by the pseudocode algorithm is:

$N_x\cdot (\log_2(N)+1)\cdot \frac{N}{N-M+1}.$

Hence the cost of the overlap–save method scales almost as $O(N_x\log_2 N)$, while the cost of a single, large circular convolution is almost $O(N_x\log_2 N_x)$.
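A concrete comparison of the two totals (the values of Nx, M, and N below are chosen here for illustration, continuing the Eq.3 example):

```python
from math import log2

# Filtering Nx samples with an M-tap filter: one large circular
# convolution versus the block-wise overlap-save pseudocode.
Nx, M, N = 2**20, 201, 1024
single_fft = Nx * (log2(Nx) + 1)                   # one large circular convolution
piecewise = Nx * (log2(N) + 1) * N / (N - M + 1)   # overlap-save total

# Overlap-save wins because its per-sample cost grows with log2(N),
# which is fixed, rather than log2(Nx), which grows with the input.
assert piecewise < single_fft
```

The gap widens as the input gets longer, since only the single-FFT cost depends on log2(Nx).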

Overlap–discard[2] and overlap–scrap[3] are less commonly used names for the same method described here. However, these labels are actually better than "overlap–save" at distinguishing it from overlap–add, because both methods "save" samples, but only one discards any. "Save" merely refers to the fact that M − 1 input (or output) samples from segment k are needed to process segment k + 1.

Extending overlap–save

The overlap–save algorithm can be extended to include other common operations of a system:[F][4]

  1. ^ Rabiner and Gold, Fig 2.35, fourth trace.
  2. ^ Shifting the undesirable edge effects to the last M − 1 outputs is a potential run-time convenience, because the IDFT can be computed in the y[n] buffer, instead of being computed and copied. Then the edge effects can be overwritten by the next IDFT. A subsequent footnote explains how the shift is done, by a time-shift of the impulse response.
  3. ^ Not to be confused with the Overlap-add method, which preserves separate leading and trailing edge-effects.
  4. ^ The edge effects can be moved from the front to the back of the IDFT output by replacing DFT_N(h[n]) with DFT_N(h[n + M − 1]) = DFT_N(h[n + M − 1 − N]), meaning that the N-length buffer is circularly shifted (rotated) by M − 1 samples. Thus the h(M) element is at n = 1, the h(M − 1) element is at n = N, h(M − 2) at n = N − 1, etc.
  5. ^ The Cooley–Tukey FFT algorithm for N = 2^k needs (N/2)·log2(N) complex multiplications – see FFT § Definition and speed.
  6. ^ Carlin et al. 1999, p 31, col 20.
  1. Rabiner, Lawrence R.; Gold, Bernard (1975). "2.25". Theory and application of digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 63–67. ISBN 0-13-914101-4.
  2. US patent 6898235, Carlin, Joe; Collins, Terry & Hays, Peter et al., "Wideband communication intercept and direction finding device using hyperchannelization", published 1999-12-10, issued 2005-05-24 , also available at https://patentimages.storage.googleapis.com/4d/39/2a/cec2ae6f33c1e7/US6898235.pdf
