18.4. Sparsity in Orthonormal Bases#

We start this section with a quick review of orthonormal bases and orthogonal transforms for finite dimensional signals $x \in \mathbb{C}^N$. We look at several examples of sparse signals in different orthonormal bases. We then demonstrate that while an orthonormal basis is a complete representation for all signals $x \in \mathbb{C}^N$, it is not an adequate tool for exploiting the sparsity present in $x$.

We present an uncertainty principle which explains why a pair of orthonormal bases (like the Dirac and Fourier bases) cannot provide sparse representations of the same signal simultaneously.

We then demonstrate that a combination of two orthonormal bases can be quite useful in creating a redundant yet sparse representation of a larger class of signals which could not be sparsely represented in either of the two bases individually.

This motivates us to discuss more general over-complete signal dictionaries in the next section.

18.4.1. Orthonormal Bases and Orthogonal Transforms#

In signal processing, we often convert a finite length time domain signal into a different domain using finite length transforms. Some of the most common are the discrete Fourier transform, the discrete cosine transform, and the Haar transform. They all belong to the class of transforms called orthogonal transforms.

Orthogonal transforms are characterized by a pair of equations

(18.5)#$$x = \Psi a$$

and

(18.6)#$$a = \Psi^H x$$

where $\Psi$ is an $N \times N$ matrix whose columns form an orthonormal basis for the complex vector space $\mathbb{C}^N$. In particular, the columns of $\Psi$ are unit norm and orthogonal to each other. Thus, if we write

$$\Psi = \begin{bmatrix} \psi_1 & \psi_2 & \dots & \psi_N \end{bmatrix}$$

then

$$\langle \psi_i, \psi_j \rangle = \delta(i - j).$$

In other words:

$$\Psi^{-1} = \Psi^H.$$

Eq. (18.5) is known as the synthesis equation ($x$ is synthesized by the columns of $\Psi$). Eq. (18.6) is known as the analysis equation since we compute the coefficients in $a$ by taking inner products of $x$ with the columns of $\Psi$.

$\Psi$ is known as the synthesis operator while $\Psi^H$ is known as the analysis operator. Orthogonal transforms preserve the norm of the signal:

$$\| x \|_2^2 = \| a \|_2^2.$$

This result is commonly known as Parseval's identity in the signal processing community.

More generally, orthogonal transforms preserve inner products:

$$\langle x, y \rangle = \langle \Psi x, \Psi y \rangle \quad \forall \; x, y \in \mathbb{C}^N.$$
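These identities are easy to check numerically. Below is a minimal sketch (assuming NumPy; the basis here is an arbitrary random unitary matrix obtained via QR, not one of the named transforms):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 16

# Example orthonormal basis: the QR factorization of a random complex
# matrix yields a unitary Q, so its columns are orthonormal.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Psi, _ = np.linalg.qr(A)

# Psi^{-1} = Psi^H: columns are unit norm and mutually orthogonal.
assert np.allclose(Psi.conj().T @ Psi, np.eye(N))

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

a = Psi.conj().T @ x              # analysis equation (18.6)
assert np.allclose(Psi @ a, x)    # synthesis equation (18.5) recovers x

# Parseval's identity: ||x||_2^2 == ||a||_2^2
assert np.isclose(np.linalg.norm(x) ** 2, np.linalg.norm(a) ** 2)

# Inner product preservation: <x, y> == <Psi x, Psi y>
assert np.isclose(np.vdot(y, x), np.vdot(Psi @ y, Psi @ x))
```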

Example 18.7 (Dirac basis and sparse signals)

The simplest orthogonal transform is given by the identity basis, i.e. the standard ordered basis for $\mathbb{C}^N$:

$$\Psi = I_N.$$

We have

$$\Psi^{-1} = \Psi^H = I_N^H = I_N.$$

In this basis

$$x = I_N a = a.$$

We will drop the subscript $N$ for convenience and refer to the matrix simply as $I$.

This basis is also known as the Dirac basis. The name comes from the Dirac delta function used in continuous time signal analysis.

The basis consists of finite length impulses denoted by $e_i$ where

$$e_1 = (1, 0, \dots, 0), \quad e_2 = (0, 1, \dots, 0), \quad \dots, \quad e_N = (0, 0, \dots, 1).$$

If a signal $x$ consists of a linear combination of a few ($K \ll N$) impulses, then it is a sparse signal in this basis. For example, the signal

$$x = (3, 4, 0, 0, -2, 0, 0, \dots, 0, 0, 0)$$

is 3-sparse in the Dirac basis since

$$x = 3 e_1 + 4 e_2 - 2 e_5$$

can be expressed as a linear combination of just 3 impulses.

In contrast, if we consider a complex sinusoid in $\mathbb{C}^N$, there is no way we can find a sparse representation for it in the Dirac basis.
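A quick numerical restatement of this example (a sketch assuming NumPy; $N = 10$ is an arbitrary choice): in the Dirac basis the analysis operator is the identity, so the coefficient vector is the signal itself and its sparsity is simply the number of nonzeros.

```python
import numpy as np

N = 10
x = np.zeros(N)
x[0], x[1], x[4] = 3, 4, -2      # x = 3 e_1 + 4 e_2 - 2 e_5

Psi = np.eye(N)                  # Dirac basis
a = Psi.T @ x                    # analysis: a = Psi^H x = x
assert np.array_equal(a, x)      # the representation equals the signal
assert np.count_nonzero(a) == 3  # ||a||_0 = 3, i.e. 3-sparse
```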

18.4.2. Fourier Basis and Sparse Signals#

The most popular finite length orthogonal transform is the DFT (Discrete Fourier Transform).

We define the $N$-th root of unity as

$$\omega = \exp\left(\frac{i 2 \pi}{N}\right).$$

Clearly

$$\omega^N = \exp(i 2 \pi) = 1.$$

We define the synthesis matrix of the DFT as

$$\Psi = F_N = \frac{1}{\sqrt{N}} \left[ \omega^{k n} \right]_{0 \leq k \leq N - 1, \; 0 \leq n \leq N - 1}$$

where

$$\omega^{k n} = \exp\left(\frac{i 2 \pi k n}{N}\right).$$

$k$ iterates over the rows of $F_N$ while $n$ iterates over the columns of $F_N$.

The definition is symmetric in $k$ and $n$. Hence

$$F_N = F_N^T.$$

Note that we have multiplied by $\frac{1}{\sqrt{N}}$ to make sure that the columns of $F_N$ are unit norm.

The columns of $F_N$ form the Fourier basis for signals in $\mathbb{C}^N$.
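The synthesis matrix $F_N$ can be built directly from this definition. A minimal sketch (assuming NumPy, with the positive-exponent convention used above):

```python
import numpy as np

def fourier_basis(N):
    """Unitary DFT synthesis matrix F_N with entries omega^{kn} / sqrt(N)."""
    k = np.arange(N).reshape(-1, 1)   # row index
    n = np.arange(N).reshape(1, -1)   # column index
    return np.exp(2j * np.pi * k * n / N) / np.sqrt(N)

F = fourier_basis(8)
assert np.allclose(F, F.T)                          # F_N = F_N^T (symmetric)
assert np.allclose(F.conj().T @ F, np.eye(8))       # columns are orthonormal
assert np.allclose(np.linalg.norm(F, axis=0), 1.0)  # unit norm columns
```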

Example 18.8 (Fourier basis for $N = 2$)

The 2nd root of unity is given by

$$\omega = \exp(i \pi) = -1.$$

Hence

$$F_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} \omega^0 & \omega^0 \\ \omega^0 & \omega^1 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$

In this case

$$F_2^H = F_2.$$

Example 18.9 (Fourier basis for $N = 3$)

The 3rd root of unity is given by

$$\omega = \exp\left(\frac{i 2 \pi}{3}\right) = -0.5 + 0.866 i.$$

Hence

$$F_3 = \frac{1}{\sqrt{3}} \begin{bmatrix} \omega^0 & \omega^0 & \omega^0 \\ \omega^0 & \omega^1 & \omega^2 \\ \omega^0 & \omega^2 & \omega^4 \end{bmatrix} = \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & 1 \\ 1 & -0.5 + 0.866 i & -0.5 - 0.866 i \\ 1 & -0.5 - 0.866 i & -0.5 + 0.866 i \end{bmatrix}.$$

In this case

$$F_3^H = \frac{1}{\sqrt{3}} \begin{bmatrix} 1 & 1 & 1 \\ 1 & -0.5 - 0.866 i & -0.5 + 0.866 i \\ 1 & -0.5 + 0.866 i & -0.5 - 0.866 i \end{bmatrix}.$$

Example 18.10 (Fourier basis for $N = 4$)

The 4th root of unity is given by

$$\omega = \exp\left(\frac{i 2 \pi}{4}\right) = i.$$

Hence

$$F_4 = \frac{1}{2} \begin{bmatrix} \omega^0 & \omega^0 & \omega^0 & \omega^0 \\ \omega^0 & \omega^1 & \omega^2 & \omega^3 \\ \omega^0 & \omega^2 & \omega^4 & \omega^6 \\ \omega^0 & \omega^3 & \omega^6 & \omega^9 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} \omega^0 & \omega^0 & \omega^0 & \omega^0 \\ \omega^0 & \omega^1 & \omega^2 & \omega^3 \\ \omega^0 & \omega^2 & 1 & \omega^2 \\ \omega^0 & \omega^3 & \omega^2 & \omega^1 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{bmatrix}.$$

In this case

$$F_4^H = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{bmatrix}.$$

We drop the subscript $N$ wherever convenient and simply refer to the synthesis matrix as $F$.

If a signal $x$ is a linear combination of only a few ($K \ll N$) complex sinusoids, then $x$ has a sparse representation in the DFT basis $F$.

Example 18.11 (Sparse signals in $F$)

Consider the following signal

$$x = \begin{pmatrix} -0.5 \\ 0.5 - i \\ 1.5 \\ 0.5 + i \end{pmatrix}.$$

Its representation in $F_4$ is given by

$$a = F_4^H x = \begin{pmatrix} 1 \\ -2 \\ 0 \\ 0 \end{pmatrix}.$$

Clearly the signal is 2-sparse in $F_4$.

Now consider the signal $e_2$, which is sparse in the Dirac basis:

$$e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}.$$

Its representation in $F_4$ is

$$a = F_4^H e_2 = \begin{pmatrix} 0.5 \\ -0.5 i \\ -0.5 \\ 0.5 i \end{pmatrix}.$$

Thus we see that while $e_2$ is sparse in $I_4$, it is not at all sparse in $F_4$.
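These computations can be verified numerically; the sketch below (assuming NumPy) rebuilds $F_4$ from the definition with $\omega = i$:

```python
import numpy as np

N = 4
k = np.arange(N)
F4 = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # omega = i

x = np.array([-0.5, 0.5 - 1j, 1.5, 0.5 + 1j])
a = F4.conj().T @ x                     # analysis in the Fourier basis
assert np.allclose(a, [1, -2, 0, 0])    # 2-sparse in F_4

e2 = np.array([0, 1, 0, 0])
b = F4.conj().T @ e2                    # an impulse analyzed in F_4
assert np.allclose(b, [0.5, -0.5j, -0.5, 0.5j])
assert np.count_nonzero(np.abs(b) > 1e-12) == N   # fully dense
```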

18.4.3. An Uncertainty Principle#

As we noted, the Dirac basis can give sparse representations for impulses but not for complex sinusoids. Conversely, the Fourier basis can give sparse representations for complex sinusoids but not for impulses.

Can we claim that a signal cannot be simultaneously sparsely represented both in the time domain (Dirac basis) and in the frequency domain (Fourier basis)?

More generally, let $\Psi$ and $X$ be any two arbitrary orthonormal bases for $\mathbb{C}^N$. For some $x \in \mathbb{C}^N$, let

$$x = \Psi a = X b$$

where $a$ and $b$ are the representations of $x$ in $\Psi$ and $X$ respectively. Can we claim something about the relationship between the sparsity levels $\|a\|_0$ and $\|b\|_0$?

The answer turns out to be yes, but it depends on how much the two bases are similar or close to each other. The results in this section were originally developed in [31].

Definition 18.1 (Proximity/Mutual coherence)

The proximity between two orthonormal bases $\Psi$ and $X$, where

$$\Psi = \begin{bmatrix} \psi_1 & \dots & \psi_N \end{bmatrix}$$

and

$$X = \begin{bmatrix} \chi_1 & \dots & \chi_N \end{bmatrix},$$

is defined as the maximum absolute value of inner products between the columns of the two bases:

$$\mu(\Psi, X) = \max_{1 \leq i, j \leq N} | \langle \psi_i, \chi_j \rangle |.$$

This is also known as the mutual coherence of the two orthonormal bases.

If the two bases are identical, then clearly $\mu = 1$. If any vector in $\Psi$ is very close to some vector in $X$, then $\mu$ will be very high (close to 1).

If the vectors in $\Psi$ and $X$ are highly dissimilar, then $\mu$ will be very low (close to 0).

Example 18.12 (Proximity or mutual coherence of Dirac and Fourier bases)

Consider the mutual coherence between the Dirac and Fourier bases.

For some small values of $N$ (the dimension of the ambient space $\mathbb{C}^N$), the values are tabulated below.

| $N$ | $\mu$  |
|-----|--------|
| 2   | 0.7071 |
| 4   | 0.5000 |
| 6   | 0.4082 |
| 8   | 0.3536 |

For larger values of N we can see the variation of μ in the plot below.

../_images/coherence_dirac_fourier_bases.png

Fig. 18.4 Mutual coherence for Dirac and Fourier bases#
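The tabulated values can be reproduced with a brute-force coherence computation (a sketch assuming NumPy):

```python
import numpy as np

def coherence(Psi, X):
    """Mutual coherence: largest |<psi_i, chi_j>| over all column pairs."""
    return np.abs(Psi.conj().T @ X).max()

for N in (2, 4, 6, 8):
    k = np.arange(N)
    F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
    mu = coherence(np.eye(N), F)
    assert np.isclose(mu, 1 / np.sqrt(N))   # matches the table: 1/sqrt(N)
```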

We present some results related to mutual coherence of two orthonormal bases.

Theorem 18.2 (Product of orthonormal bases)

Given two orthonormal bases $\Psi$ and $X$ for $\mathbb{C}^N$, their product $\Psi^H X$ forms an orthonormal basis by itself.

Proof. Consider the matrix $\Psi^H X$:

$$\Psi^H X = \begin{bmatrix} \psi_1^H \\ \vdots \\ \psi_N^H \end{bmatrix} \begin{bmatrix} \chi_1 & \dots & \chi_N \end{bmatrix} = \begin{bmatrix} \psi_1^H \chi_1 & \psi_1^H \chi_2 & \dots & \psi_1^H \chi_N \\ \psi_2^H \chi_1 & \psi_2^H \chi_2 & \dots & \psi_2^H \chi_N \\ \vdots & \vdots & \ddots & \vdots \\ \psi_N^H \chi_1 & \psi_N^H \chi_2 & \dots & \psi_N^H \chi_N \end{bmatrix}.$$

The $i$-th column of the product matrix is

$$\begin{bmatrix} \psi_1^H \chi_i \\ \psi_2^H \chi_i \\ \vdots \\ \psi_N^H \chi_i \end{bmatrix} = \Psi^H \chi_i.$$

But $\Psi^H$ preserves norms, hence

$$\| \Psi^H \chi_i \|_2 = \| \chi_i \|_2 = 1.$$

Thus each column of the product $\Psi^H X$ is itself a unit norm vector.

Consider the inner product of two columns of $\Psi^H X$:

$$\langle \Psi^H \chi_i, \Psi^H \chi_j \rangle = \chi_j^H \Psi \Psi^H \chi_i = \chi_j^H \chi_i = \delta(i - j).$$

Thus, the columns of $\Psi^H X$ are orthogonal to each other.

Hence $\Psi^H X$ forms an orthonormal basis.

Remark 18.3 (The group of orthonormal bases)

A more general result is that the set of orthonormal bases forms a group under matrix multiplication: the identity element is the Dirac basis, the inverse of an orthonormal basis is also an orthonormal basis, the product of two orthonormal bases is also an orthonormal basis, and matrix multiplication satisfies the associative law.

Theorem 18.3 (Mutual coherence bounds)

The mutual coherence of two orthonormal bases $\Psi$ and $X$ for the complex vector space $\mathbb{C}^N$ is bounded by

(18.7)#$$\frac{1}{\sqrt{N}} \leq \mu(\Psi, X) \leq 1.$$

Proof. Since the columns of both $\Psi$ and $X$ are unit norm, by the Cauchy-Schwarz inequality $|\langle \psi_i, \chi_j \rangle|$ cannot be greater than 1. This proves the upper bound; taking $\Psi = X$ shows it is attained, since then $\mu(\Psi, X) = 1$.

Now consider the matrix $\Psi^H X$, which forms an orthonormal basis by itself. Consider any column of this matrix:

$$\begin{bmatrix} \psi_1^H \chi_i \\ \psi_2^H \chi_i \\ \vdots \\ \psi_N^H \chi_i \end{bmatrix} = \Psi^H \chi_i.$$

Since the column is unit norm, the sum of squares of the absolute values of its entries is 1. Thus the absolute values of its $N$ entries cannot all be simultaneously less than $\frac{1}{\sqrt{N}}$.

Hence there exists an entry (in each column) such that

$$| \psi_j^H \chi_i | \geq \frac{1}{\sqrt{N}}.$$

Hence we get the lower bound on the mutual coherence of $\Psi$ and $X$:

$$\mu(\Psi, X) \geq \frac{1}{\sqrt{N}}.$$

Theorem 18.4 (Mutual coherence of Dirac and Fourier bases)

The mutual coherence of the Dirac and Fourier bases is $\frac{1}{\sqrt{N}}$:

$$\mu(I_N, F_N) = \frac{1}{\sqrt{N}}.$$

Proof. Theorem 18.3 shows that

$$\mu(I_N, F_N) \geq \frac{1}{\sqrt{N}}.$$

We just need to show that it is in fact an equality.

The $i$-th column of $I$ is $e_i$.

The $j$-th column of $F$ is

$$f_j = \frac{1}{\sqrt{N}} \begin{bmatrix} \omega^0 \\ \omega^j \\ \omega^{2 j} \\ \vdots \\ \omega^{(N - 1) j} \end{bmatrix}$$

where $\omega$ is the $N$-th root of unity. Then

$$\langle e_i, f_j \rangle = \frac{1}{\sqrt{N}} \omega^{(i - 1) j} \implies | \langle e_i, f_j \rangle | = \frac{1}{\sqrt{N}}.$$

This doesn't depend on the choice of the $i$-th column of $I_N$ and the $j$-th column of $F_N$. Hence

$$\mu(I_N, F_N) = \frac{1}{\sqrt{N}}.$$

With basic properties of mutual coherence in place, we are now ready to state an uncertainty principle on the sparsity levels of representations of same signal x in two different orthonormal bases:

Theorem 18.5 (Uncertainty principle)

For any arbitrary pair of orthonormal bases $\Psi$, $X$ with mutual coherence $\mu(\Psi, X)$, and for any arbitrary non-zero vector $x \in \mathbb{C}^N$ with representations $a$ and $b$ respectively, the following inequality holds true:

(18.8)#$$\|a\|_0 + \|b\|_0 \geq \frac{2}{\mu(\Psi, X)}.$$

Moreover, for unit-length $x$ we have:

(18.9)#$$\|a\|_1 + \|b\|_1 \geq \frac{2}{\sqrt{\mu(\Psi, X)}}.$$

Proof. Dividing $x$ by $\|x\|_2$ doesn't change the $\ell_0$ "norm" of $a$ and $b$; i.e.,

$$\left\| \frac{a}{\|x\|_2} \right\|_0 = \|a\|_0.$$

Hence, without loss of generality, we will assume that $\|x\|_2 = 1$.

We are given that $x^H x = 1$, $x = \Psi a$ and $x = X b$. Since $\Psi$ and $X$ are orthonormal bases, $\|a\|_2 = \|b\|_2 = 1$ as well.

We can write

$$\begin{aligned}
1 = x^H x = (\Psi a)^H (X b) = a^H \Psi^H X b
&= \sum_{i=1}^N \sum_{j=1}^N \overline{a_i} b_j \psi_i^H \chi_j
= \left| \sum_{i=1}^N \sum_{j=1}^N \overline{a_i} b_j \psi_i^H \chi_j \right| \\
&\leq \sum_{i=1}^N \sum_{j=1}^N |a_i| |b_j| |\psi_i^H \chi_j|
\leq \mu(\Psi, X) \sum_{i=1}^N \sum_{j=1}^N |a_i| |b_j| = \mu(\Psi, X) \|a\|_1 \|b\|_1
\end{aligned}$$

where we note that

$$\|a\|_1 \|b\|_1 = \left( \sum_{i=1}^N |a_i| \right) \left( \sum_{j=1}^N |b_j| \right) = \sum_{i=1}^N \sum_{j=1}^N |a_i| |b_j|$$

and

$$\mu(\Psi, X) \geq |\psi_i^H \chi_j|.$$

Hence we get the inequality

$$\mu(\Psi, X) \|a\|_1 \|b\|_1 \geq 1.$$

Using the inequality between the arithmetic mean and the geometric mean,

$$\sqrt{a b} \leq \frac{a + b}{2} \quad \forall \; a, b \geq 0,$$

we get

$$\|a\|_1 + \|b\|_1 \geq 2 \sqrt{\|a\|_1 \|b\|_1} \geq \frac{2}{\sqrt{\mu(\Psi, X)}}.$$

This is an uncertainty principle for the $\ell_1$ norms of the two representations. We still have to derive the uncertainty principle for the $\ell_0$ case. Let

$$\|a\|_0 = A \text{ and } \|b\|_0 = B.$$

Consider the sets

$$X_A = \{ y \mid y = \Psi u \text{ and } \|u\|_0 = A, \|u\|_2 = 1 \}$$

and

$$X_B = \{ y \mid y = X v \text{ and } \|v\|_0 = B, \|v\|_2 = 1 \}.$$

Clearly

$$x \in X_A \cap X_B.$$

The representations $u$ of vectors $y$ in $X_A$ have exactly $A$ non-zero entries and are all unit norm. Which of them has the largest $\ell_1$ norm $\|u\|_1$?

This can be written as an optimization problem of the form

(18.10)#$$\underset{y \in X_A}{\text{maximize}} \; \|u\|_1 \text{ where } u = \Psi^H y.$$

Let the optimal solution of this problem be $v_a$ with the corresponding representation $\tilde{a} = \Psi^H v_a$. Clearly

$$\|\tilde{a}\|_1 \geq \|a\|_1.$$

Similarly, from the set $X_B$ let us find the vector $v_b$ whose representation in $X$ has the maximum $\ell_1$ norm:

(18.11)#$$\underset{y \in X_B}{\text{maximize}} \; \|v\|_1 \text{ where } v = X^H y.$$

Let the optimal solution of this problem be $v_b$ with the corresponding representation $\tilde{b} = X^H v_b$. Clearly

$$\|\tilde{b}\|_1 \geq \|b\|_1.$$

Returning to the inequality

$$\|a\|_1 \|b\|_1 \geq \frac{1}{\mu(\Psi, X)},$$

we can write

$$\|\tilde{a}\|_1 \|\tilde{b}\|_1 \geq \frac{1}{\mu(\Psi, X)}.$$

An equivalent formulation of the optimization problem (18.10) is

(18.12)#$$\begin{aligned} & \underset{a}{\text{maximize}} \; \|a\|_1 \\ & \text{subject to } \|a\|_2^2 = a^H a = 1 \\ & \text{and } \|a\|_0 = A. \end{aligned}$$

This formulation doesn't require any specific mention of the basis $\Psi$. Let the optimal value of this problem be

$$\|\tilde{a}\|_1 = g(A) = g(\|a\|_0).$$

Here we consider the optimization problem to be parameterized by the $\ell_0$ "norm" of $a$; i.e., $\|a\|_0 = A$, and we write the optimal value as a function $g$ of the parameter $A$.

Then, by symmetry, the optimal value of the problem (18.11) is

$$\|\tilde{b}\|_1 = g(B) = g(\|b\|_0).$$

Thus we can write

(18.13)#$$g(\|a\|_0) \, g(\|b\|_0) \geq \frac{1}{\mu(\Psi, X)}.$$

This is our intended result since we have been able to write the inequality as a function of the $\ell_0$ "norms" of the representations $a$ and $b$.

In order to complete the result, we need to find the solution of the optimization problem (18.12), given by the function $g$.

Without loss of generality, let us assume that the $A$ non-zero entries of the optimization variable $a$ appear in its first $A$ entries and the rest are zero. This is fine since changing the order of the entries of $a$ doesn't affect any of the norms of concern: $\|a\|_0$, $\|a\|_1$ and $\|a\|_2$.

Let us further assume that all non-zero entries of $a$ are strictly positive real numbers. This assumption is valid since only absolute values are used in this problem. Specifically, for any $a$ with non-zero complex entries $(a_1, a_2, \dots, a_A, 0, \dots, 0)$ there exists $a'$ with positive entries $(|a_1|, |a_2|, \dots, |a_A|, 0, \dots, 0)$ such that $\|a'\|_0 = \|a\|_0$, $\|a'\|_1 = \|a\|_1$ and $\|a'\|_2 = \|a\|_2$. Hence solving the optimization problem for complex $a$ is the same as solving it for $a$ with strictly positive first $A$ entries.

Using Lagrange multipliers, the $\ell_0$ constraint vanishes (since the assumptions above allow us to focus on only the first $A$ coordinates of $a$), and we obtain

$$L(a) = \sum_{i=1}^A a_i + \lambda \left( 1 - \sum_{i=1}^A a_i^2 \right).$$

Differentiating w.r.t. $a_i$ and equating to 0, we get

$$1 - 2 \lambda a_i = 0 \implies a_i = \frac{1}{2 \lambda}.$$

The $\ell_2$ constraint requires

$$\sum_{i=1}^A a_i^2 = \frac{A}{4 \lambda^2} = 1 \implies \lambda = \frac{\sqrt{A}}{2}.$$

Thus

$$a_i = \frac{1}{\sqrt{A}}$$

and

$$\|a\|_1 = \sum_{i=1}^A |a_i| = \sqrt{A}.$$

Thus the optimal value of the optimization problem (18.12) is

$$g(A) = \sqrt{A} = \sqrt{\|a\|_0}.$$

Similarly

$$g(B) = \sqrt{B} = \sqrt{\|b\|_0}.$$

Substituting into (18.13), we get

$$\sqrt{\|a\|_0 \, \|b\|_0} \geq \frac{1}{\mu(\Psi, X)}.$$

Applying the arithmetic mean-geometric mean inequality, we get the desired result:

$$\|a\|_0 + \|b\|_0 \geq \frac{2}{\mu(\Psi, X)}.$$

This theorem suggests that if two orthonormal bases have low mutual coherence, then

  • the two representations of $x$ cannot be jointly $\ell_1$-short, and

  • the two representations of $x$ cannot be jointly sparse.

Challenge

Can we show that the above result is sharp? I.e., for a pair of orthonormal bases $\Psi$ and $X$, is it always possible to find a non-zero vector $x$ with corresponding representations $x = \Psi a$ and $x = X b$ which satisfies the lower bound with equality:

$$\|a\|_0 + \|b\|_0 = \frac{2}{\mu(\Psi, X)}?$$

Example 18.13 (Sparse representations with Dirac and Fourier bases)

We showed in Theorem 18.4 that

$$\mu(I, F) = \frac{1}{\sqrt{N}}.$$

Let $x \in \mathbb{C}^N$, and let its representation in $F$ be given by

$$x = F a.$$

Applying Theorem 18.5, we have

$$\|x\|_0 + \|a\|_0 \geq \frac{2}{\mu(I, F)} = 2 \sqrt{N}.$$

This tells us that a signal cannot have fewer than $2 \sqrt{N}$ non-zeros in the time and frequency domains combined.
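When $N$ is a perfect square, this bound is attained by the Dirac comb: $\sqrt{N}$ equally spaced impulses transform into $\sqrt{N}$ equally spaced frequency impulses. A sketch (assuming NumPy; $N = 16$ is an arbitrary perfect square):

```python
import numpy as np

N = 16                                    # a perfect square
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

# Dirac comb: sqrt(N) impulses spaced sqrt(N) apart.
x = np.zeros(N)
x[:: int(np.sqrt(N))] = 1.0

a = F.conj().T @ x                        # frequency domain representation
nnz = lambda v: np.count_nonzero(np.abs(v) > 1e-9)
assert nnz(x) == 4 and nnz(a) == 4        # a comb in both domains
assert nnz(x) + nnz(a) == 2 * np.sqrt(N)  # uncertainty bound met with equality
```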

18.4.4. Linear Combinations of Impulses and Sinusoids#

What happens if a signal $x$ is a linear combination of a few complex sinusoids and a few impulses?

The set of sinusoids and impulses involved in the construction of $x$ specifies the degrees of freedom of $x$. It indicates the inherent sparsity of $x$, provided this set of component signals is known a priori.

In the absence of prior knowledge of the component signals of $x$, we attempt to look for a sparse representation of $x$ in one of the well understood orthonormal bases. Here we are specifically looking at the two bases: Dirac and Fourier.

While the Dirac basis provides sparse representations for impulses, sinusoids have dense representations in it. Conversely, in the Fourier basis, complex sinusoids have sparse representations, yet impulses have dense representations. Thus, neither of the two bases is capable of providing a sparse representation for a combination of impulses and sinusoids.

The natural question arises: is there a way to come up with a sparse representation for such signals by combining the Dirac and Fourier bases?

18.4.5. Dirac Fourier Basis#

Now we develop a representation of signals $x \in \mathbb{C}^N$ in terms of a combination of the Dirac basis $I$ and the Fourier basis $F$.

We define a new synthesis matrix

(18.14)#$$H = \begin{bmatrix} I & F \end{bmatrix} \in \mathbb{C}^{N \times 2 N}.$$

We can write $I$ as

$$I = \begin{bmatrix} e_1 & \dots & e_N \end{bmatrix}$$

and $F$ as

$$F = \begin{bmatrix} f_1 & \dots & f_N \end{bmatrix}.$$

This enables us to write $H$ as

$$H = \begin{bmatrix} e_1 & \dots & e_N & f_1 & \dots & f_N \end{bmatrix}.$$

We will look for a representation of $x$ using the synthesis matrix $H$ as

$$x = H a$$

where $a \in \mathbb{C}^{2 N}$.

Since this representation is under-determined and the column space $\mathcal{C}(H) = \mathbb{C}^N$, there are always infinitely many possible representations of $x$ in $H$.

We would prefer to choose the sparsest representation, which can be stated as an optimization problem:

(18.15)#$$\underset{a}{\text{minimize}} \; \|a\|_0 \quad \text{subject to } x = H a.$$

Example 18.14 (Sparse representation using Dirac-Fourier basis)

Let $N = 4$. Then the Dirac-Fourier basis is

$$H = \begin{bmatrix}
1 & 0 & 0 & 0 & .5 & .5 & .5 & .5 \\
0 & 1 & 0 & 0 & .5 & .5 i & -.5 & -.5 i \\
0 & 0 & 1 & 0 & .5 & -.5 & .5 & -.5 \\
0 & 0 & 0 & 1 & .5 & -.5 i & -.5 & .5 i
\end{bmatrix}.$$

Let

$$x = 3 e_1 - 2 f_2 = \begin{pmatrix} 2 \\ -i \\ 1 \\ i \end{pmatrix}.$$

A sparse representation of $x$ in $H$ is

$$a = \begin{pmatrix} 3 & 0 & 0 & 0 & 0 & -2 & 0 & 0 \end{pmatrix}^T.$$

This representation is 2-sparse.
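A numerical check of this example (a sketch assuming NumPy; $H = [I_4 \; F_4]$ is assembled from the definitions above):

```python
import numpy as np

N = 4
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
H = np.hstack([np.eye(N), F])          # Dirac-Fourier basis, 4 x 8

a = np.zeros(2 * N, dtype=complex)
a[0], a[5] = 3, -2                     # a = 3 e_1 - 2 f_2 in H coordinates

x = H @ a                              # synthesize x = H a
assert np.allclose(x, [2, -1j, 1, 1j]) # x = 3 e_1 - 2 f_2
assert np.count_nonzero(a) == 2        # the representation is 2-sparse
```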

Thus we see that the Dirac-Fourier basis is able to provide a sparser representation of a linear combination of impulses and sinusoids than either of the individual orthonormal bases (the Dirac basis and the Fourier basis) can.

This motivates us to consider combinations of bases which can provide sparse representations for a larger class of signals. This is the objective of the next section.

In due course, we will revisit the Dirac-Fourier basis in several examples.

18.4.6. Two-Ortho Basis#

Before we leave this section, let us define general two-ortho bases.

Definition 18.2 (Two-ortho basis)

Let $\Psi$ and $X$ be any two $\mathbb{C}^{N \times N}$ matrices whose column vectors individually form orthonormal bases for the complex vector space $\mathbb{C}^N$.

We define

(18.16)#$$H = \begin{bmatrix} \Psi & X \end{bmatrix} \in \mathbb{C}^{N \times 2 N}.$$

The columns of $H$ form a two-ortho basis for $\mathbb{C}^N$.

Clearly the columns of $H$ span $\mathbb{C}^N$.

Remark 18.4 (Coherence of a two-ortho basis)

The coherence of a two-ortho basis is defined as

$$\mu(H) = \mu(\Psi, X).$$

It is the magnitude of the largest inner product between distinct columns of $H$.

We present a very interesting result about the null space of H.

Theorem 18.6 (Denseness of vectors in the null space of two-ortho bases)

For a two-ortho basis $H = \begin{bmatrix} \Psi & X \end{bmatrix}$ with low coherence, the non-zero vectors in the null space of $H$ are not sparse.

Concretely,

$$\|v\|_0 \geq \frac{2}{\mu(H)} \quad \forall \; v \in \mathcal{N}(H), \; v \neq 0.$$

Proof. Let $v \in \mathcal{N}(H)$ be some non-zero vector.

Let $v_{\psi}$ and $v_{\chi}$ be the first $N$ and last $N$ entries of $v$ respectively. Then

$$\begin{bmatrix} \Psi & X \end{bmatrix} \begin{bmatrix} v_{\psi} \\ v_{\chi} \end{bmatrix} = 0 \implies \Psi v_{\psi} = -X v_{\chi} = y.$$

If $y$ were $0$, then both $v_{\psi}$ and $v_{\chi}$ would have to be $0$ (since $\Psi$ and $X$ are invertible), which is not the case since by assumption $v \neq 0$. Hence $y \neq 0$.

We note that $v_{\psi}$ and $-v_{\chi}$ are representations of the same vector $y$ in the two different orthonormal bases $\Psi$ and $X$ respectively, and $\|-v_{\chi}\|_0 = \|v_{\chi}\|_0$.

Applying Theorem 18.5, we have

$$\|v\|_0 = \|v_{\psi}\|_0 + \|v_{\chi}\|_0 \geq \frac{2}{\mu(H)}.$$

We also note that since orthonormal bases preserve norms,

$$\|y\|_2 = \|v_{\psi}\|_2 = \|v_{\chi}\|_2.$$

This shows us that the energy of a null space vector is evenly distributed between the two components corresponding to the two orthonormal bases.
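The proof also suggests a recipe for constructing null space vectors: pick any $y \neq 0$ and stack $v_{\psi} = \Psi^H y$ with $v_{\chi} = -X^H y$. A sketch for the Dirac-Fourier pair (assuming NumPy; $y = e_1$ is an arbitrary choice):

```python
import numpy as np

N = 16
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
H = np.hstack([np.eye(N), F])             # two-ortho basis [Psi X]

y = np.zeros(N)
y[0] = 1.0                                # any nonzero y works
v = np.concatenate([y, -F.conj().T @ y])  # v_psi = I^H y, v_chi = -F^H y
assert np.allclose(H @ v, 0)              # v lies in the null space of H

mu = 1 / np.sqrt(N)                       # coherence of the Dirac-Fourier pair
nnz = np.count_nonzero(np.abs(v) > 1e-9)
assert nnz >= 2 / mu                      # ||v||_0 >= 2 / mu(H)
```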

Challenge

For a two-ortho basis $H = \begin{bmatrix} \Psi & X \end{bmatrix}$, is it possible to find a null space vector $v$ satisfying the lower bound with equality:

$$\|v\|_0 = \frac{2}{\mu(H)}?$$

Let $x \in \mathbb{C}^N$ be any arbitrary signal. Then its representation in $H$ is given by

$$x = H a.$$

Obviously, for every $z \in \mathcal{N}(H)$, the vector $a + z$ is also a representation of $x$.

What we are particularly interested in are sparse representations of $x$. A key concern for us is to ensure that a sparse representation of $x$ in $H$ is unique. Under what conditions is this possible?

Formally, let $a$ and $b$ be two different representations of $x$ in $H$. Can we say that if $a$ is sparse then $b$ won't be sparse?

This is established in the next uncertainty principle.

Theorem 18.7 (Uncertainty principle for representations in a two-ortho basis)

Let $x \in \mathbb{C}^N$ be any signal and let $H$ be a two-ortho basis as defined in (18.16). Let $a$ and $b$ be two distinct representations of $x$ in $H$, i.e.

$$x = H a = H b.$$

Then the following holds:

$$\|a\|_0 + \|b\|_0 \geq \frac{2}{\mu(H)}.$$

This is an uncertainty principle for the sparsity of distinct representations in a two-ortho basis.

Proof. Let

$$e = a - b$$

be the difference vector of the two representations of $x$. Clearly

$$H e = H a - H b = x - x = 0.$$

Thus $e \in \mathcal{N}(H)$, and $e \neq 0$ since $a$ and $b$ are distinct. Applying Theorem 18.6, we get

$$\|e\|_0 \geq \frac{2}{\mu(H)}.$$

But since $e = a - b$, we have

$$\|a\|_0 + \|b\|_0 \geq \|e\|_0 \geq \frac{2}{\mu(H)}.$$

Challenge

For a two-ortho basis $H = \begin{bmatrix} \Psi & X \end{bmatrix}$, is it possible to find a vector $x$ with two alternative representations $a$ and $b$ satisfying the lower bound with equality:

$$\|a\|_0 + \|b\|_0 = \frac{2}{\mu(H)}?$$

This theorem tells us that if $\mu(H)$ is small (i.e. the coherence between the two orthonormal bases is small), then two representations of $x$ cannot be simultaneously sparse.

Rather, if a representation is sufficiently sparse, then all other representations of $x$ are guaranteed to be non-sparse, establishing the uniqueness of the sparse representation.

This is stated formally in the following uniqueness theorem.

Theorem 18.8 (Uniqueness of sparse representation in a two-ortho basis)

If a representation of $x$ in the two-ortho basis $H = \begin{bmatrix} \Psi & X \end{bmatrix}$ has fewer than $\frac{1}{\mu(H)}$ non-zero entries, then it is necessarily the sparsest one possible, and any other representation must be denser.

Proof. Let $a$ be a candidate representation with

$$\|a\|_0 < \frac{1}{\mu(H)}.$$

Let $b$ be any other candidate representation. Applying Theorem 18.7, we have

$$\|a\|_0 + \|b\|_0 \geq \frac{2}{\mu(H)}.$$

This gives us

$$\|b\|_0 \geq \frac{2}{\mu(H)} - \|a\|_0 \implies \|b\|_0 > \frac{1}{\mu(H)}.$$

Thus we find that

$$\|b\|_0 > \|a\|_0,$$

which is true for every representation $b$ of $x$ in $H$ other than $a$.

Hence $a$ is the sparsest possible representation of $x$ in $H$.

We note here that an arbitrary choice of two bases may not yield a two-ortho basis which can provide sparse representations for our signals of interest. In the next few sections, we will explore this issue further in the more general context of signal dictionaries.

Challenge

Clearly, there are signals for which a sufficiently sparse (and unique) representation doesn't exist in a given two-ortho basis. What kind of relationships may exist between the different (insufficiently sparse) representations of such signals?

18.4.7. Dirac-DCT Basis#

The Dirac-Fourier two-ortho basis is optimal in the sense that it has the smallest possible mutual coherence between its two constituent bases. However, one problem with the Dirac-Fourier basis is that it requires us to work with complex numbers. A close second is the Dirac-DCT basis, which combines the Dirac basis with the DCT basis, both of which are real.

Definition 18.3 (Dirac-DCT basis)

The Dirac-DCT basis is a two-ortho basis consisting of the union of the Dirac and the DCT bases.

This two-ortho basis is suitable for real signals since both the Dirac and DCT bases are entirely real, i.e. in $\mathbb{R}^{N \times N}$.

The two-ortho basis is obtained by combining the $N \times N$ identity matrix (Dirac basis) with the $N \times N$ DCT matrix for signals in $\mathbb{R}^N$.

Let $\Psi_{\text{DCT}, N}$ denote the DCT matrix for $\mathbb{R}^N$ and let $I_N$ denote the identity matrix for $\mathbb{R}^N$. Then

$$D_{\text{DCT}} = \begin{bmatrix} I_N & \Psi_{\text{DCT}, N} \end{bmatrix}.$$

Let

$$\Psi_{\text{DCT}, N} = \begin{bmatrix} \psi_1 & \psi_2 & \dots & \psi_N \end{bmatrix}.$$

The $k$-th column of $\Psi_{\text{DCT}, N}$ is given by

(18.17)#$$\psi_k(n) = \sqrt{\frac{2}{N}} \, \Omega_k \cos \left( \frac{\pi}{2 N} (2 n - 1)(k - 1) \right), \quad n = 1, \dots, N,$$

with $\Omega_k = \frac{1}{\sqrt{2}}$ for $k = 1$ and $\Omega_k = 1$ for $2 \leq k \leq N$.

Note that for $k = 1$, the entries become

$$\sqrt{\frac{2}{N}} \cdot \frac{1}{\sqrt{2}} \cdot \cos 0 = \frac{1}{\sqrt{N}}.$$

Thus, the $\ell_2$ norm of $\psi_1$ is 1. We can similarly verify that the $\ell_2$ norms of the other columns are all 1.
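The matrix of (18.17) is straightforward to construct and check (a sketch assuming NumPy; the code uses the same 1-based indices as the text):

```python
import numpy as np

def dct_basis(N):
    """DCT synthesis matrix from (18.17); psi_k(n) with 1-based n and k."""
    n = np.arange(1, N + 1).reshape(-1, 1)   # row (sample) index
    k = np.arange(1, N + 1).reshape(1, -1)   # column index
    Omega = np.where(k == 1, 1 / np.sqrt(2), 1.0)
    return np.sqrt(2 / N) * Omega * np.cos(np.pi * (2 * n - 1) * (k - 1) / (2 * N))

Psi = dct_basis(8)
assert np.allclose(Psi.T @ Psi, np.eye(8))    # columns are orthonormal
assert np.allclose(Psi[:, 0], 1 / np.sqrt(8)) # first column is constant 1/sqrt(N)
```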

18.4.7.1. Example#

We show how a mixture signal consisting of impulses and sinusoids has a sparse representation in the Dirac-DCT basis.

../_images/dct_256.png

Fig. 18.5 A DCT basis for N=256#

../_images/dirac_dct_256.png

Fig. 18.6 A Dirac DCT basis for N=256 and D=512.#

../_images/impulse_cosine_combination_signal.png

Fig. 18.7 An N=256 length signal consisting of a mixture of 3 impulses and 2 cosine signals.#

../_images/impulse_cosine_dct_basis.png

Fig. 18.8 Representation of the same mixture signal in DCT basis. Notice the two large magnitude components. They correspond to the two sinusoidal components in the original time domain signal. The three impulses in the time domain signal have been distributed among all the frequencies.#

../_images/impulse_cosine_dirac_dct.png

Fig. 18.9 A sparse representation of the same mixture signal in the two ortho Dirac-DCT basis. The three impulses from the original signal appear on the left half (corresponding to the Dirac basis part). The two cosine sinusoids appear on the right half (corresponding to the DCT basis part). Indeed the representation is not unique. But it is the unique sparsest possible representation. All other representations have far more nonzero components.#

18.4.7.2. Coherence#

Theorem 18.9

The Dirac-DCT basis has mutual coherence $\sqrt{\frac{2}{N}}$.

Proof. The mutual coherence of a two-ortho basis in which one of the bases is the Dirac basis is given by the magnitude of the largest entry in the other basis.

  1. For $\Psi_{\text{DCT}, N}$, the largest magnitude $\sqrt{\frac{2}{N}}$ is obtained when $\Omega_k = 1$ and the cosine term has magnitude 1.

  2. Hence,

$$\mu(D_{\text{DCT}}) = \sqrt{\frac{2}{N}}.$$
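A numerical check (a sketch assuming NumPy; note that for some values of $N$ the cosine term cannot reach magnitude 1 exactly, so the computed coherence may fall marginally below $\sqrt{2/N}$):

```python
import numpy as np

def dct_basis(N):
    n = np.arange(1, N + 1).reshape(-1, 1)
    k = np.arange(1, N + 1).reshape(1, -1)
    Omega = np.where(k == 1, 1 / np.sqrt(2), 1.0)
    return np.sqrt(2 / N) * Omega * np.cos(np.pi * (2 * n - 1) * (k - 1) / (2 * N))

N = 256
Psi = dct_basis(N)
# With the Dirac basis on the left, the coherence is the largest entry of Psi.
mu = np.abs(np.eye(N).T @ Psi).max()
assert mu <= np.sqrt(2 / N) + 1e-12   # sqrt(2/N) is an upper bound ...
assert mu > 0.99 * np.sqrt(2 / N)     # ... and is essentially attained
```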

18.4.7.3. Construction of Sparse Representation#

We use sparse recovery/reconstruction algorithms to construct a sparse representation of a given signal in a two-ortho basis. These algorithms will be discussed in later chapters. We give the time domain signal (e.g. the mixture signal above) and the two-ortho basis to a sparse reconstruction algorithm, which attempts to construct the sparsest possible representation of the signal in the given basis. We reconstructed this mixture signal in the Dirac-DCT basis using three different algorithms:

  • Basis Pursuit

  • Matching Pursuit

  • Orthogonal Matching Pursuit
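To give a flavor of these algorithms, here is a bare-bones orthogonal matching pursuit sketch (my own minimal implementation, not the book's code; the atom locations and coefficients are hypothetical) recovering a 5-sparse representation of an impulse-plus-cosine mixture in a small Dirac-DCT basis:

```python
import numpy as np

def dct_basis(N):
    n = np.arange(1, N + 1).reshape(-1, 1)
    k = np.arange(1, N + 1).reshape(1, -1)
    Omega = np.where(k == 1, 1 / np.sqrt(2), 1.0)
    return np.sqrt(2 / N) * Omega * np.cos(np.pi * (2 * n - 1) * (k - 1) / (2 * N))

def omp(H, x, K):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit all picked columns by least squares."""
    support, residual = [], x.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(H.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(H[:, support], x, rcond=None)
        residual = x - H[:, support] @ coef
    a = np.zeros(H.shape[1])
    a[support] = coef
    return a

N = 64
Psi = dct_basis(N)
H = np.hstack([np.eye(N), Psi])           # Dirac-DCT two-ortho basis

# Mixture: 3 impulses + 2 DCT atoms (hypothetical example locations).
a_true = np.zeros(2 * N)
a_true[[3, 17, 40]] = [2.0, -1.5, 1.0]    # impulses (Dirac half)
a_true[[N + 5, N + 20]] = [1.0, 0.8]      # cosines (DCT half)
x = H @ a_true

a_hat = omp(H, x, K=5)
assert np.allclose(a_hat, a_true, atol=1e-6)   # exact support recovery here
```

This sketch recovers the representation exactly for this well-separated mixture; the worst-case guarantees for when such greedy recovery provably succeeds are the subject of later chapters.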

../_images/dirac_dct_l_1_solution.png

Fig. 18.10 Construction of the sparse representation of the mixture signal using the basis pursuit algorithm.#

../_images/dirac_dct_mp_solution.png

Fig. 18.11 Construction of the sparse representation of the mixture signal using the matching pursuit algorithm.#

../_images/dirac_dct_omp_solution.png

Fig. 18.12 Construction of the sparse representation of the mixture signal using the orthogonal matching pursuit algorithm.#