Friday, May 27, 2016

Building DWT Lifting Networks: Conceptual Note 5 on Ripples in Mathematics




Problem
  
This is my conceptual note 5 on Ch. 3, pp. 13-20, in "Ripples in Mathematics" by A. Jensen & A. la Cour-Harbo. These notes document my programmatic journey, sometimes easy, sometimes painful, but always joyful, through this great text. My notes are for those who want to use Java as a tool for investigating various wavelet algorithms and Octave for plotting the results. You can find my previous notes by searching my blog for "ripples in mathematics." If you are on the same journey, let me know of any bugs in my code or improvements you have found that let us all, fellow travelers on the wavelet road, get better and more beautiful results.

Sections 3.2 - 3.4 in Ch. 3 explain the principles of lifting. Lifting, conceptually speaking, is used to split signals into coarser signals and wavelets in forward DWTs and to synthesize wavelets and coarser signals back into more refined signals in inverse DWTs. Usually, lifting is implemented through array manipulation. But there is no reason why we cannot implement it in hardware. The disadvantage, of course, is that in hardware we are limited to a specific number of forward and inverse sweeps. In this post, I will describe how we can conceptualize forward and inverse DWT via lifting networks.



Forward DWT via Lifting
  
The definition of the basic forward DWT via lifting is given in Fig. 1. The green database symbols denote storage devices; they can be as simple as arrays. The input signal at scale j comes from the storage device on the left. The signal is split into an array of evenly indexed samples, i.e., the evens, and an array of oddly indexed samples, i.e., the odds. The evens are fed into the predict box. The odds and the predicted evens are given to the minus node, where the predicted evens are subtracted from the odds. The output of the minus node is stored in the bottom storage device on the right and is also fed into the update box; the updated differences then go to the plus node, where they are added to the unchanged evens. The output of the plus node is stored in the top storage device on the right. The top storage device on the right saves the coarser signal at the next scale. The bottom storage device on the right saves the wavelet coefficients, or the differences. Mathematically, this network implements the two forward DWT equations given at the bottom of Fig. 1. In equation 1, the difference vector, i.e., the wavelets, is computed by subtracting the predicted evens from the odds. In equation 2, the evens and the updated differences are added to obtain the coarser signal.

Figure 1. Basic forward DWT network
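For readers who cannot see the figure, the two forward equations have the standard lifting form. The notation below is mine, with $\mathrm{even}_j$ and $\mathrm{odd}_j$ the two subsignals of the signal at scale $j$, $P$ the predictor, and $U$ the updater; the book writes the same equations with explicit sample indices:

$$d_{j-1} = \mathrm{odd}_j - P(\mathrm{even}_j) \tag{1}$$

$$s_{j-1} = \mathrm{even}_j + U(d_{j-1}) \tag{2}$$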
 
The predict and update boxes can implement various functions. For example, the predictor may implement the identity function and the updater may multiply each sample of the input vector by 0.5. Fig. 2 gives an example of how the basic forward DWT network works on the signal (11, 3).

Figure 2. Basic forward DWT network processing (11, 3)
In Fig. 2, the input signal (11, 3) is split into the evens, i.e., (11), and the odds, i.e., (3), in the SPLIT box. The evens vector is fed into the PREDICT box. Since the PREDICT box implements the identity function, the evens vector leaves the PREDICT box unchanged and is fed to the MINUS node below, where the predicted evens are subtracted from the odds. The output of the MINUS node, i.e., (-8), is saved in the bottom storage. This wavelet coefficient, i.e., -8, measures the change from the first half of the signal, i.e., 11, to the second half of the signal, i.e., 3. The wavelet vector (-8) also goes into the UPDATE box and comes out as (-4). The updated wavelets and the evens go into the PLUS node, where they are summed; the result is stored in the top storage device as the coarser signal (7). Note that the coarser sample 7 is the mean of the first sample at the previous scale, i.e., 11, and the second sample at the previous scale, i.e., 3.
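To make the walk-through concrete in code, here is a minimal Java sketch of one basic forward lifting step with the identity predictor and the x0.5 updater from Fig. 2. The class and method names are mine, not from the book's software:

```java
// Minimal sketch of one basic forward DWT lifting step with an
// identity predictor and a x0.5 updater, as in Fig. 2.
// Returns {coarser signal, wavelet coefficients}.
public final class BasicLift {

    public static double[][] forward(double[] signal) {
        int half = signal.length / 2;
        double[] evens = new double[half];
        double[] odds  = new double[half];
        for (int i = 0; i < half; i++) {      // SPLIT box
            evens[i] = signal[2 * i];
            odds[i]  = signal[2 * i + 1];
        }
        double[] d = new double[half];        // wavelets (differences)
        double[] s = new double[half];        // coarser signal
        for (int i = 0; i < half; i++) {
            d[i] = odds[i] - evens[i];        // MINUS node: odd - P(even), P = identity
            s[i] = evens[i] + 0.5 * d[i];     // PLUS node:  even + U(d),  U = x0.5
        }
        return new double[][] { s, d };
    }

    public static void main(String[] args) {
        double[][] out = forward(new double[] { 11, 3 });
        System.out.println(java.util.Arrays.toString(out[0])); // [7.0]  coarser signal
        System.out.println(java.util.Arrays.toString(out[1])); // [-8.0] wavelet coefficient
    }
}
```

Running it prints [7.0] and [-8.0], the coarser signal and the wavelet coefficient computed in the walk-through above.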

Basic forward DWT networks can be layered. In software simulations, we can layer them as many times as we want. If we implement DWT in hardware, there will be physical constraints. Fig. 3 shows how two basic forward DWT networks can be layered to implement two scales of the forward DWT. The input to the second forward DWT network is the coarser signal obtained from the first basic forward DWT network. Note that the wavelet coefficients from the first network are saved; they are not used in the second network. The outputs of the second network are the even coarser signal, saved in the top storage device on the right, and the even coarser wavelets, saved in the bottom storage device on the right.

Figure 3. Two layered forward basic DWT networks
Fig. 4 gives a concrete example of the layered network in Fig. 3 on the signal (5, 1, 2, 8). The network produces one coarser signal (4) and two wavelet vectors, (2) and (-4, 6). Note that the coarser signal (4) is the mean of the original signal (5, 1, 2, 8).

Figure 4. Layered forward DWT network processing (5, 1, 2, 8)
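Layering is then just function composition: the coarser output of one forward step is fed into the next. Here is a sketch of the Fig. 4 computation, reusing the hypothetical BasicLift.forward from the previous sketch:

```java
// Layering two forward lifting steps on (5, 1, 2, 8), as in Fig. 4.
// Uses the BasicLift.forward sketch from the previous section.
public class LayeredDemo {
    public static void main(String[] args) {
        double[][] level1 = BasicLift.forward(new double[] { 5, 1, 2, 8 });
        // level1[0] = [3.0, 5.0]   coarser signal, fed into the second network
        // level1[1] = [-4.0, 6.0]  first-scale wavelets: saved, not reused
        double[][] level2 = BasicLift.forward(level1[0]);
        System.out.println(java.util.Arrays.toString(level2[0])); // [4.0] even coarser signal
        System.out.println(java.util.Arrays.toString(level2[1])); // [2.0] even coarser wavelets
    }
}
```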



Inverse DWT via Lifting

The basic inverse DWT takes the coarser signal and the corresponding wavelets and synthesizes from them the more refined signal at the previous scale. In Fig. 5, the DWT network to the left of the dashed line is the basic forward DWT network, whereas the DWT network to the right of the dashed line is the basic inverse DWT network. In the inverse network, the operations of the forward DWT are reversed. In other words, the update is applied first and the prediction is applied second. Then the evens and the odds are merged to restore the original signal. The two equations for the inverse DWT are given in the bottom right corner of Fig. 5. They are obtained algebraically from the two forward equations.

Figure 5. Basic forward DWT and its inverse
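Solving the two forward equations for the evens and the odds gives the inverse step: first even = s - U(d), then odd = d + P(even), then merge. Here is a sketch of the inverse step matching the forward sketch above; it can be added as a method to the BasicLift class (again, the names are mine):

```java
// Sketch of the basic inverse DWT lifting step matching BasicLift.forward:
// undo the update first, then the predict, then MERGE the evens and odds.
public static double[] inverse(double[] s, double[] d) {
    int half = s.length;
    double[] signal = new double[2 * half];
    for (int i = 0; i < half; i++) {
        double even = s[i] - 0.5 * d[i];   // invert the PLUS node:  even = s - U(d)
        double odd  = d[i] + even;         // invert the MINUS node: odd = d + P(even)
        signal[2 * i]     = even;          // MERGE box interleaves the two arrays
        signal[2 * i + 1] = odd;
    }
    return signal;
}
```

With the running example, inverse(new double[] { 7 }, new double[] { -8 }) restores the original signal (11, 3).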