Greedy dictionary learning methods for sparse representation of signals (PhD)


dc.contributor.advisor Dr. Anil Kumar Sao
dc.contributor.author Abrol, Vinayak
dc.date.accessioned 2020-06-29T09:51:03Z
dc.date.available 2020-06-29T09:51:03Z
dc.date.issued 2018-01-19
dc.identifier.uri http://hdl.handle.net/123456789/234
dc.description A thesis submitted for the award of the degree of Doctor of Philosophy under the guidance of Dr. Anil Kumar Sao (Faculty, SCEE) en_US
dc.description.abstract In the framework of sparse representation, signals are expressed as a linear combination of a few atoms from a resource called a "dictionary". Sparse representations have received considerable attention in various signal processing applications, and the chances of obtaining the sparsest representation are better with an overcomplete dictionary, in which the number of atoms exceeds the signal dimension. In the last decade, various dictionary learning (DL) algorithms have been developed, with varying algorithmic complexity and success rates. However, the rule of thumb "the sparser the representation, the better the dictionary" generally requires a dictionary that is a dense matrix. Manipulating such a dictionary can therefore be computationally costly, both at the DL stage and later when the dictionary is used in signal processing applications. This thesis addresses this problem and is mainly focused on an alternative, efficient DL approach for sparse representation of signals and its use in several signal processing applications. In particular, the thesis explores the framework of learning a factorisable dictionary, where the dictionary is expressed as a product of matrices that are either sparse or have an analytical form, and the individual factors can be chosen for a particular application. This makes the dictionary intrinsically fast to learn and manipulate, and it requires less storage. To speed up the DL process, the proposed approach performs dictionary preconditioning by recasting the underlying DL objective function in a transformed space, which results in operations involving only sparse matrices. The approach is further extended to cases where the transformation is non-invertible. In such cases the representation is not unique, owing to the non-trivial null space of the transformation matrix, and hence the proposed approach employs sparsity as a regularizer to obtain the intended representation. Further, to update the dictionary, a fast greedy dictionary selection (GDS) algorithm is proposed, in which the dictionary is built by selecting columns from multiple training candidates. The greedy algorithm is based on the submodularity property of the DL problem, and it is shown that the algorithm achieves at least a constant fraction of the optimal value of the DL objective function. For problems where the transformation employed in dictionary preconditioning needs to be learned, rather than having an analytical or parametric form, a fast exemplar selection (FES) algorithm is proposed. It selects an optimal subset that spans the same space as the whole training data, based on the principles of low-rank matrix recovery/approximation. To make the approach scalable to large ensembles of training signals, incremental Cholesky decomposition and block matrix inversion are employed to speed up the sampling process. Using the proposed greedy DL algorithms, experimental results are demonstrated for learning dictionaries in the context of several well-known matrix factorization problems with a factorisable structure, namely double sparse dictionary learning (DSDL), kernel sparse dictionary learning (KSDL), archetypal analysis (AA), and functional connectivity analysis of functional magnetic resonance imaging (fMRI) signals. For instance, the DSDL problem involves learning a dictionary as the product of an analytical transform and a sparse matrix.
Similarly, the KSDL and AA problems involve learning a dictionary as the product of the training exemplars and a sparse matrix. Finally, the work in this thesis is extended to use sparse representation for the inference problem of voiced/non-voiced (V/NV) detection and for signal recovery from compressively sensed speech signals. It exploits the fact that the glottal activity inherent in the speech production mechanism is captured in the sparse representation, which can be used for V/NV detection. The estimation of the sparse vector is influenced by the dictionary, which is difficult to obtain when only compressed samples of the signal are available. To address this, the thesis proposes a new method which shows that it is indeed possible to build the sparsifying dictionary using only compressive samples. In particular, empirical mode decomposition (EMD) of the compressive samples is used to form the atoms of the dictionary, motivated by the fact that the compressively sensed samples have an envelope similar to that of the original samples. en_US
dc.language.iso en_US en_US
dc.publisher IITMandi en_US
dc.subject NFM en_US
dc.subject DSDL en_US
dc.title Greedy dictionary learning methods for sparse representation of signals (PhD) en_US
dc.type Thesis en_US
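
As an illustration of the factorisable (double sparse) dictionary model described in the abstract, a minimal sketch of the DSDL objective can be written in generic notation (the symbols below are illustrative and not taken from the thesis itself):

\[
\min_{S,\,X}\;\|Y - \Phi S X\|_F^2
\quad\text{s.t.}\quad \|x_i\|_0 \le t \;\;\forall i,
\qquad \|s_j\|_0 \le k \;\;\forall j,
\]

where $Y$ holds the training signals as columns, $\Phi$ is a fixed analytical transform (e.g., a DCT or wavelet basis), $S$ is a sparse matrix so that the effective dictionary $D = \Phi S$ is cheap to store and to apply, and $X$ contains the sparse representation coefficients. In the same spirit, the KSDL and AA formulations mentioned in the abstract replace the analytical factor with the training exemplars themselves, i.e., $D = Y S$ with $S$ sparse.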

