Suppose we are given a finite square matrix $A \in \mathbb{R}^{n \times n}$. We would like to factorize $A$ into a product of $K$ (where $K$ is finite) matrices,
$$A = A_1 A_2 \cdots A_K,$$
where the factors $A_1, \dots, A_K$ are required to have a certain sparsity pattern, i.e., a subset of their entries must be zero.
Definition: Let $n \in \mathbb{N}$ and $[n] := \{1, 2, \dots, n\}$. A sparsity pattern is a set $S \subseteq [n] \times [n]$ of ordered pairs that indicates which entries of a matrix are constrained to be zero.
Note that $S \subseteq [n] \times [n]$. If there are no sparsity constraints, then $S$ is the empty set. The set $S^{\mathrm{c}} := ([n] \times [n]) \setminus S$ contains the indices of the entries of a matrix which are unconstrained, i.e., “free”.
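For concreteness, a sparsity pattern can be encoded numerically as a boolean mask of the free entries. The sketch below is my own scaffolding (NumPy, 0-based indices, hypothetical helper names), not part of the problem statement:

```python
import numpy as np

# A sparsity pattern S (a set of (i, j) pairs, here 0-based) is encoded
# as a boolean mask M of the *free* entries: M[i, j] is True iff
# (i, j) is NOT in S, i.e., iff the entry is unconstrained.

def mask_from_pattern(n, S):
    """Boolean mask of the free entries for the pattern S."""
    M = np.ones((n, n), dtype=bool)
    for i, j in S:
        M[i, j] = False
    return M

def project(X, M):
    """Zero out every constrained entry of X (the entries where M is False)."""
    return np.where(M, X, 0.0)
```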
Problem: Given a matrix $A \in \mathbb{R}^{n \times n}$ and a particular sparsity pattern $S$, how can we decide whether it is possible to factorize $A$ into $K$ factor matrices $A_1, \dots, A_K$ (where $K$ is finite) whose sparsity pattern is the specified one? If such a factorization is possible, how can we compute the number of factors $K$ and each factor $A_k$?
This problem is easy to pose, but I have found it very hard to attack. I have tried various approaches… and none worked. Does anyone have any ideas?
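To make “attack” concrete, here is a sketch of one natural numerical heuristic: projected gradient descent on the squared Frobenius error $\|A_1 A_2 \cdots A_K - A\|_F^2$, re-zeroing the constrained entries after every step. This is only an illustration of the search problem, not a method I claim settles anything; the function name and all parameters below are assumptions of mine:

```python
import numpy as np

def factorize_heuristic(A, masks, iters=20000, lr=1e-2, seed=0):
    """Search for factors F[0] @ ... @ F[K-1] ~ A, with F[k] zero outside
    masks[k], by projected gradient descent on ||F[0]...F[K-1] - A||_F^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    K = len(masks)
    # random initialization restricted to the free entries
    F = [np.where(m, rng.standard_normal((n, n)) / np.sqrt(n), 0.0)
         for m in masks]
    for _ in range(iters):
        # prefix[k] = F[0]...F[k-1],  suffix[k] = F[k+1]...F[K-1]
        prefix = [np.eye(n)]
        for X in F[:-1]:
            prefix.append(prefix[-1] @ X)
        suffix = [np.eye(n)]
        for X in reversed(F[1:]):
            suffix.append(X @ suffix[-1])
        suffix.reverse()
        R = prefix[-1] @ F[-1] - A              # residual of the full product
        for k in range(K):
            # gradient of 0.5 * ||product - A||_F^2 with respect to F[k]
            grad = prefix[k].T @ R @ suffix[k].T
            F[k] = np.where(masks[k], F[k] - lr * grad, 0.0)
    P = np.eye(n)
    for X in F:
        P = P @ X
    return F, np.linalg.norm(P - A)
```

Even when this converges to a tiny residual, it only gives numerical evidence of factorizability; a large residual proves nothing either, since the objective is nonconvex.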
Example: suppose we are given a real $n \times n$ matrix $A$. We would like to factorize $A$ into factors whose sparsity pattern is represented pictorially by
$$\begin{bmatrix} \times & \times & 0 & \cdots & 0 \\ \times & \times & \times & \ddots & \vdots \\ 0 & \times & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \times \\ 0 & \cdots & 0 & \times & \times \end{bmatrix},$$
where a “$\times$” in the $(i, j)$ entry means that the entry is unconstrained, i.e., it is allowed to take any value in $\mathbb{R}$. If you are familiar with numerical PDEs, you have certainly encountered matrices with this sparsity pattern when solving diffusion or wave equations in one spatial dimension using finite difference methods.
Even though this example seems fairly “simple” (tridiagonal matrices are about as structured as sparse matrices get, after all!), I have not yet found an approach that works.
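As a sanity check of the heuristic sketched above (again, entirely my own scaffolding): construct an $A$ that is by design a product of two tridiagonal factors, so a factorization certainly exists, and see whether the search recovers a near-zero residual.

```python
import numpy as np

n = 4                                            # small test size
# boolean mask of a tridiagonal pattern: free iff |i - j| <= 1
tri = np.array([[abs(i - j) <= 1 for j in range(n)] for i in range(n)])

rng = np.random.default_rng(1)
B = np.where(tri, rng.standard_normal((n, n)), 0.0)   # tridiagonal factor
C = np.where(tri, rng.standard_normal((n, n)), 0.0)   # tridiagonal factor
A = B @ C                                        # factorizable by construction

F, err = factorize_heuristic(A, [tri, tri])      # sketch defined above
print(err)                                       # near zero suggests success
```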