==============================================================================

  Fixed-Basis Low-Rank Tensor Approximation (FB-LRTA)
  Version 5

==============================================================================


OVERVIEW:

  This code implements a "fixed-basis" (FB) version of LRTA, an
  algorithm for multispectral/hyperspectral fusion described in the
  paper:

    N. Liu, L. Li, W. Li, R. Tao, J. E. Fowler, and J. Chanussot,
    "Hyperspectral Restoration and Fusion with Multispectral Imagery via
    Low-Rank Tensor-Approximation," IEEE Transactions on Geoscience and
    Remote Sensing, vol. 59, no. 9, pp. 7817-7830, September 2021.

  What is the FB variant of LRTA? First, the FB variant uses only the
  mode-3 nuclear norm as the optimization objective (i.e., (3) in the
  LRTA paper becomes min ||X_(3)||_* rather than the sum of all three
  norms). This simplification is justified because the mode-1 and
  mode-2 norms contribute very little in practice (e.g., alpha_1 and
  alpha_2 end up on the order of 0.01 while alpha_3 is ~0.97 in
  run_original_LRTA.m). Under these conditions, LRTA reduces to the
  iterative updating of M_3 (Step 7 of Alg. 1), which is accomplished
  by equation (17) via singular-value thresholding (SVT). SVT in turn
  requires an SVD, and that SVD constitutes the bulk of the
  computation of LRTA. However, I found that the singular-vector
  matrices from the SVD of the hyperspectral image (L_(3) in (2)) can
  serve as a "fixed basis" for the SVT; that is, decompose L_(3) as
  L_(3) = U*S*V^T at the start of LRTA and keep U and V fixed
  throughout the LRTA iterations (so that only one SVD is ever
  performed). Then, instead of performing a full SVD at each
  iteration, assume that the singular vectors are already known and
  apply shrinkage only to the diagonal; i.e., replace the SVT
  (D_tau(X) in (7)) with U*diag(s_hat)*V^T, where
  s_hat = S_tau(diag(U^T*X*V)) and S_tau() is the scalar-shrinkage
  operator. This speeds up LRTA (by about 50% on my machine) and,
  interestingly, increases the PSNR. The reason for the speedup is
  obvious; I am still working on a theoretical analysis of why the
  PSNR improvement occurs (the improvement can range up to several dB
  and so is often rather significant). Note that the implementation
  here actually performs the fixed-basis SVT on X*X^T, which is
  symmetric, and therefore only the U matrix of singular vectors needs
  to be retained.
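  As a concrete illustration, here is a minimal numpy sketch of the
  fixed-basis SVT versus the standard SVT. The function names and the
  random test matrix below are illustrative only and do not appear in
  this package (the actual implementation operates in pytorch):

  ```python
  import numpy as np

  def svt_full(X, tau):
      """Standard SVT, D_tau(X): a full SVD at every call."""
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

  def svt_fixed_basis(X, U, Vt, tau):
      """Fixed-basis SVT: U and V come from a one-time SVD of L_(3);
      only the diagonal of U^T*X*V is shrunk, so no per-iteration
      SVD is needed."""
      d = np.diag(U.T @ X @ Vt.T)            # diag(U^T * X * V)
      s_hat = np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)  # S_tau()
      return U @ np.diag(s_hat) @ Vt

  # One-time SVD of a stand-in for the unfolded hyperspectral image
  # L_(3) (here just a random matrix for demonstration):
  rng = np.random.default_rng(0)
  L3 = rng.standard_normal((8, 5))
  U, s, Vt = np.linalg.svd(L3, full_matrices=False)

  # Sanity check: when X shares the basis of L_(3) (e.g., X = L_(3)
  # itself), the fixed-basis SVT coincides with the full SVT.
  tau = 0.5
  assert np.allclose(svt_fixed_basis(L3, U, Vt, tau), svt_full(L3, tau))
  ```

  During the LRTA iterations X drifts away from the basis of L_(3),
  so the two operators no longer coincide; the fixed-basis version is
  an approximation that trades exactness for speed (and, empirically,
  PSNR). Because this package applies the idea to the symmetric
  product X*X^T, U and V coincide there and only U must be kept.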

  The main reason that I devised the FB variant of LRTA is because I
  found that, when attempting to implement LRTA in tensorflow, the
  tensorflow implementation of SVD is horribly unstable (this is due
  to a poor implementation of eigendecomposition in the underlying
  CUDA libraries - SVD does not lend itself well to GPU
  implementation). So, I created the FB-LRTA as a workaround for
  tensorflow's SVD, expecting a cost in PSNR performance, although the
  PSNR actually ended up increasing.

  FB-LRTA Version 4 revises FB-LRTA Version 3 to switch from
  tensorflow to pytorch. The operation of the code is otherwise
  identical.

  FB-LRTA Version 5 includes some bug fixes as well as feature
  enhancements.


==============================================================================


COPYRIGHT AND LICENSE INFORMATION:

  Copyright (C) 2023-2025  James E. Fowler
  
  The programs and library herein are free software; you can redistribute
  them and/or modify them under the terms of the GNU General Public License
  as published by the Free Software Foundation; either version 2
  of the License, or (at your option) any later version.

  The library and all programs herein are distributed in the hope that
  they will be useful, but WITHOUT ANY WARRANTY; without even the implied
  warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
  the full text of the appropriate license for more details.
  

==============================================================================
