torchdecomp package
Submodules
torchdecomp.cholesky module
- class torchdecomp.cholesky.CholeskyLayer(x)[source]
Bases:
Module
Cholesky Decomposition Layer
A symmetric matrix X (n times n) is decomposed into the product of L (n times n) and L^T (n times n).
- x
A symmetric matrix X (n times n)
- Type:
torch.Tensor
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)  # Test datasets
>>> x = torch.mm(x, x.t())  # Symmetrization
>>> cholesky_layer = td.CholeskyLayer(x)  # Instantiation
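Since the package implements decompositions as PyTorch modules whose parameters are optimized by gradient descent (see the Note under print_named_parameters below), the objective behind CholeskyLayer can be sketched in plain PyTorch. This is an illustration of the objective only, not the library's own forward/loss API:
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)
>>> x = torch.mm(x, x.t())  # symmetric positive semi-definite target
>>> L = torch.randn(6, 6, requires_grad=True)  # unconstrained parameter
>>> opt = torch.optim.SGD([L], lr=0.01)
>>> for step in range(1000):
...     opt.zero_grad()
...     tril = torch.tril(L)  # keep only the lower-triangular part
...     loss = ((x - tril @ tril.t()) ** 2).sum()  # Frobenius reconstruction error
...     loss.backward()
...     opt.step()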
torchdecomp.factor module
- class torchdecomp.factor.FactorLayer(x, n_components)[source]
Bases:
Module
Factor Matrix Layer
A matrix X (n times m) is projected to a smaller matrix XV (n times k, k << m).
- x
A matrix X (n times m)
- Type:
torch.Tensor
- n_components
The number of lower dimensions (k)
- Type:
int
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> factor_layer = td.FactorLayer(x, 3)  # Instantiation
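For intuition, the projection itself is a single matrix product; a minimal plain-PyTorch sketch, where V stands for the m times k factor the layer is assumed to learn:
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)   # X: n x m
>>> V = torch.randn(6, 3)    # V: m x k, with k << m
>>> z = torch.mm(x, V)       # XV: the n x k projection
>>> z.shape
torch.Size([10, 3])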
torchdecomp.helper module
- torchdecomp.helper.create_dummy_matrix(class_vector)[source]
Creates a dummy matrix from a class label vector.
- Parameters:
class_vector – A PyTorch tensor with numeric elements
- Returns:
A PyTorch tensor filled with dummy vectors
Example
>>> import torchdecomp as td
>>> import torch
>>> td.create_dummy_matrix(torch.tensor([0, 1, 2, 1, 0, 2, 1, 0]))
Note
The number of rows is the number of classes and the number of columns is the number of data points.
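Assuming the dummy vectors are one-hot indicator vectors (an assumption consistent with the Note above), an equivalent construction in plain PyTorch would be:
>>> import torch
>>> import torch.nn.functional as F
>>> labels = torch.tensor([0, 1, 2, 1, 0, 2, 1, 0])
>>> F.one_hot(labels).t()  # rows: 3 classes, columns: 8 data points
tensor([[1, 0, 0, 0, 1, 0, 0, 1],
        [0, 1, 0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1, 0, 0]])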
- torchdecomp.helper.print_named_parameters(named_params)[source]
Outputs the contents of the named parameters.
- Parameters:
named_params – The named parameters (as returned by named_parameters()) of an object instantiated from a user-defined class that inherits from PyTorch's nn.Module
- Returns:
Leaf variables, i.e., PyTorch Tensor(s) created with requires_grad_(), the requires_grad=True option, or as nn.Parameter (cf. nn.Module).
Example
>>> import torchdecomp as td
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> class MLPNet(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.fc1 = nn.Linear(1 * 28 * 28, 512)
...         self.fc2 = nn.Linear(512, 512)
...         self.fc3 = nn.Linear(512, 10)
...         self.dropout1 = nn.Dropout2d(0.2)
...         self.dropout2 = nn.Dropout2d(0.2)
...     def forward(self, x):
...         x = F.relu(self.fc1(x))
...         x = self.dropout1(x)
...         x = F.relu(self.fc2(x))
...         x = self.dropout2(x)
...         return F.relu(self.fc3(x))
>>> model = MLPNet()
>>> td.print_named_parameters(model.named_parameters())
Note
These Tensor objects are subject to optimization by gradient descent (e.g., torch.optim.SGD).
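As a minimal plain-PyTorch illustration of this point (not specific to torchdecomp):
>>> import torch
>>> import torch.nn as nn
>>> model = nn.Linear(4, 2)
>>> # The named parameters ('weight' and 'bias') are exactly the leaf
>>> # tensors that an optimizer updates by gradient descent.
>>> opt = torch.optim.SGD(model.parameters(), lr=0.1)
>>> [name for name, p in model.named_parameters()]
['weight', 'bias']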
torchdecomp.ica module
- class torchdecomp.ica.DDICALayer(x, sigma, alpha)[source]
Bases:
Module
Deep Deterministic Independent Component Analysis (DDICA) Layer
Mini-batch data (x) is expected as input.
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> loss = td.DDICALayer(x, sigma=1, alpha=1)  # Instantiation
Note
This model is very sensitive to initial values. If the iterations do not make progress, re-run it a few times.
- class torchdecomp.ica.KurtosisICALayer[source]
Bases:
Module
Kurtosis-based Independent Component Analysis Layer
Mini-batch data (x) is expected as input.
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> rotation_layer = td.RotationLayer(x)  # Instantiation
>>> x_rotated = rotation_layer(x)
>>> loss = td.KurtosisICALayer()
>>> loss(x_rotated)
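Putting the two layers together, a training loop might look as follows. This is a sketch under assumptions: that RotationLayer registers its rotation matrix as a learnable parameter (it is an nn.Module) and that the ICA layer returns a scalar loss. The same pattern applies to NegentropyICALayer below.
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)
>>> rotation_layer = td.RotationLayer(x)
>>> loss_fn = td.KurtosisICALayer()
>>> opt = torch.optim.SGD(rotation_layer.parameters(), lr=0.01)
>>> for step in range(100):
...     opt.zero_grad()
...     loss = loss_fn(rotation_layer(x))  # non-Gaussianity objective
...     loss.backward()
...     opt.step()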
- class torchdecomp.ica.NegentropyICALayer[source]
Bases:
Module
Negentropy-based Independent Component Analysis Layer
Mini-batch data (x) is expected as input.
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> rotation_layer = td.RotationLayer(x)  # Instantiation
>>> x_rotated = rotation_layer(x)
>>> loss = td.NegentropyICALayer()  # Instantiation
>>> loss(x_rotated)
- class torchdecomp.ica.RotationLayer(x)[source]
Bases:
Module
Rotation Matrix Factorization Layer
A matrix X (n times m) is rotated by a rotation matrix A to give XA (n times m).
- x
A matrix X (n times m)
- Type:
torch.Tensor
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> rotation_layer = td.RotationLayer(x)  # Instantiation
>>> x_rotated = rotation_layer(x)
torchdecomp.lu module
- class torchdecomp.lu.LULayer(x)[source]
Bases:
Module
LU Decomposition Layer
A square matrix X (n times n) is decomposed into the product of L (n times n) and U (n times n).
- x
A square matrix X (n times n)
- Type:
torch.Tensor
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)  # Test datasets
>>> lu_layer = td.LULayer(x)  # Instantiation
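As with CholeskyLayer, the factorization can be sketched in plain PyTorch (an illustration of the objective, not the library's implementation): learn a unit lower-triangular L and an upper-triangular U such that LU approximates X.
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)
>>> L = torch.randn(6, 6, requires_grad=True)
>>> U = torch.randn(6, 6, requires_grad=True)
>>> opt = torch.optim.SGD([L, U], lr=0.01)
>>> for step in range(1000):
...     opt.zero_grad()
...     lower = torch.tril(L, diagonal=-1) + torch.eye(6)  # unit diagonal
...     upper = torch.triu(U)
...     loss = ((x - lower @ upper) ** 2).sum()  # Frobenius error
...     loss.backward()
...     opt.step()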
torchdecomp.nmf module
- class torchdecomp.nmf.NMFLayer(x, n_components, l1_lambda_w=2.220446049250313e-16, l1_lambda_h=2.220446049250313e-16, l2_lambda_w=2.220446049250313e-16, l2_lambda_h=2.220446049250313e-16, bin_lambda_w=2.220446049250313e-16, bin_lambda_h=2.220446049250313e-16, eps=2.220446049250313e-16, beta=2)[source]
Bases:
Module
Non-negative Matrix Factorization Layer
A non-negative matrix X (n times m) is decomposed into the product of W (n times k) and H (k times m).
- x
A non-negative matrix X (n times m)
- Type:
torch.Tensor
- n_components
The number of lower dimensions (k)
- Type:
int
- l1_lambda_w
L1 regularization parameter for W
- Type:
float
- l1_lambda_h
L1 regularization parameter for H
- Type:
float
- l2_lambda_w
L2 regularization parameter for W
- Type:
float
- l2_lambda_h
L2 regularization parameter for H
- Type:
float
- bin_lambda_w
Binarization regularization parameter for W
- Type:
float
- bin_lambda_h
Binarization regularization parameter for H
- Type:
float
- eps
Offset value to avoid zero division
- Type:
float
- beta
Beta parameter of Beta-divergence
- Type:
float
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.rand(10, 6)  # Non-negative test datasets
>>> nmf_layer = td.NMFLayer(x, 3)  # Instantiation
- loss(pos, neg, pos_w, neg_w, pos_h, neg_h)[source]
Total loss combining the reconstruction term and the regularization terms
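For reference, beta=2 corresponds to the squared Euclidean distance, beta=1 to the Kullback-Leibler divergence, and beta=0 to the Itakura-Saito divergence. The factorization the layer targets can be sketched in plain PyTorch (an illustration only, not the library's own update rule), keeping W and H non-negative through softplus and using the beta=2 loss:
>>> import torch
>>> import torch.nn.functional as F
>>> torch.manual_seed(123456)
>>> x = torch.rand(10, 6)  # non-negative data
>>> w = torch.randn(10, 3, requires_grad=True)
>>> h = torch.randn(3, 6, requires_grad=True)
>>> opt = torch.optim.SGD([w, h], lr=0.05)
>>> for step in range(2000):
...     opt.zero_grad()
...     recon = F.softplus(w) @ F.softplus(h)  # W, H >= 0 by construction
...     loss = ((x - recon) ** 2).sum()  # beta=2: squared Euclidean
...     loss.backward()
...     opt.step()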
torchdecomp.qr module
- class torchdecomp.qr.QRLayer(x)[source]
Bases:
Module
QR Decomposition Layer
A square matrix X (n times n) is decomposed into the product of Q (n times n) and R (n times n).
- x
A square matrix X (n times n)
- Type:
torch.Tensor
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)  # Test datasets
>>> qr_layer = td.QRLayer(x)  # Instantiation
torchdecomp.rec module
- class torchdecomp.rec.RecLayer(x, n_components)[source]
Bases:
Module
Reconstruction Matrix Layer
A matrix X (n times m) is projected to a smaller matrix XV and then reconstructed as XVV^T, where the size of V is m times k (k << m).
- x
A matrix X (n times m)
- Type:
torch.Tensor
- n_components
The number of lower dimensions (k)
- Type:
int
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)  # Test datasets
>>> rec_layer = td.RecLayer(x, 3)  # Instantiation
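The objective can be sketched in plain PyTorch (a PCA-like illustration of what the layer targets, not the library's implementation): find V that minimizes the reconstruction error of XVV^T.
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(10, 6)
>>> V = torch.randn(6, 3, requires_grad=True)  # m x k factor
>>> opt = torch.optim.SGD([V], lr=0.01)
>>> for step in range(1000):
...     opt.zero_grad()
...     loss = ((x - x @ V @ V.t()) ** 2).sum()  # Frobenius error
...     loss.backward()
...     opt.step()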
torchdecomp.symrec module
- class torchdecomp.symrec.SymRecLayer(x, n_components)[source]
Bases:
Module
Symmetric Reconstruction Layer
A symmetric matrix X (n times n) is decomposed into the product of Q (n times k), Lambda (k times k), and Q^T (k times n).
- x
A symmetric matrix X (n times n)
- Type:
torch.Tensor
- n_components
The number of lower dimensions (k)
- Type:
int
Example
>>> import torchdecomp as td
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)  # Test datasets
>>> x = torch.mm(x, x.t())  # Symmetrization
>>> symrec_layer = td.SymRecLayer(x, 3)  # Instantiation
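The objective can be sketched in plain PyTorch, assuming Lambda is diagonal as in a truncated eigendecomposition (an illustration only, not the library's implementation):
>>> import torch
>>> torch.manual_seed(123456)
>>> x = torch.randn(6, 6)
>>> x = torch.mm(x, x.t())  # symmetric target
>>> Q = torch.randn(6, 3, requires_grad=True)
>>> lam = torch.randn(3, requires_grad=True)
>>> opt = torch.optim.SGD([Q, lam], lr=0.01)
>>> for step in range(1000):
...     opt.zero_grad()
...     recon = Q @ torch.diag(lam) @ Q.t()  # Q Lambda Q^T
...     loss = ((x - recon) ** 2).sum()
...     loss.backward()
...     opt.step()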
Module contents
A set of matrix decomposition algorithms implemented as PyTorch classes
All of the classes and functions documented in the submodules above are re-exported at the package level (e.g., torchdecomp.CholeskyLayer, torchdecomp.NMFLayer, torchdecomp.create_dummy_matrix); their documentation is identical to the corresponding submodule entries, so they can be used directly after import torchdecomp, as the examples above already do.