domainlab.models package¶
Submodules¶
domainlab.models.a_model module¶
operations that all kinds of models should have
- class domainlab.models.a_model.AModel[source]¶
Bases:
Module
operations that all models (classification, segmentation, seq2seq) should have
- cal_reg_loss(tensor_x, tensor_y, tensor_d, others=None)[source]¶
task-independent regularization loss for domain generalization
- abstract cal_task_loss(tensor_x, tensor_y)[source]¶
Calculate the task loss
- Parameters:
tensor_x – input
tensor_y – label
- Returns:
task loss
- dset_decoration_args_algo(args, ddset)[source]¶
decorate the dataset so that extra entries are returned when loading an item; for instance, JiGen needs the permutation index. This parent-class function delegates the decoration to its decoratee.
- extract_semantic_feat(tensor_x)[source]¶
extract the semantic feature (not the domain feature); note that extracting the semantic feature is an action and thus more general than calling a static network (module)’s forward function, since there can be extra steps such as reshaping the tensor
- list_inner_product(list_loss, list_multiplier)[source]¶
compute the inner product between a list of regularization losses and a list of multipliers: the length of each list is the number of regularizers, and for each element of the list the first dimension of the tensor is the mini-batch. The return value of list_inner_product keeps the mini-batch structure; aggregation happens only along the list.
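As an illustration, a standalone sketch of this computation (pure PyTorch, not the library code; shapes and names here are assumptions):

```python
import torch

def list_inner_product_sketch(list_loss, list_multiplier):
    """Weighted sum over a list of per-example regularization losses.

    Each tensor in list_loss has the mini-batch size as its first
    dimension; aggregation happens only along the list, so the result
    keeps one loss value per example.
    """
    terms = [mult * loss for loss, mult in zip(list_loss, list_multiplier)]
    return torch.stack(terms, dim=0).sum(dim=0)

batch_size = 4
list_loss = [torch.rand(batch_size), torch.rand(batch_size)]  # two regularizers
out = list_inner_product_sketch(list_loss, [0.1, 1.0])
print(out.shape)  # torch.Size([4]) -- mini-batch structure preserved
```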
- property metric4msel¶
metric for model selection
- property multiplier4task_loss¶
the multiplier for the task loss defaults to 1, except for VAE-family models
- property name¶
get the name of the algorithm
- property net_invar_feat¶
if it exists, return the neural network for extracting invariant features
- property p_na_prefix¶
common prefix for Models
- print_parameters()[source]¶
Function to print all parameters of the object. Can be used to print the parameters of every child class.
- reset_aux_net()[source]¶
after the feature extractor has been reset, the input dimension of other networks, such as the domain classifier, will also change (for command-line usage only)
domainlab.models.a_model_classif module¶
operations that all classification models should have
- class domainlab.models.a_model_classif.AModelClassif(**kwargs)[source]¶
Bases:
AModel
operations that all classification models should have
- cal_loss_gen_adv(x_natural, x_adv, vec_y)[source]¶
calculate loss function for generation of adversarial images
- cal_task_loss(tensor_x, tensor_y)[source]¶
Calculate the task loss; used within the cal_loss methods of models that subclass AModelClassif. Cross-entropy loss is used by default but can be overridden by subclasses as necessary.
- Parameters:
tensor_x – input
tensor_y – label
- Returns:
task loss
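Since subclasses may swap out the default cross entropy, a hypothetical override might look as follows (a sketch only: ModelSmoothedCE is not part of the library, and it assumes tensor_y holds class indices and that features flow through extract_semantic_feat and net_classifier):

```python
import torch.nn.functional as F
from domainlab.models.a_model_classif import AModelClassif

class ModelSmoothedCE(AModelClassif):
    """Hypothetical subclass replacing plain cross entropy with a
    label-smoothed variant."""

    def cal_task_loss(self, tensor_x, tensor_y):
        feat = self.extract_semantic_feat(tensor_x)
        logits = self.net_classifier(feat)
        # per-example loss, so downstream aggregation stays flexible
        return F.cross_entropy(logits, tensor_y,
                               label_smoothing=0.1, reduction="none")
```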
- property dim_y¶
the class embedding dimension
- infer_y_vpicn(tensor)[source]¶
- Parameters:
tensor – input
- Returns:
a tuple (v, p, i, c, n), where
v: vector of one-hot class labels,
p: vector of probabilities,
i: class label index,
c: confidence (maximum probability),
n: list of class names
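For orientation, unpacking the return value might look like this (a sketch; model and tensor_x stand for any AModelClassif instance and input batch):

```python
# v: one-hot labels, p: probabilities, i: label indices,
# c: confidences (max probability), n: class names
vec_y, prob, idx, conf, names = model.infer_y_vpicn(tensor_x)
# predicted class name and confidence of the first example
print(names[int(idx[0])], float(conf[0]))
```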
- match_feat_fun_na = 'cal_logit_y'¶
- property metric4msel¶
metric for model selection
- property net_classifier¶
domainlab.models.args_jigen module¶
hyper-parameters for JiGen
domainlab.models.args_vae module¶
domainlab.models.interface_vae_xyd module¶
Base Class for XYD VAE
domainlab.models.model_custom module¶
- class domainlab.models.model_custom.AModelCustom(**kwargs)[source]¶
Bases:
AModelClassif
AModelCustom.
domainlab.models.model_dann module¶
construct the feature extractor, the task neural network (e.g. for classification), and the domain-classification network
- domainlab.models.model_dann.mk_dann(parent_class=<class 'domainlab.models.a_model_classif.AModelClassif'>, **kwargs)[source]¶
Instantiate a domain-adversarial neural network (DANN) model
- Details:
The model is trained to solve two tasks: 1. standard image classification; 2. domain classification. To this end, a feature extractor is adversarially trained to minimize the loss of the image classifier while maximizing the loss of the domain classifier. For more details, see: Ganin, Yaroslav, et al. “Domain-adversarial training of neural networks.” The Journal of Machine Learning Research 17.1 (2016): 2096-2030.
- Parameters:
parent_class (AModel, optional) – Class object determining the task type. Defaults to AModelClassif.
- Returns:
model inheriting from parent class
- Return type:
ModelDAN
- Input Parameters:
list_str_y: list of labels
list_d_tr: list of training domains
alpha: weight of the domain-classification loss: total_loss = task_loss + alpha * domain_classification_loss
net_encoder: neural network to extract the features (input: training data)
net_classifier: neural network (input: output of net_encoder; output: label prediction)
net_discriminator: neural network (input: output of net_encoder; output: prediction of the training domain)
- Usage:
For a concrete example, see: https://github.com/marrlab/DomainLab/blob/master/tests/test_mk_exp_dann.py
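Schematically, wiring up the factory might look like the sketch below (placeholder networks; the split between factory and constructor arguments is an assumption, so the linked test is authoritative):

```python
from torch import nn
from domainlab.models.model_dann import mk_dann

dim_feat, num_classes, num_domains = 64, 2, 3
ModelDAN = mk_dann()  # returns a model class inheriting from AModelClassif
model = ModelDAN(
    list_str_y=["dog", "cat"],
    list_d_tr=["domain1", "domain2", "domain3"],
    alpha=1.0,  # total_loss = task_loss + alpha * domain_classification_loss
    net_encoder=nn.Sequential(nn.Flatten(), nn.LazyLinear(dim_feat)),
    net_classifier=nn.Linear(dim_feat, num_classes),
    net_discriminator=nn.Linear(dim_feat, num_domains),
)
```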
domainlab.models.model_diva module¶
DIVA
- domainlab.models.model_diva.mk_diva(parent_class=<class 'domainlab.models.model_vae_xyd_classif.VAEXYDClassif'>, **kwargs)[source]¶
Instantiate a domain invariant variational autoencoder (DIVA) with arbitrary task loss.
- Details:
This method creates a generative model based on a variational autoencoder, which can reconstruct the input images. To this end, three different encoders with latent variables are trained, each representing a latent subspace for the domain, class, and residual-feature information, respectively. The latent subspaces serve to disentangle the respective sources of variation. To reconstruct the input image, the three latent variables are fed into a decoder. Additionally, two classifiers are trained, which predict the domain and the class label. For more details, see: Ilse, Maximilian, et al. “Diva: Domain invariant variational autoencoders.” Medical Imaging with Deep Learning. PMLR, 2020.
- Parameters:
parent_class – Class object determining the task type. Defaults to VAEXYDClassif.
- Returns:
model inheriting from parent class.
- Return type:
ModelDIVA
- Input Parameters:
zd_dim: size of the latent space for domain-specific information
zy_dim: size of the latent space for class-specific information
zx_dim: size of the latent space for residual variance
chain_node_builder: creates the neural network specified by the user; an object of the class “VAEChainNodeGetter” (see domainlab/compos/vae/utils_request_chain_builder.py) initialized with a user request
list_str_y: list of labels
list_d_tr: list of training domains
gamma_d: weighting term for the domain classifier
gamma_y: weighting term for the class classifier
beta_d: weighting term for the domain encoder
beta_x: weighting term for the residual-variation encoder
beta_y: weighting term for the class encoder
- Usage:
For a concrete example, see: https://github.com/marrlab/DomainLab/blob/master/tests/test_mk_exp_diva.py
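A schematic call with the parameters above might look as follows (chain_node_builder is elided because it must be produced via VAEChainNodeGetter; the weighting values are placeholders, and the linked test is authoritative for the exact signature):

```python
from domainlab.models.model_diva import mk_diva

# must come from VAEChainNodeGetter
# (domainlab/compos/vae/utils_request_chain_builder.py); elided here
chain_node_builder = ...

ModelDIVA = mk_diva()  # returns a model class inheriting from VAEXYDClassif
model = ModelDIVA(
    chain_node_builder=chain_node_builder,
    zd_dim=32, zy_dim=32, zx_dim=0,
    list_str_y=["dog", "cat"],
    list_d_tr=["domain1", "domain2"],
    gamma_d=1e5, gamma_y=1e5,            # classifier weights
    beta_d=1.0, beta_x=1.0, beta_y=1.0,  # encoder weights
)
```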
domainlab.models.model_erm module¶
Empirical risk minimization
- domainlab.models.model_erm.mk_erm(parent_class=<class 'domainlab.models.a_model_classif.AModelClassif'>, **kwargs)[source]¶
Instantiate a Deepall (ERM) model
- Details:
Creates a model that trains a neural network via standard empirical risk minimization (ERM). The fact that the training data stem from different domains is neglected, as all domains are pooled together during training.
- Parameters:
parent_class (AModel, optional) – Class object determining the task type. Defaults to AModelClassif.
- Returns:
model inheriting from parent class
- Return type:
ModelERM
- Input Parameters:
custom neural network; the output dimension must equal the number of labels
- Usage:
For a concrete example, see: https://github.com/marrlab/DomainLab/blob/master/tests/test_mk_exp_erm.py
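Schematically (placeholder network; the constructor keyword names are assumptions, so the linked test is authoritative):

```python
from torch import nn
from domainlab.models.model_erm import mk_erm

num_classes = 2
# output dimension must equal the number of labels
net = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))
ModelERM = mk_erm()  # returns a model class inheriting from AModelClassif
model = ModelERM(net=net, list_str_y=["dog", "cat"])
```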
domainlab.models.model_hduva module¶
Hierarchical Domain Unsupervised Variational Auto-Encoding
- domainlab.models.model_hduva.mk_hduva(parent_class=<class 'domainlab.models.model_vae_xyd_classif.VAEXYDClassif'>, **kwargs)[source]¶
Instantiate a Hierarchical Domain Unsupervised VAE (HDUVA) with arbitrary task loss.
- Details:
The created model builds on a generative approach within the framework of variational autoencoders to facilitate generalization to new domains without supervision. HDUVA learns representations that disentangle domain-specific information from class-label-specific information even in complex settings where domain structure is not observed during training. To this end, latent variables are introduced, representing the information for the classes, domains, and the residual variance of the inputs, respectively. The domain structure is modelled by a hierarchical level and another latent variable, denoted as topic. Two encoder networks are trained: one converting an image to be compatible with the latent spaces of the domains, and another converting an image to a topic distribution. The overall objective is constructed by adding an additional weighted term to the ELBO loss. One benefit of this model is that the domain information during training can be incomplete. For more details, see: Sun, Xudong, and Buettner, Florian. “Hierarchical variational auto-encoding for unsupervised domain generalization.” arXiv preprint arXiv:2101.09436 (2021).
- Parameters:
parent_class – Class object determining the task type. Defaults to VAEXYDClassif.
- Returns:
model inheriting from parent class.
- Return type:
ModelHDUVA
- Input Parameters:
zd_dim: size of the latent space for domain-specific information (int)
zy_dim: size of the latent space for class-specific information (int)
zx_dim: size of the latent space for residual variance (int, defaults to 0)
chain_node_builder: an object which can build different maps via neural networks
list_str_y: list of labels (list of strings)
gamma_d: weighting term for the domain classification loss (float)
gamma_y: weighting term for the additional term in the ELBO loss (float)
beta_d: weighting term for the domain component of the ELBO loss (float)
beta_x: weighting term for the residual-variation component of the ELBO loss (float)
beta_y: weighting term for the class component of the ELBO loss (float)
beta_t: weighting term for the topic component of the ELBO loss (float)
device: device to which the model should be moved (cpu or gpu)
topic_dim: size of the latent space for topics (int, defaults to 3)
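A schematic instantiation with these parameters (same caveats as for DIVA: chain_node_builder is elided, the values are placeholders, and the exact signature should be checked against the library):

```python
from domainlab.models.model_hduva import mk_hduva

chain_node_builder = ...  # builds the maps via neural networks; elided here

ModelHDUVA = mk_hduva()  # returns a model class inheriting from VAEXYDClassif
model = ModelHDUVA(
    chain_node_builder=chain_node_builder,
    zd_dim=32, zy_dim=32, zx_dim=0,
    list_str_y=["dog", "cat"],
    gamma_d=1e5, gamma_y=1e5,
    beta_d=1.0, beta_x=1.0, beta_y=1.0, beta_t=1.0,
    device="cpu",
    topic_dim=3,
)
```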
domainlab.models.model_jigen module¶
JiGen model, similar to the DANN model
- domainlab.models.model_jigen.mk_jigen(parent_class=<class 'domainlab.models.a_model_classif.AModelClassif'>, **kwargs)[source]¶
Instantiate a JiGen model
- Details:
The model is trained to solve two tasks: 1. standard image classification; 2. source images are decomposed into grids of patches, which are then permuted, and the task is to recover the original image by predicting the correct permutation of the patches.
The (permuted) input data are first fed into an encoder neural network and then into the two classification networks. For more details, see: Carlucci, Fabio M., et al. “Domain generalization by solving jigsaw puzzles.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
- Parameters:
parent_class (AModel, optional) – Class object determining the task type. Defaults to AModelClassif.
- Returns:
model inheriting from parent class
- Return type:
ModelJiGen
- Input Parameters:
list_str_y: list of labels
list_str_d: list of domains
net_encoder: neural network (input: training data, standard and shuffled)
net_classifier_class: neural network (input: output of net_encoder; output: label prediction)
net_classifier_permutation: neural network (input: output of net_encoder; output: prediction of the permutation index)
coeff_reg: regularization weight: total_loss = img_class_loss + coeff_reg * perm_task_loss
nperm: number of permutations to use, 31 by default
prob_permutation: probability of shuffling the image tiles
- Usage:
For a concrete example, see: https://github.com/marrlab/DomainLab/blob/master/tests/test_mk_exp_jigen.py
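Schematically (placeholder networks; whether the permutation classifier needs nperm or nperm + 1 outputs, and the exact argument split, should be checked against the linked test):

```python
from torch import nn
from domainlab.models.model_jigen import mk_jigen

dim_feat, num_classes, nperm = 64, 2, 31
ModelJiGen = mk_jigen()  # returns a model class inheriting from AModelClassif
model = ModelJiGen(
    list_str_y=["dog", "cat"],
    list_str_d=["domain1", "domain2"],
    net_encoder=nn.Sequential(nn.Flatten(), nn.LazyLinear(dim_feat)),
    net_classifier_class=nn.Linear(dim_feat, num_classes),
    # nperm + 1 assumes one extra class for the original tile order
    net_classifier_permutation=nn.Linear(dim_feat, nperm + 1),
    coeff_reg=0.1,  # total_loss = img_class_loss + coeff_reg * perm_task_loss
    nperm=nperm,
    prob_permutation=0.7,
)
```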
domainlab.models.model_vae_xyd_classif module¶
Base Class for XYD VAE Classify
- class domainlab.models.model_vae_xyd_classif.VAEXYDClassif(chain_node_builder, zd_dim, zy_dim, zx_dim, **kwargs)[source]¶
Bases:
AModelClassif
InterfaceVAEXYD
Base Class for DIVA and HDUVA
- property multiplier4task_loss¶
the multiplier for the task loss defaults to 1.0, except for VAE-family models