domainlab.compos.vae.compos package

Submodules

domainlab.compos.vae.compos.decoder_concat_vec_reshape_conv module

Decoder that takes a concatenated latent representation.

class domainlab.compos.vae.compos.decoder_concat_vec_reshape_conv.DecoderConcatLatentFcReshapeConv(z_dim, i_c, i_h, i_w, cls_fun_nll_p_x, net_fc_z2flat_img, net_conv, net_p_x_mean, net_p_x_log_var)[source]

Bases: Module

The latent vector is reshaped to image size directly, then convolved to recover the textures of the original image.

cal_p_x_pars_loc_scale(vec_z)[source]

Compute the mean and variance of each pixel. :param vec_z: latent vector

concat_ydx(zy, zd, zx)[source]

Concatenate the zy, zd and zx latent vectors.

concat_ytdx(zy, topic, zd, zx)[source]

Concatenate the zy, topic, zd and zx latent vectors.
forward(z, img)[source]
Parameters:
  • z

  • img
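The decoder's first step, reshaping a flat latent vector into an image-shaped grid before convolution, can be sketched with the stdlib (shapes are illustrative; the real class does this with tensor operations):

```python
def flat_to_chw(vec, c, h, w):
    """Reshape a flat list of length c*h*w into nested [channel][row][col] form."""
    assert len(vec) == c * h * w
    return [[[vec[(ci * h + hi) * w + wi] for wi in range(w)]
             for hi in range(h)]
            for ci in range(c)]


img = flat_to_chw(list(range(12)), c=3, h=2, w=2)
print(img[1])  # second channel: [[4, 5], [6, 7]]
```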

domainlab.compos.vae.compos.decoder_concat_vec_reshape_conv_gated_conv module

Bridge pattern: separation of interface and implementation. This module feeds one concrete implementation into the parent class constructor.

class domainlab.compos.vae.compos.decoder_concat_vec_reshape_conv_gated_conv.DecoderConcatLatentFCReshapeConvGatedConv(z_dim, i_c, i_h, i_w)[source]

Bases: DecoderConcatLatentFcReshapeConv

Bridge pattern: separation of interface and implementation. This class feeds one concrete implementation into the parent class constructor. The latent vector is reshaped to image size directly, then convolved to recover the textures of the original image.
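The bridge pattern described above can be sketched in plain Python (the names below are illustrative, not DomainLab's actual API): a thin subclass builds one concrete implementation and hands it to the generic parent constructor, so the parent never depends on any particular network.

```python
class GenericDecoder:
    """Generic decoder: works with any implementation object passed in."""
    def __init__(self, net_conv):
        self.net_conv = net_conv  # implementation injected by the subclass

    def decode(self, z):
        return self.net_conv(z)


def make_gated_conv():
    # stand-in for constructing a gated-convolution network
    return lambda z: [2 * v for v in z]


class GatedConvDecoder(GenericDecoder):
    """Bridge: pick one implementation and feed it to the parent constructor."""
    def __init__(self):
        super().__init__(net_conv=make_gated_conv())


dec = GatedConvDecoder()
print(dec.decode([1, 2, 3]))  # [2, 4, 6]
```

Swapping the implementation only requires another thin subclass; the parent interface stays untouched.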

domainlab.compos.vae.compos.decoder_cond_prior module

class domainlab.compos.vae.compos.decoder_cond_prior.LSCondPriorLinearBnReluLinearSoftPlus(hyper_prior_dim, z_dim, hidden_dim=None)[source]

Bases: Module

Location-scale: map the hyper-prior to the prior distribution of the current layer.

forward(hyper_prior)[source]

Map the hyper-prior value to the distribution of the current latent variable.
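A location-scale head of this kind typically outputs an unconstrained location and pushes the raw scale through SoftPlus so it stays strictly positive. A minimal stdlib sketch of that final step (the class name says the full module also applies Linear-BatchNorm-ReLU layers before it; those are elided here):

```python
import math


def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))


def loc_scale_head(raw_loc, raw_scale):
    """Return (loc, scale): loc unconstrained, scale strictly positive."""
    return raw_loc, softplus(raw_scale)


loc, scale = loc_scale_head(0.3, -5.0)
print(scale > 0)  # True: even a very negative raw value maps to a positive scale
```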

domainlab.compos.vae.compos.decoder_losses module

Losses based on pixel-wise mean and variance.

class domainlab.compos.vae.compos.decoder_losses.NLLPixelLogistic256(reduce_dims=(1, 2, 3), bin_size=0.00390625)[source]

Bases: object

Compute the pixel-wise negative log-likelihood of an image, given the pixel-wise mean and variance. The pixel intensity range is divided into 256 bins; the probability mass of each bin is calculated through the c.d.f. as c.d.f.(x_{i,j}+bin_size/scale) - c.d.f.(x_{i,j}), where x_{i,j} is the standardized pixel value. # https://github.com/openai/iaf/blob/master/tf_utils/distributions.py#L29
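Assuming, as in the linked IAF code, that the per-pixel distribution is a discretized logistic, the bin probability is the difference of the logistic c.d.f. at the two bin edges. A hedged stdlib sketch for a single pixel, with bin_size = 1/256 (the default shown in the signature above):

```python
import math


def logistic_cdf(x, loc, scale):
    # c.d.f. of the logistic distribution: sigmoid((x - loc) / scale)
    return 1.0 / (1.0 + math.exp(-(x - loc) / scale))


def nll_pixel_logistic256(x, loc, scale, bin_size=1.0 / 256):
    """Negative log-likelihood of one pixel intensity x in [0, 1]."""
    prob = logistic_cdf(x + bin_size, loc, scale) - logistic_cdf(x, loc, scale)
    return -math.log(max(prob, 1e-12))  # clamp to avoid log(0)


# a pixel near the predicted mean is cheap, a far-away one is expensive
print(nll_pixel_logistic256(0.5, loc=0.5, scale=0.1)
      < nll_pixel_logistic256(0.9, loc=0.5, scale=0.1))  # True
```

The actual class computes this per pixel and then reduces over the channel/height/width dimensions given by `reduce_dims`.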

domainlab.compos.vae.compos.encoder module

PyTorch image format: (i_channel, i_h, i_w). Location-scale encoders; SoftPlus for the scale.

class domainlab.compos.vae.compos.encoder.LSEncoderConvBnReluPool(z_dim: int, i_channel, i_h, i_w, conv_stride)[source]

Bases: Module

Location-scale encoder with convolution, batch normalization, ReLU and pooling; SoftPlus for the scale.

forward(img)[source]

:param img: input image

class domainlab.compos.vae.compos.encoder.LSEncoderLinear(z_dim, dim_input)[source]

Bases: Module

Location-scale encoder with DenseNet as feature extractor; SoftPlus for the scale.

forward(hidden)[source]

:param hidden: hidden representation
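Both location-scale encoders produce a distribution that downstream code samples from via the reparameterization trick. A stdlib sketch of that sampling step (the loc/scale vectors stand in for the encoder outputs; this is an illustration, not DomainLab's code):

```python
import random


def sample_location_scale(loc_vec, scale_vec, seed=0):
    """Reparameterised sample from a diagonal Gaussian: z = loc + scale * eps."""
    rng = random.Random(seed)
    return [loc + scale * rng.gauss(0.0, 1.0)
            for loc, scale in zip(loc_vec, scale_vec)]


# with all scales at zero the sample collapses onto the location
print(sample_location_scale([0.5, -1.0], [0.0, 0.0]))  # [0.5, -1.0]
```

Writing the sample as loc + scale * eps keeps it differentiable with respect to loc and scale, which is what makes VAE training by backpropagation possible.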

domainlab.compos.vae.compos.encoder_dirichlet module

class domainlab.compos.vae.compos.encoder_dirichlet.EncoderH2Dirichlet(dim_topic, device)[source]

Bases: Module

Map a hidden representation to a Dirichlet distribution.

forward(hidden)[source]
Parameters:

hidden
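One common way to turn a hidden vector into a Dirichlet distribution (an assumption here, not necessarily DomainLab's exact parameterization) is to map it to positive concentration parameters and sample via normalized Gamma draws:

```python
import math
import random


def hidden_to_dirichlet_sample(hidden, seed=0):
    """Map a hidden vector to Dirichlet concentrations (via exp) and sample."""
    rng = random.Random(seed)
    alphas = [math.exp(h) for h in hidden]          # strictly positive concentrations
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]              # topic proportions, sum to 1


topic = hidden_to_dirichlet_sample([0.1, -0.3, 0.7])
print(abs(sum(topic) - 1.0) < 1e-9)  # True: a point on the probability simplex
```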

domainlab.compos.vae.compos.encoder_domain_topic module

class domainlab.compos.vae.compos.encoder_domain_topic.EncoderImg2TopicDirZd(i_c, i_h, i_w, num_topics, device, zd_dim, args)[source]

Bases: Module

forward(img)[source]

:param img: input image

domainlab.compos.vae.compos.encoder_domain_topic_img2topic module

class domainlab.compos.vae.compos.encoder_domain_topic_img2topic.EncoderImg2TopicDistri(isize, num_topics, device, args)[source]

Bases: Module

Image to topic distribution (not the image-to-topic hidden representation used by another path).

forward(x)[source]

Parameters:

x

domainlab.compos.vae.compos.encoder_domain_topic_img_topic2zd module

class domainlab.compos.vae.compos.encoder_domain_topic_img_topic2zd.EncoderSandwichTopicImg2Zd(zd_dim, isize, num_topics, img_h_dim, args)[source]

Bases: Module

sandwich encoder: (img, s)->zd

forward(img, vec_topic)[source]

Parameters:
  • img

  • vec_topic

domainlab.compos.vae.compos.encoder_xyd_parallel module

class domainlab.compos.vae.compos.encoder_xyd_parallel.XYDEncoderParallel(net_infer_zd, net_infer_zx, net_infer_zy)[source]

Bases: Module

Calculate the zx, zy and zd variables independently (in parallel, no ordering): x->zx, x->zy, x->zd.

forward(img)[source]

Encode an image into three latent variables separately (in parallel). :param img: input image :return: q_zd, zd_q, q_zx, zx_q, q_zy, zy_q
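The parallel structure means the three posteriors share no ordering: each sub-network sees only the image. A schematic sketch (names and toy sub-encoders are hypothetical, not the library's networks):

```python
def encode_parallel(img, net_zd, net_zx, net_zy):
    """x->zd, x->zx, x->zy computed independently from the same input."""
    return net_zd(img), net_zx(img), net_zy(img)


# toy sub-encoders: each just transforms the input differently
zd, zx, zy = encode_parallel(2.0,
                             net_zd=lambda x: x + 1,
                             net_zx=lambda x: x * 2,
                             net_zy=lambda x: x - 1)
print((zd, zx, zy))  # (3.0, 4.0, 1.0)
```

Because no latent feeds into another, the three encoders can be trained and even swapped independently, which is what the Alex/ConvBnReluPool/Extern/User subclasses below exploit.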

infer_zy_loc(tensor)[source]

Used by the VAE model to predict the class label.

class domainlab.compos.vae.compos.encoder_xyd_parallel.XYDEncoderParallelAlex(zd_dim, zx_dim, zy_dim, i_c, i_h, i_w, args, conv_stride=1)[source]

Bases: XYDEncoderParallel

This class only reimplements the constructor of the parent class; at the end of its constructor, the parent class constructor is called.

class domainlab.compos.vae.compos.encoder_xyd_parallel.XYDEncoderParallelConvBnReluPool(zd_dim, zx_dim, zy_dim, i_c, i_h, i_w, conv_stride=1)[source]

Bases: XYDEncoderParallel

This class only reimplements the constructor of the parent class.

class domainlab.compos.vae.compos.encoder_xyd_parallel.XYDEncoderParallelExtern(zd_dim, zx_dim, zy_dim, args, i_c, i_h, i_w, conv_stride=1)[source]

Bases: XYDEncoderParallel

This class only reimplements the constructor of the parent class; at the end of its constructor, the parent class constructor is called.

class domainlab.compos.vae.compos.encoder_xyd_parallel.XYDEncoderParallelUser(net_class_d, net_x, net_class_y)[source]

Bases: XYDEncoderParallel

This class only reimplements the constructor of the parent class.

domainlab.compos.vae.compos.encoder_xydt_elevator module

class domainlab.compos.vae.compos.encoder_xydt_elevator.XYDTEncoderArg(device, topic_dim, zd_dim, zx_dim, zy_dim, i_c, i_h, i_w, args)[source]

Bases: XYDTEncoderElevator

This class only reimplements the constructor of the parent class.

class domainlab.compos.vae.compos.encoder_xydt_elevator.XYDTEncoderElevator(net_infer_zd_topic, net_infer_zx, net_infer_zy)[source]

Bases: Module

x->zx, x->zy, x->s, (x,s)->zd

forward(img)[source]

Encode an image into the latent variables, with the topic inferred first. :param img: input image :return: q_zd, zd_q, q_zx, zx_q, q_zy, zy_q
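Unlike the parallel encoder above, the "elevator" chains stages: the topic s is inferred first and then conditions zd together with the image, while zx and zy still come directly from x. Schematically (hypothetical names and toy networks):

```python
def encode_elevator(img, net_topic, net_zd_given_topic, net_zx, net_zy):
    """x->s first, then (x, s)->zd; zx and zy still come directly from x."""
    topic = net_topic(img)
    zd = net_zd_given_topic(img, topic)
    return topic, zd, net_zx(img), net_zy(img)


topic, zd, zx, zy = encode_elevator(
    2.0,
    net_topic=lambda x: x / 2,              # s depends only on the image
    net_zd_given_topic=lambda x, s: x + s,  # zd depends on both image and topic
    net_zx=lambda x: x * 2,
    net_zy=lambda x: x - 1,
)
print((topic, zd))  # (1.0, 3.0)
```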

infer_zy_loc(tensor)[source]

Used by the VAE model to predict the class label.

domainlab.compos.vae.compos.encoder_zy module

class domainlab.compos.vae.compos.encoder_zy.EncoderConnectLastFeatLayer2Z(z_dim, flag_pretrain, i_c, i_h, i_w, args, arg_name, arg_path_name)[source]

Bases: Module

Connect the last layer of a feature-extraction neural network to the latent representation. This class should be agnostic to where the network is fetched from.

forward(x)[source]
Parameters:

x

Module contents