chop.tools#

chop.tools.check_dependency#

chop.tools.check_dependency.check_deps_tensorRT_pass(silent: bool = True)[source]#
chop.tools.check_dependency.find_missing_dependencies(pass_name: str)[source]#
chop.tools.check_dependency.check_dependencies(pass_name: str, silent: bool = True)[source]#

chop.tools.checkpoint_load#

chop.tools.checkpoint_load.load_lightning_ckpt_to_unwrapped_model(checkpoint: str, model: Module)[source]#

Load a PyTorch Lightning checkpoint to a PyTorch model.

chop.tools.checkpoint_load.load_unwrapped_ckpt(checkpoint: str, model: Module)[source]#

Load a PyTorch state dict, or a checkpoint containing a state dict, into a PyTorch model.

chop.tools.checkpoint_load.load_graph_module_ckpt(checkpoint: str, weights_only: bool = False)[source]#

Load a serialized graph module.

chop.tools.checkpoint_load.load_model(load_name: str, load_type: str = 'mz', model: Module = None) Module | GraphModule[source]#

Load a pytorch/lightning/mase checkpoint into a model.

Parameters:
  • load_name (str) – path to the checkpoint

  • load_type (str, optional) – checkpoint type, must be one of [‘pt’, ‘pl’, ‘mz’], representing pytorch/lightning/mase. Defaults to “auto”, inferred from the extension.

  • model (torch.nn.Module, optional) – Model candidate to load the checkpoint into. Note that an ‘mz’ checkpoint loads the model as well as the state dict, and thus does not need this arg. Defaults to None.

Raises:

ValueError – Unknown extension for ‘load_type’.

Returns:

the model with the checkpoint loaded

Return type:

nn.Module/fx.GraphModule
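
A minimal usage sketch with hypothetical checkpoint paths, assuming model is an already-constructed torch.nn.Module; only the call pattern follows the documented signature:

>>> from chop.tools.checkpoint_load import load_model
>>> # hypothetical path: load a plain PyTorch checkpoint into an existing model
>>> model = load_model("ckpts/best.pt", load_type="pt", model=model)
>>> # an 'mz' (mase) checkpoint serializes the graph module itself,
>>> # so no model candidate is required
>>> graph_module = load_model("ckpts/best.mz", load_type="mz")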

chop.tools.config_load#

chop.tools.config_load.convert_str_na_to_none(d)[source]#

Since toml does not support None, we use “NA” to represent None.

chop.tools.config_load.convert_none_to_str_na(d)[source]#

Since toml does not support None, we use “NA” to represent None; otherwise, keys whose value is None would be missing from the toml file.

chop.tools.config_load.load_config(config_path)[source]#

Load from a toml config file and convert “NA” to None.

chop.tools.config_load.save_config(config, config_path)[source]#

Convert None to “NA” and save to a toml config file.
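
A minimal round-trip sketch with a hypothetical file path, showing the NA/None conversion:

>>> from chop.tools.config_load import load_config, save_config
>>> config = {"seed": 42, "scheduler": None}
>>> save_config(config, "run.toml")   # "scheduler" is written as the string "NA"
>>> loaded = load_config("run.toml")  # ...and converted back to None on load
>>> loaded["scheduler"] is None
True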

chop.tools.config_load.post_parse_load_config(args, defaults)[source]#

Load and merge arguments from a toml configuration file. If a configuration key matches the “dest” value of an existing CLI argument, precedence determines which value is used (default < configuration < manual overrides). The resulting arguments are then visualised in a table.

chop.tools.get_input#

class chop.tools.get_input.ModelSource(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Bases: Enum

The source of the model, must be one of the following:

  • HF: HuggingFace

  • MANUAL: manually implemented

  • PATCHED: patched HuggingFace

  • TOY: toy model for testing and debugging

  • PHYSICAL: model that performs classification using physical data point vectors

  • NERF: model that estimates the neural radiance field (NeRF) of a 3D scene

HF_TRANSFORMERS = 'hf_transformers'#
MANUAL = 'manual'#
PATCHED = 'patched'#
TOY = 'toy'#
TORCHVISION = 'torchvision'#
VISION_OTHERS = 'vision_others'#
PHYSICAL = 'physical'#
NERF = 'nerf'#
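
As a standard Enum, members can be looked up by value, e.g. when parsing a model-source string from a config:

>>> from chop.tools.get_input import ModelSource
>>> ModelSource("hf_transformers") is ModelSource.HF_TRANSFORMERS
True
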
chop.tools.get_input.get_cf_args(model_info, task: str, model)[source]#

Get concrete forward args for freezing dynamic control flow in the forward pass.

chop.tools.get_input.get_dummy_input(model_info, data_module, task: str, device: str = 'meta') dict[source]#

Create a single dummy input for a model. The dummy input is a single sample from the training set.

Parameters:
  • data_module (MaseDataModule) – a LightningDataModule instance (see machop/chop/dataset/__init__.py). Make sure the datamodule is prepared and set up.

  • task (str) – task name, one of [“cls”, “classification”, “lm”, “language_modeling”, “translation”, “tran”]

  • device (str, optional) – device on which the dummy input is created. Defaults to “meta”.

Returns:

a dummy input dict which can be passed to the wrapped lightning model’s forward method, like model(**dummy_input)

Return type:

dict
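
A minimal usage sketch, assuming model_info and a prepared, set-up MaseDataModule named data_module are already constructed elsewhere:

>>> from chop.tools.get_input import get_dummy_input
>>> dummy_input = get_dummy_input(model_info, data_module, task="cls", device="meta")
>>> _ = model(**dummy_input)  # the dict unpacks directly into forward()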

class chop.tools.get_input.InputGenerator(model_info, data_module, task: str, which_dataloader: Literal['train', 'val', 'test'], max_batches: int = None)[source]#

Bases: object

__init__(model_info, data_module, task: str, which_dataloader: Literal['train', 'val', 'test'], max_batches: int = None) None[source]#

Input generator for feeding batches to models. This is used for software passes.

Parameters:
  • data_module (MaseDataModule) – a MaseDataModule instance (see machop/chop/dataset/data_module.py). Make sure the datamodule is prepared and set up.

  • max_batches (int, optional) – Maximum number of batches to generate. Defaults to None, which stops only when the last batch in the dataloader is reached.

Returns:

a batch dict which can be passed to the wrapped lightning model’s forward method, like model(**batch)

Return type:

(dict)
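
A minimal usage sketch under the same assumptions as above (model_info and a prepared data_module); iterating the generator is assumed to yield one batch dict at a time:

>>> from chop.tools.get_input import InputGenerator
>>> input_gen = InputGenerator(
...     model_info, data_module, task="cls", which_dataloader="train", max_batches=4
... )
>>> for batch in input_gen:
...     _ = model(**batch)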

chop.tools.logger#

chop.tools.logger.set_logging_verbosity(level: str = 'info')[source]#
chop.tools.logger.get_logger(name: str)[source]#
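
A minimal sketch; the accepted level strings are assumed to be the usual logging names (“debug”, “info”, “warning”, …), with “info” as the documented default:

>>> from chop.tools.logger import get_logger, set_logging_verbosity
>>> logger = get_logger("my_pass")
>>> set_logging_verbosity("debug")
>>> logger.debug("now visible in the log output")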

chop.tools.onnx_operators#

chop.tools.onnx_operators.onnx_gemm(A, B, C=None, alpha=1.0, beta=1.0, transA=False, transB=False)[source]#
chop.tools.onnx_operators.onnx_slice(data, starts, ends, axes=None, steps=None)[source]#
chop.tools.onnx_operators.onnx_squeeze(input, dim)[source]#
chop.tools.onnx_operators.onnx_unsqueeze(input, dim)[source]#
chop.tools.onnx_operators.onnx_gather(input, dim, index)[source]#

Gather operator with support for broadcasting. See pytorch/pytorch#9407

Parameters:
  • input (torch.Tensor) – the source tensor

  • dim (int) – the axis along which to gather

  • index (torch.Tensor) – the indices of elements to gather; may have lower rank than input and is broadcast against it

Returns:

the gathered tensor

Return type:

torch.Tensor
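
To illustrate what broadcasting buys here (the exact rule is defined in the source): plain torch.gather requires the index tensor to have the same number of dimensions as the input, so a lower-rank index must first be expanded. A sketch of the equivalent expanded call:

>>> import torch
>>> x = torch.arange(12).reshape(3, 4)
>>> idx = torch.tensor([0, 2])                          # 1-D index, lower rank than x
>>> torch.gather(x, 1, idx.unsqueeze(0).expand(3, -1))  # broadcast, then gather
tensor([[ 0,  2],
        [ 4,  6],
        [ 8, 10]])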

chop.tools.onnx_operators.onnx_shape(input)[source]#
chop.tools.onnx_operators.onnx_reshape(input, shape)[source]#
chop.tools.onnx_operators.onnx_identity(input)[source]#
chop.tools.onnx_operators.onnx_expand(input, size)[source]#
chop.tools.onnx_operators.onnx_where(condition, input, other)[source]#
chop.tools.onnx_operators.onnx_full(size, fill_value)[source]#
chop.tools.onnx_operators.onnx_min(*args, **kwargs)[source]#
chop.tools.onnx_operators.onnx_permute(input, dims)[source]#
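
For reference, onnx_gemm presumably mirrors the ONNX Gemm operator, which computes Y = alpha * A' @ B' + beta * C, where A' and B' are optionally transposed. A minimal sketch of those semantics:

>>> import torch
>>> def gemm_reference(A, B, C=None, alpha=1.0, beta=1.0, transA=False, transB=False):
...     A = A.T if transA else A  # optional transposes per the Gemm spec
...     B = B.T if transB else B
...     Y = alpha * (A @ B)
...     return Y + beta * C if C is not None else Y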

chop.tools.registry#

chop.tools.utils#

chop.tools.utils.is_tensor(x)[source]#
chop.tools.utils.to_numpy(x)[source]#
chop.tools.utils.to_numpy_if_tensor(x)[source]#
chop.tools.utils.to_tensor(x)[source]#
chop.tools.utils.to_tensor_if_numpy(x)[source]#
chop.tools.utils.copy_weights(src_weight: Tensor, tgt_weight: Tensor)[source]#
chop.tools.utils.get_checkpoint_file(checkpoint_dir)[source]#
chop.tools.utils.execute_cli(cmd, log_output: bool = True, log_file=None, cwd='.')[source]#
chop.tools.utils.get_factors(n)[source]#
chop.tools.utils.generate_truth_table(k: int, tables_count: int, device: None) Tensor[source]#

This function generates truth tables with a size of k * (2**k) * tables_count.

Parameters:
  • k (int) – truth table power

  • tables_count (int) – number of truth table repetitions

  • device (str) – target device of the result

Returns:

a 2-D torch tensor with k * tables_count rows and 2**k columns

Return type:

torch.Tensor
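
A sketch of one way to build a table with the documented shape (k * tables_count rows, 2**k columns); the exact value convention (e.g. 0/1 vs. ±1) and column order are defined in the source:

>>> import itertools, torch
>>> def truth_table_sketch(k, tables_count):
...     cols = list(itertools.product([0.0, 1.0], repeat=k))  # all 2**k input combos
...     base = torch.tensor(cols).T                           # shape (k, 2**k)
...     return base.repeat(tables_count, 1)                   # (k*tables_count, 2**k)
>>> truth_table_sketch(2, 3).shape
torch.Size([6, 4])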

chop.tools.utils.init_LinearLUT_weight(levels, k, original_pruning_mask, original_weight, in_features, out_features, new_module)[source]#
chop.tools.utils.init_Conv2dLUT_weight(levels, k, original_pruning_mask, original_weight, out_channels, in_channels, kernel_size, new_module)[source]#
chop.tools.utils.nested_dict_replacer(compound_dict, fn)[source]#
chop.tools.utils.parse_accelerator(accelerator: str)[source]#
chop.tools.utils.set_excepthook()[source]#
chop.tools.utils.deepsetattr(obj, attr, value)[source]#

Recurses through an attribute chain to set the ultimate value.

chop.tools.utils.deepgetattr(obj, attr, default=None)[source]#

Recurses through an attribute chain to get the ultimate value.
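
A minimal sketch, assuming the attribute chain is a dot-separated string (the usual convention for such helpers):

>>> import torch.nn as nn
>>> from chop.tools.utils import deepgetattr, deepsetattr
>>> net = nn.Module()
>>> net.encoder = nn.Module()
>>> net.encoder.proj = nn.Linear(4, 4)
>>> deepgetattr(net, "encoder.proj") is net.encoder.proj
True
>>> deepsetattr(net, "encoder.proj", nn.Linear(4, 8))  # replaces the nested submodule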