chop.nn.quantized.functional#
chop.nn.quantized.functional.add#
chop.nn.quantized.functional.gelu#
chop.nn.quantized.functional.matmul#
- chop.nn.quantized.functional.matmul.generic_matmul_integer(x, y, config, style='matmul', out_config=None, floor=False)[source]#
- chop.nn.quantized.functional.matmul.generic_matmul_minifloat_denorm(x, y, config, style='matmul')[source]#
- chop.nn.quantized.functional.matmul.generic_matmul_minifloat_ieee(x, y, config, style='matmul')[source]#
- chop.nn.quantized.functional.matmul.generic_matmul_block_minifloat(x, y, config, style='matmul')[source]#
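A minimal usage sketch for the integer variant is shown below. The config keys used here (data_in_width, data_in_frac_width, weight_width, weight_frac_width) are assumptions based on chop's usual integer-quantizer naming, not confirmed on this page; check the matmul source for the exact keys it expects.

```python
import torch
from chop.nn.quantized.functional.matmul import generic_matmul_integer

x = torch.randn(8, 16)
y = torch.randn(16, 32)

# Assumed config keys, following chop's common integer-quantizer naming;
# verify against the matmul source before relying on them.
config = {
    "data_in_width": 8,
    "data_in_frac_width": 4,
    "weight_width": 8,
    "weight_frac_width": 4,
}

out = generic_matmul_integer(x, y, config, style="matmul")
print(out.shape)  # torch.Size([8, 32])
```

The minifloat variants (denorm, IEEE, block) take the same positional arguments and differ only in the quantizer applied to x and y, so they can be swapped in with a config matching the corresponding quantizer.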
chop.nn.quantized.functional.mult#
chop.nn.quantized.functional.relu#
chop.nn.quantized.functional.selu#
chop.nn.quantized.functional.softermax#
- chop.nn.quantized.functional.softermax.fixed_softermax(input: Tensor, q_config: dict = None, out_q_config: dict = None, dim: int = 0) → Tensor[source]#
Fixed-point softermax implementation, as described in the paper “Softermax: Hardware/Software Co-Design of an Efficient Softmax for Transformers” (https://arxiv.org/abs/2103.09301).
- Parameters:
input (Tensor) – Input tensor
q_config (dict) – Fixed-point quantization config for the input (optional)
out_q_config (dict) – Fixed-point quantization config for the output (optional)
dim (int) – Dimension along which softermax is applied
- Returns:
Output tensor
- Return type:
Tensor
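A hedged usage sketch of fixed_softermax on attention-style logits follows. The q_config/out_q_config keys shown ("width"/"frac_width") are assumptions about the fixed-point quantizer config and should be verified against the softermax source.

```python
import torch
from chop.nn.quantized.functional.softermax import fixed_softermax

scores = torch.randn(2, 4, 4)  # e.g. attention logits

# Hypothetical fixed-point configs; the exact keys (assumed here to be
# "width" / "frac_width") must be checked against the softermax source.
q_config = {"width": 8, "frac_width": 4}
out_q_config = {"width": 8, "frac_width": 4}

probs = fixed_softermax(scores, q_config=q_config, out_q_config=out_q_config, dim=-1)
print(probs.shape)  # torch.Size([2, 4, 4])
```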
chop.nn.quantized.functional.softplus#
- chop.nn.quantized.functional.softplus.softplus_minifloat_denorm(x, inplace=False, config=None)[source]#
- chop.nn.quantized.functional.softplus.softplus_minifloat_ieee(x, inplace=False, config=None)[source]#
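A sketch of calling the IEEE-style minifloat softplus. The config keys (data_in_width, data_in_exponent_width, data_in_exponent_bias) are assumed from chop's typical minifloat quantizer arguments and should be confirmed in the softplus source.

```python
import torch
from chop.nn.quantized.functional.softplus import softplus_minifloat_ieee

x = torch.randn(4, 4)

# Assumed minifloat config keys, mirroring chop's usual minifloat quantizer
# arguments; confirm against the softplus source.
config = {
    "data_in_width": 8,
    "data_in_exponent_width": 4,
    "data_in_exponent_bias": 7,
}

out = softplus_minifloat_ieee(x, inplace=False, config=config)
```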
chop.nn.quantized.functional.softsign#
- chop.nn.quantized.functional.softsign.softsign_minifloat_denorm(x, inplace=False, config=None)[source]#
- chop.nn.quantized.functional.softsign.softsign_minifloat_ieee(x, inplace=False, config=None)[source]#
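The softsign variants follow the same calling pattern; a brief sketch with the same assumed minifloat config keys as the softplus example above:

```python
import torch
from chop.nn.quantized.functional.softsign import softsign_minifloat_denorm

x = torch.randn(4, 4)

# Same assumed minifloat keys as in the softplus sketch; verify in the source.
config = {
    "data_in_width": 8,
    "data_in_exponent_width": 4,
    "data_in_exponent_bias": 7,
}

out = softsign_minifloat_denorm(x, inplace=False, config=config)
```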