Tutorial 6: Mixed Precision Quantization Search with Mase and Optuna#
In this tutorial, we’ll see how Mase can be integrated with Optuna, the popular hyperparameter optimization framework, to search for a mixed-precision quantized Bert model for sequence classification on the IMDb dataset. We’ll take the Optuna-generated model and import it into Mase, then run the CompressionPipeline to prepare the model for edge deployment by quantizing and pruning its weights.
As we’ll see, running a mixed precision quantization search with Mase and Optuna involves the following steps.
Define the search space: this is a dictionary containing the range of values for each parameter at each layer in the model.
Write the model constructor: this is a function which uses Optuna utilities to sample a model from the search space - in this tutorial, by replacing each torch.nn.Linear layer with the layer type chosen for that trial.
Write the objective function: this function calls the model constructor defined in Step 2 and defines the training/evaluation setup for each search iteration.
Go! Choose an Optuna sampler, create a study and launch the search.
checkpoint = "prajjwal1/bert-tiny"
tokenizer_checkpoint = "bert-base-uncased"
dataset_name = "imdb"
Importing the model#
If you are starting from scratch, you can load the Bert checkpoint directly from HuggingFace.
from transformers import AutoModel
base_model = AutoModel.from_pretrained(checkpoint)
If you have previously run the tutorial on Neural Architecture Search (NAS), run the following cell instead to import the best model obtained from that search.
from pathlib import Path
import dill
with open(f"{Path.home()}/tutorial_5_best_model.pkl", "rb") as f:
    base_model = dill.load(f)
Next, fetch the dataset using the get_tokenized_dataset utility.
from chop.tools import get_tokenized_dataset
dataset, tokenizer = get_tokenized_dataset(
    dataset=dataset_name,
    checkpoint=tokenizer_checkpoint,
    return_tokenizer=True,
)
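If you want to inspect what the utility returned before moving on, the following optional check (an assumption on our part: it expects the usual datasets.DatasetDict layout with a train split) prints the dataset summary and the fields of one tokenized example.
# Optional sanity check (assumes a standard DatasetDict with a "train" split).
print(dataset)
print(dataset["train"][0].keys())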
1. Defining the Search Space#
We’ll start by defining a search space, i.e. enumerating the possible combinations of hyperparameters that Optuna can choose from during the search. In this tutorial, rather than varying architectural parameters, we’ll let Optuna decide, for each torch.nn.Linear layer in the model, whether to keep the full-precision layer or replace it with a quantized variant such as LinearInteger.
import torch
from chop.nn.quantized.modules.linear import (
    LinearInteger,
    LinearMinifloatDenorm,
    LinearMinifloatIEEE,
    LinearLog,
    LinearBlockFP,
    LinearBlockMinifloat,
    LinearBlockLog,
    LinearBinary,
    LinearBinaryScaling,
    LinearBinaryResidualSign,
)
search_space = {
    "linear_layer_choices": [
        torch.nn.Linear,
        LinearInteger,
    ],
}
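The import above also brings in several other quantized linear variants. As a sketch of our own (not something this tutorial runs), the choices could be widened as below - note that each extra variant would need its own config branch in the construct_model function defined next, mirroring the LinearInteger case.
# Sketch only (do not run as-is): each added layer type needs a matching
# config branch in construct_model below.
# search_space["linear_layer_choices"] += [
#     LinearMinifloatIEEE,
#     LinearLog,
# ]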
2. Writing a Model Constructor#
We define the following function, which will get called in each iteration of the search process. The function is passed the trial argument, an Optuna object that comes with many functionalities - see the Trial documentation for more details. Here, we use the trial.suggest_categorical function, which asks the chosen sampler to pick one of the layer types listed in the search space we defined in the previous cell.
from chop.tools.utils import deepsetattr
from copy import deepcopy
def construct_model(trial):
    # Fetch the model
    trial_model = deepcopy(base_model)

    # Quantize layers according to optuna suggestions
    for name, layer in trial_model.named_modules():
        if isinstance(layer, torch.nn.Linear):
            new_layer_cls = trial.suggest_categorical(
                f"{name}_type",
                search_space["linear_layer_choices"],
            )

            if new_layer_cls == torch.nn.Linear:
                continue

            kwargs = {
                "in_features": layer.in_features,
                "out_features": layer.out_features,
            }

            # If the chosen layer is integer, define the low precision config
            if new_layer_cls == LinearInteger:
                kwargs["config"] = {
                    "data_in_width": 8,
                    "data_in_frac_width": 4,
                    "weight_width": 8,
                    "weight_frac_width": 4,
                    "bias_width": 8,
                    "bias_frac_width": 4,
                }
            # elif... (other precisions)

            # Create the new layer (copy the weights)
            new_layer = new_layer_cls(**kwargs)
            new_layer.weight.data = layer.weight.data

            # Replace the layer in the model
            deepsetattr(trial_model, name, new_layer)

    return trial_model
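As an optional sanity check (a sketch of our own using Optuna's ask interface, not part of the tutorial), you can sample a single model outside of a full study and confirm that the linear layers have been swapped for their quantized counterparts.
import optuna

# Sample one trial's model to verify the layer replacement logic.
_check_study = optuna.create_study()
_check_trial = _check_study.ask()
_check_model = construct_model(_check_trial)
print(_check_model)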
3. Defining the Objective Function#
Next, we define the objective function for the search, which gets called on each trial. In each trial, we create a new model instance with the hyperparameters chosen by the sampler. We then use the get_trainer utility in Mase to run a training loop on the IMDb dataset for a number of epochs. Finally, we use evaluate to report the classification accuracy on the test split.
from chop.tools import get_trainer
import random
def objective(trial):
    # Define the model
    model = construct_model(trial)

    trainer = get_trainer(
        model=model,
        tokenized_dataset=dataset,
        tokenizer=tokenizer,
        evaluate_metric="accuracy",
        num_train_epochs=1,
    )

    trainer.train()
    eval_results = trainer.evaluate()

    trial.set_user_attr("model", model)

    return eval_results["eval_accuracy"]
4. Launching the Search#
Optuna provides a number of samplers, for example:
GridSampler: iterates through every possible combination of hyperparameters in the search space.
RandomSampler: chooses a random combination of hyperparameters in each iteration.
TPESampler: uses the Tree-structured Parzen Estimator (TPE) algorithm to choose hyperparameter values.
You can select a sampler by simply importing it from optuna.samplers, as below.
from optuna.samplers import GridSampler, RandomSampler, TPESampler
sampler = RandomSampler()
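Optionally (our suggestion, not a requirement of the tutorial), both samplers accept a seed, which makes the search reproducible; TPESampler also makes longer searches guided rather than purely random.
# Optional: fix the seed for reproducibility, with either sampler.
# sampler = RandomSampler(seed=0)
# sampler = TPESampler(seed=0)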
With all the pieces in place, we can launch the search as follows. The number of trials is set to 1 so you can go get a coffee for 10 minutes, then proceed with the tutorial. However, a single trial gives you an essentially random model - for better results, set n_trials to 100 and leave the search running overnight!
import optuna
study = optuna.create_study(
    direction="maximize",
    study_name="bert-tiny-nas-study",
    sampler=sampler,
)

study.optimize(
    objective,
    n_trials=1,
    timeout=60 * 60 * 24,
)
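Once the search finishes, the best accuracy and the corresponding trained model (stored earlier via trial.set_user_attr) can be read back from the study, for example:
# Retrieve the best result and the trained model attached to the best trial.
print(f"Best accuracy: {study.best_value}")
best_model = study.best_trial.user_attrs["model"]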