Tutorial 5: Neural Architecture Search (NAS) with Mase and Optuna#
In this tutorial, we’ll see how Mase can be integrated with Optuna, the popular hyperparameter optimization framework, to search for a Bert model optimized for sequence classification on the IMDb dataset. We’ll take the Optuna-generated model and import it into Mase, then run the CompressionPipeline to prepare the model for edge deployment by quantizing and pruning its weights.
As we’ll see, running architecture search with Mase and Optuna involves the following steps:
1. Define the search space: a dictionary containing the range of values each hyperparameter can take at each layer of the model.
2. Write the model constructor: a function that uses Optuna utilities to sample a model from the search space and constructs it using the Transformers from_config class method.
3. Write the objective function: a function that calls the model constructor defined in Step 2 and defines the training/evaluation setup for each search iteration.
4. Go! Choose an Optuna sampler, create a study and launch the search.
# Base model, tokenizer and dataset used throughout this tutorial
checkpoint = "prajjwal1/bert-tiny"
tokenizer_checkpoint = "bert-base-uncased"
dataset_name = "imdb"
First, fetch the dataset using the get_tokenized_dataset utility.
from chop.tools import get_tokenized_dataset

dataset, tokenizer = get_tokenized_dataset(
    dataset=dataset_name,
    checkpoint=tokenizer_checkpoint,
    return_tokenizer=True,
)
/Users/yz10513/anaconda3/envs/mase/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
INFO Tokenizing dataset imdb with AutoTokenizer for bert-base-uncased.
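If you want to double-check what was returned (a quick optional sanity check, not part of the original tutorial flow), the returned objects behave like standard HuggingFace datasets and tokenizers:

# Optional sanity check (assumes standard HuggingFace objects):
# print the available splits and tokenize a sample sentence.
print(dataset)
print(tokenizer.tokenize("This movie was surprisingly good!"))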
1. Defining the Search Space#
We’ll start by defining a search space, i.e. enumerating the possible combinations of hyperparameters that Optuna can choose during search. We’ll explore the following range of values for the model’s hidden size, intermediate size, number of layers and number of heads, inspired by the NAS-BERT paper.
import torch.nn as nn
from chop.nn.modules import Identity

search_space = {
    "num_layers": [2, 4, 8],
    "num_heads": [2, 4, 8, 16],
    "hidden_size": [128, 192, 256, 384, 512],
    "intermediate_size": [512, 768, 1024, 1536, 2048],
    "linear_layer_choices": [
        nn.Linear,
        Identity,
    ],
}
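To get a feel for the scale of the problem, we can count the architectural combinations alone (a quick illustrative calculation, not part of the original tutorial); each eligible linear layer then doubles this number again through the Linear/Identity choice.

import math

# 3 * 4 * 5 * 5 = 300 architectural combinations before any layer-type choices
n_arch_combinations = math.prod(
    len(v) for k, v in search_space.items() if k != "linear_layer_choices"
)
print(n_arch_combinations)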
2. Writing a Model Constructor#
We define the following function, which will be called in each iteration of the search process. It receives a trial argument, an Optuna object with many useful methods - see the Trial documentation for more details. Here, we use trial.suggest_int and trial.suggest_categorical to let the chosen sampler pick parameter values and layer types. The suggested integer is an index into the search space list for each parameter, as defined in the previous cell.
from transformers import AutoConfig, AutoModelForSequenceClassification
from chop.tools.utils import deepsetattr


def construct_model(trial):
    config = AutoConfig.from_pretrained(checkpoint)

    # Update the parameters in the config
    for param in [
        "num_layers",
        "num_heads",
        "hidden_size",
        "intermediate_size",
    ]:
        chosen_idx = trial.suggest_int(param, 0, len(search_space[param]) - 1)
        setattr(config, param, search_space[param][chosen_idx])

    trial_model = AutoModelForSequenceClassification.from_config(config)

    # Only square linear layers (in_features == out_features) can be swapped
    # for an Identity without changing tensor shapes.
    for name, layer in trial_model.named_modules():
        if isinstance(layer, nn.Linear) and layer.in_features == layer.out_features:
            new_layer_cls = trial.suggest_categorical(
                f"{name}_type",
                search_space["linear_layer_choices"],
            )

            if new_layer_cls == nn.Linear:
                continue
            elif new_layer_cls == Identity:
                new_layer = Identity()
                deepsetattr(trial_model, name, new_layer)
            else:
                raise ValueError(f"Unknown layer type: {new_layer_cls}")

    return trial_model
3. Defining the Objective Function#
Next, we define the objective function for the search, which gets called on each trial. In each trial, we create a new model instance with hyperparameters chosen by the configured sampler. We then use the get_trainer utility in Mase to run a training loop on the IMDb dataset for a number of epochs. Finally, we call the trainer's evaluate method to report the classification accuracy on the test split.
from chop.tools import get_trainer


def objective(trial):

    # Define the model
    model = construct_model(trial)

    trainer = get_trainer(
        model=model,
        tokenized_dataset=dataset,
        tokenizer=tokenizer,
        evaluate_metric="accuracy",
        num_train_epochs=1,
    )

    trainer.train()
    eval_results = trainer.evaluate()

    # Set the model as an attribute so we can fetch it later
    trial.set_user_attr("model", model)

    return eval_results["eval_accuracy"]
4. Launching the Search#
Optuna provides a number of samplers, for example:

- GridSampler: iterates through every possible combination of hyperparameters in the search space.
- RandomSampler: chooses a random combination of hyperparameters in each iteration.
- TPESampler: uses the Tree-structured Parzen Estimator (TPE) algorithm to choose hyperparameter values.

You can select the sampler by simply importing it from optuna.samplers as below.
from optuna.samplers import GridSampler, RandomSampler, TPESampler
sampler = RandomSampler()
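If you want runs to be reproducible, Optuna's samplers accept a seed argument; the snippet below is an optional tweak not used in the recorded run. Note also that GridSampler must be constructed with an explicit grid covering every parameter name suggested in the objective (including the per-layer layer-type choices), which is why RandomSampler or TPESampler are more convenient here.

# Optional: a seeded sampler gives reproducible trial sampling across runs.
# (Illustrative only - the recorded run below uses the unseeded sampler above.)
seeded_sampler = RandomSampler(seed=0)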
With all the pieces in place, we can launch the search as follows. The number of trials is set to 1 so you can grab a coffee for 10 minutes and then carry on with the tutorial - the result will essentially be a random model. For better results, set n_trials to 100 and leave it running overnight!
import optuna

study = optuna.create_study(
    direction="maximize",
    study_name="bert-tiny-nas-study",
    sampler=sampler,
)

study.optimize(
    objective,
    n_trials=1,
    timeout=60 * 60 * 24,
)
[I 2024-12-01 22:51:50,104] A new study created in memory with name: bert-tiny-nas-study
/Users/yz10513/anaconda3/envs/mase/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
/Users/yz10513/anaconda3/envs/mase/lib/python3.11/site-packages/optuna/distributions.py:524: UserWarning: Choices for a categorical distribution should be a tuple of None, bool, int, float and str for persistent storage but contains <class 'torch.nn.modules.linear.Linear'> which is of type type.
warnings.warn(message)
/Users/yz10513/anaconda3/envs/mase/lib/python3.11/site-packages/optuna/distributions.py:524: UserWarning: Choices for a categorical distribution should be a tuple of None, bool, int, float and str for persistent storage but contains <class 'chop.nn.modules.identity.Identity'> which is of type type.
warnings.warn(message)
[2024-12-01 22:51:52,032] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to mps (auto detect)
W1201 22:51:52.945000 8580182592 torch/distributed/elastic/multiprocessing/redirects.py:27] NOTE: Redirects are currently not supported in Windows or MacOs.
16%|█▌ | 500/3125 [00:44<03:42, 11.81it/s]
{'loss': 0.668, 'grad_norm': 4.309191703796387, 'learning_rate': 4.2e-05, 'epoch': 0.16}
32%|███▏ | 1000/3125 [01:28<05:01, 7.06it/s]
{'loss': 0.5047, 'grad_norm': 23.95120620727539, 'learning_rate': 3.4000000000000007e-05, 'epoch': 0.32}
48%|████▊ | 1500/3125 [02:13<01:41, 16.07it/s]
{'loss': 0.4092, 'grad_norm': 12.666841506958008, 'learning_rate': 2.6000000000000002e-05, 'epoch': 0.48}
64%|██████▍ | 2000/3125 [02:52<01:49, 10.28it/s]
{'loss': 0.3682, 'grad_norm': 21.01616859436035, 'learning_rate': 1.8e-05, 'epoch': 0.64}
80%|████████ | 2500/3125 [03:25<00:38, 16.24it/s]
{'loss': 0.3396, 'grad_norm': 19.34454917907715, 'learning_rate': 1e-05, 'epoch': 0.8}
96%|█████████▌| 3000/3125 [03:59<00:08, 14.06it/s]
{'loss': 0.3538, 'grad_norm': 21.48626708984375, 'learning_rate': 2.0000000000000003e-06, 'epoch': 0.96}
100%|██████████| 3125/3125 [04:07<00:00, 12.60it/s]
{'train_runtime': 247.935, 'train_samples_per_second': 100.833, 'train_steps_per_second': 12.604, 'train_loss': 0.4358914270019531, 'epoch': 1.0}
100%|██████████| 3125/3125 [03:58<00:00, 13.08it/s]
[I 2024-12-01 23:00:00,199] Trial 0 finished with value: 0.86256 and parameters: {'num_layers': 2, 'num_heads': 1, 'hidden_size': 3, 'intermediate_size': 3, 'bert.encoder.layer.0.attention.self.query_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.encoder.layer.0.attention.self.key_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.encoder.layer.0.attention.self.value_type': <class 'chop.nn.modules.identity.Identity'>, 'bert.encoder.layer.0.attention.output.dense_type': <class 'chop.nn.modules.identity.Identity'>, 'bert.encoder.layer.1.attention.self.query_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.encoder.layer.1.attention.self.key_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.encoder.layer.1.attention.self.value_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.encoder.layer.1.attention.output.dense_type': <class 'torch.nn.modules.linear.Linear'>, 'bert.pooler.dense_type': <class 'torch.nn.modules.linear.Linear'>}. Best is trial 0 with value: 0.86256.
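Once the search has finished, the study object holds everything Optuna recorded. For example (an optional inspection step, not shown in the original run):

# Best accuracy and the sampled hyperparameters of the best trial
print(f"Best accuracy: {study.best_value:.4f}")
print(f"Best parameters: {study.best_trial.params}")

# Per-trial summary as a pandas DataFrame (requires pandas)
print(study.trials_dataframe().head())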
Fetch the model associated with the best trial as follows, and export it to be used in future tutorials. In Tutorial 6, we’ll see how to run mixed-precision quantization search on top of the model we’ve just found through NAS, to find the optimal quantization mapping.
from pathlib import Path
import dill

model = study.best_trial.user_attrs["model"].cpu()

with open(f"{Path.home()}/tutorial_5_best_model.pkl", "wb") as f:
    dill.dump(model, f)
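The exported pickle can later be loaded back with dill, for example (illustrative only):

# Reload the searched model in a later session
with open(f"{Path.home()}/tutorial_5_best_model.pkl", "rb") as f:
    model = dill.load(f)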
Deploying the Optimized Model with CompressionPipeline#
Now, we can use the CompressionPipeline in Mase to apply uniform quantization and pruning to the searched model.
from chop.pipelines import CompressionPipeline
from chop import MaseGraph

mg = MaseGraph(model)
pipe = CompressionPipeline()

quantization_config = {
    "by": "type",
    "default": {
        "config": {
            "name": None,
        }
    },
    "linear": {
        "config": {
            "name": "integer",
            # data
            "data_in_width": 8,
            "data_in_frac_width": 4,
            # weight
            "weight_width": 8,
            "weight_frac_width": 4,
            # bias
            "bias_width": 8,
            "bias_frac_width": 4,
        }
    },
}
pruning_config = {
    "weight": {
        "sparsity": 0.5,
        "method": "l1-norm",
        "scope": "local",
    },
    "activation": {
        "sparsity": 0.5,
        "method": "l1-norm",
        "scope": "local",
    },
}

mg, _ = pipe(
    mg,
    pass_args={
        "quantize_transform_pass": quantization_config,
        "prune_transform_pass": pruning_config,
    },
)
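To build some intuition for the quantization settings above: a width of 8 with a fractional width of 4 corresponds, assuming the usual signed fixed-point interpretation, to a Q4.4 format, as the quick calculation below illustrates (a sketch, not part of the original tutorial).

# Signed Q4.4 fixed point: 8 bits total, 4 of them fractional
width, frac_width = 8, 4
step = 2 ** -frac_width                              # resolution: 0.0625
max_val = (2 ** (width - 1) - 1) / 2 ** frac_width   # largest value: 7.9375
min_val = -(2 ** (width - 1)) / 2 ** frac_width      # smallest value: -8.0
print(step, min_val, max_val)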
Finally, export the MaseGraph of the compressed model so it can be used in future tutorials for hardware generation and distributed deployment.
mg.export(f"{Path.home()}/tutorial_5_nas_compressed")