
【Learning Frameworks from Official Examples: Tensorflow/Keras】Build the Masked Language Model BERT End-to-End, and Fine-Tune It

Keras official example link

Tensorflow official example link

Paddle official example link

Pytorch official example link

Note: this series is only meant to help you quickly understand and learn these frameworks so you can use them independently for deep-learning research; please supplement the theory on your own. The classic official examples for every framework are very well written and well worth studying. It is fair to say that once you fully understand an official example, modifying it will solve most common related tasks.

Abstract: 【Learning Frameworks from Official Examples: Keras】Build the masked language model BERT end-to-end, implement masked language modeling (MLM), and fine-tune the pretrained "BERT" on the IMDB dataset for sentiment classification.

Contents

1 Introduction
2 Setup
3 Load the data
4 Dataset preparation
5 Create BERT model (Pretraining Model) for masked language modeling
6 Train and Save
7 Fine-tune a sentiment classification model
Create an end-to-end model and evaluate it

1 Introduction

Masked Language Modeling (MLM) is a fill-in-the-blank task in which the model uses the context words surrounding a mask token to predict what the masked word should be. For an input containing one or more mask tokens, the model generates the most likely replacement for each.

Example:

Input: "I have watched this [MASK] and it was awesome."
Output: "I have watched this movie and it was awesome."

MLM is a great self-supervised setup: it requires no human labels. A model pretrained this way can then be fine-tuned to solve many supervised NLP tasks.

In this example you will learn how to build a BERT model from scratch, train it with the masked-language-modeling objective, and finally apply it to a sentiment-classification task.

2 Setup

This example requires the experimental tf-nightly build rather than the stable tensorflow release.

Under Anaconda you can create a fresh environment and pip-install it there. If you don't plan to train the pretraining model yourself, understanding the code below is enough; there is no need to install the experimental build.

pip install tf-nightly
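If you would rather stay on a stable release, a quick check may help (a minimal sketch; as far as I know, layers.MultiHeadAttention has shipped in stable TensorFlow since 2.4, so a sufficiently recent stable build also works):

import tensorflow as tf

# This example relies on layers.MultiHeadAttention, which should be present
# in tf-nightly or in stable TensorFlow >= 2.4.
print(tf.__version__)
assert hasattr(tf.keras.layers, "MultiHeadAttention"), (
    "MultiHeadAttention not found - install tf-nightly or TensorFlow >= 2.4"
)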

Import the required libraries

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
from dataclasses import dataclass
import pandas as pd
import numpy as np
import glob
import re
import os
from tqdm import tqdm
from pprint import pprint

Define the hyperparameters

@dataclass
class Config:
    MAX_LEN = 256
    BATCH_SIZE = 32
    LR = 0.001
    VOCAB_SIZE = 30000
    EMBED_DIM = 128
    NUM_HEAD = 8  # used in bert model
    FF_DIM = 128  # used in bert model
    NUM_LAYERS = 1


config = Config()

3 Load the data

Download the IMDB dataset; it can also be fetched locally from this link:

https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz

!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz

The official example:

def get_text_list_from_files(files):
    text_list = []
    for name in files:
        with open(name) as f:
            for line in f:
                text_list.append(line)
    return text_list


def get_data_from_text_files(folder_name):
    pos_files = glob.glob("aclImdb/" + folder_name + "/pos/*.txt")
    pos_texts = get_text_list_from_files(pos_files)
    neg_files = glob.glob("aclImdb/" + folder_name + "/neg/*.txt")
    neg_texts = get_text_list_from_files(neg_files)
    df = pd.DataFrame(
        {
            "review": pos_texts + neg_texts,
            "sentiment": [0] * len(pos_texts) + [1] * len(neg_texts),
        }
    )
    df = df.sample(len(df)).reset_index(drop=True)
    return df


train_df = get_data_from_text_files("train")
test_df = get_data_from_text_files("test")

all_data = train_df.append(test_df)

If you downloaded the data locally, you can read it with this code instead:

def get_text_list_from_files(files):
    text_list = []
    files_name = os.listdir(files)
    for name in tqdm(files_name):
        with open(files + name, encoding="utf-8") as f:
            for line in f:
                text_list.append(line)
    return text_list


def get_data_from_text_files(folder_name):
    pos_files = "../input/aclImdb/{}/pos/".format(folder_name)
    pos_texts = get_text_list_from_files(pos_files)
    neg_files = "../input/aclImdb/{}/neg/".format(folder_name)
    neg_texts = get_text_list_from_files(neg_files)
    df = pd.DataFrame(
        {
            "review": pos_texts + neg_texts,
            "sentiment": [0] * len(pos_texts) + [1] * len(neg_texts),
        }
    )
    df = df.sample(len(df)).reset_index(drop=True)
    return df


train_df = get_data_from_text_files("train")
test_df = get_data_from_text_files("test")

all_data = train_df.append(test_df)

4 Dataset preparation

Use TextVectorization to convert text into integer indices, turning each string into a sequence of tokens. We define three preprocessing functions:

get_vectorize_layer: build the TextVectorization layer
encode: encode raw text into integer indices
get_masked_input_and_labels: randomly mask 15% of the tokens in each sequence

Let's look closely at the functions in the code that follows.

custom_standardization: lowercases the text, strips <br /> tags, and removes a range of punctuation characters

get_vectorize_layer: builds a vocabulary over the given texts and replaces the last entry with [mask]. Note that the first two entries of the vocabulary are reserved: "" (used for padding) and [UNK] (for out-of-vocabulary words). When editing the vocabulary you therefore slice with [2:], skipping those two entries, before calling set_vocabulary(vocab); set_vocabulary then re-inserts "" and [UNK] as the first two entries by default.

encode: maps text strings to indices in the vocabulary, padding/truncating to the specified length

get_masked_input_and_labels: builds the training data the masked language model needs. This is the most important piece, and the code rewards careful study:

First, 15% of positions are randomly selected for masking, and any position whose token id is 0, 1 or 2 (i.e. "", [UNK] and the) is forced to stay unmasked. This is the first masking stage.

Then, in the second stage, 90% of the positions selected in the first stage are set to the [MASK] token (token id = 29999), leaving 10% unchanged; of those [MASK] positions, 1/9 (i.e. 10% of all selected positions) are then overwritten with a random token drawn from [3, 29999), excluding 0, 1, 2 and 29999 ("", [UNK], the and [MASK]). The net split over the selected positions is therefore 80% [MASK], 10% random, 10% unchanged, matching the BERT recipe.

Finally, the sample weights are built: positions that were not selected get weight 0 and selected positions get weight 1, so the loss is computed only on masked positions; the labels (y_labels) are simply a copy of the original token ids.

def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
    return tf.strings.regex_replace(
        stripped_html, "[%s]" % re.escape("!#$%&'()*+,-./:;<=>?@\^_`{|}~"), ""
    )


def get_vectorize_layer(texts, vocab_size, max_seq, special_tokens=["[MASK]"]):
    """Build Text vectorization layer

    Args:
      texts (list): List of string i.e input texts
      vocab_size (int): vocab size
      max_seq (int): Maximum sequence lenght.
      special_tokens (list, optional): List of special tokens. Defaults to ['[MASK]'].

    Returns:
        layers.Layer: Return TextVectorization Keras Layer
    """
    vectorize_layer = TextVectorization(
        max_tokens=vocab_size,
        output_mode="int",
        standardize=custom_standardization,
        output_sequence_length=max_seq,
    )
    vectorize_layer.adapt(texts)

    # Insert mask token in vocabulary
    vocab = vectorize_layer.get_vocabulary()
    vocab = vocab[2 : vocab_size - len(special_tokens)] + ["[mask]"]
    vectorize_layer.set_vocabulary(vocab)
    return vectorize_layer


vectorize_layer = get_vectorize_layer(
    all_data.review.values.tolist(),
    config.VOCAB_SIZE,
    config.MAX_LEN,
    special_tokens=["[mask]"],
)

# Get mask token id for masked language model
mask_token_id = vectorize_layer(["[mask]"]).numpy()[0][0]  # 29999


def encode(texts):
    encoded_texts = vectorize_layer(texts)
    return encoded_texts.numpy()


def get_masked_input_and_labels(encoded_texts):
    # 15% BERT masking
    inp_mask = np.random.rand(*encoded_texts.shape) < 0.15
    # Do not mask special tokens
    inp_mask[encoded_texts <= 2] = False
    # Set targets to -1 by default, it means ignore
    labels = -1 * np.ones(encoded_texts.shape, dtype=int)
    # Set labels for masked tokens
    labels[inp_mask] = encoded_texts[inp_mask]

    # Prepare input
    encoded_texts_masked = np.copy(encoded_texts)
    # Set input to [MASK] which is the last token for the 90% of tokens
    # This means leaving 10% unchanged
    inp_mask_2mask = inp_mask & (np.random.rand(*encoded_texts.shape) < 0.90)
    encoded_texts_masked[inp_mask_2mask] = mask_token_id  # mask token is the last in the dict

    # Set 10% to a random token
    inp_mask_2random = inp_mask_2mask & (np.random.rand(*encoded_texts.shape) < 1 / 9)
    encoded_texts_masked[inp_mask_2random] = np.random.randint(
        3, mask_token_id, inp_mask_2random.sum()
    )

    # Prepare sample_weights to pass to .fit() method
    sample_weights = np.ones(labels.shape)
    sample_weights[labels == -1] = 0

    # y_labels would be same as encoded_texts i.e input tokens
    y_labels = np.copy(encoded_texts)

    return encoded_texts_masked, y_labels, sample_weights


# We have 25000 examples for training
x_train = encode(train_df.review.values)  # encode reviews with vectorizer
y_train = train_df.sentiment.values
train_classifier_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(1000)
    .batch(config.BATCH_SIZE)
)

# We have 25000 examples for testing
x_test = encode(test_df.review.values)
y_test = test_df.sentiment.values
test_classifier_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(
    config.BATCH_SIZE
)

# Build dataset for end to end model input (will be used at the end)
test_raw_classifier_ds = tf.data.Dataset.from_tensor_slices(
    (test_df.review.values, y_test)
).batch(config.BATCH_SIZE)

# Prepare data for masked language model
x_all_review = encode(all_data.review.values)
x_masked_train, y_masked_labels, sample_weights = get_masked_input_and_labels(
    x_all_review
)

mlm_ds = tf.data.Dataset.from_tensor_slices(
    (x_masked_train, y_masked_labels, sample_weights)
)
mlm_ds = mlm_ds.shuffle(1000).batch(config.BATCH_SIZE)
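As a quick sanity check of the masking statistics described above, the following illustrative snippet (not part of the official example) re-applies get_masked_input_and_labels to a sample and measures how the selected positions were transformed; expect roughly 80% [MASK], 10% unchanged, 10% random:

# Illustrative sanity check (not in the official example).
sample = x_all_review[:1000]
masked, _, weights = get_masked_input_and_labels(sample)

selected = weights == 1  # positions the model must predict
n_sel = selected.sum()
n_mask = np.sum(masked[selected] == mask_token_id)
n_same = np.sum(masked[selected] == sample[selected])
n_rand = n_sel - n_mask - n_same  # approximate: a random draw may collide
print("[MASK]: %.2f  unchanged: %.2f  random: %.2f"
      % (n_mask / n_sel, n_same / n_sel, n_rand / n_sel))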

5 Create BERT model (Pretraining Model) for masked language modeling

We will create a BERT-style pretraining model using the MultiHeadAttention layer. It takes token ids as input (including masked tokens) and predicts the correct ids for the masked tokens.

BERT's architecture is the encoder part of the Transformer, i.e. the left-hand side of the standard Transformer architecture diagram.

The most important part of the code below is bert_module; work through it alongside that encoder structure.

bert_module: the # Multi headed self-attention block implements the multi-head attention sub-layer. Its inputs are the query, key and value matrices, and i is the index of the encoder layer (the N× repetition in the encoder stack); initially q = k = v. (query + attention_output) implements the Add residual connection and LayerNormalization the Norm step. ffn, made of two Dense layers, serves as the feed-forward network, again with (attention_output + ffn_output) as the residual connection followed by LayerNormalization.

get_pos_encoding_matrix: encodes token positions with sin and cos functions
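Concretely, this is the sinusoidal encoding from "Attention Is All You Need", with d_model = EMBED_DIM (in this implementation the row for position 0 is left as all zeros):

PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))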

MaskedLanguageModel: this class implements the masked language model's custom training step — computing the loss (weighted by sample_weight so only masked positions count), computing gradients, and applying the optimizer update

create_masked_language_bert_model: assembles the complete masked language model

MaskedTextGenerator: a callback that shows the model's predictions for a sample masked sentence at the end of each epoch

def bert_module(query, key, value, i):
    # Multi headed self-attention
    attention_output = layers.MultiHeadAttention(
        num_heads=config.NUM_HEAD,
        key_dim=config.EMBED_DIM // config.NUM_HEAD,
        name="encoder_{}/multiheadattention".format(i),
    )(query, key, value)
    attention_output = layers.Dropout(0.1, name="encoder_{}/att_dropout".format(i))(
        attention_output
    )
    attention_output = layers.LayerNormalization(
        epsilon=1e-6, name="encoder_{}/att_layernormalization".format(i)
    )(query + attention_output)

    # Feed-forward layer
    ffn = keras.Sequential(
        [
            layers.Dense(config.FF_DIM, activation="relu"),
            layers.Dense(config.EMBED_DIM),
        ],
        name="encoder_{}/ffn".format(i),
    )
    ffn_output = ffn(attention_output)
    ffn_output = layers.Dropout(0.1, name="encoder_{}/ffn_dropout".format(i))(
        ffn_output
    )
    sequence_output = layers.LayerNormalization(
        epsilon=1e-6, name="encoder_{}/ffn_layernormalization".format(i)
    )(attention_output + ffn_output)
    return sequence_output


def get_pos_encoding_matrix(max_len, d_emb):
    pos_enc = np.array(
        [
            [pos / np.power(10000, 2 * (j // 2) / d_emb) for j in range(d_emb)]
            if pos != 0
            else np.zeros(d_emb)
            for pos in range(max_len)
        ]
    )
    pos_enc[1:, 0::2] = np.sin(pos_enc[1:, 0::2])  # dim 2i
    pos_enc[1:, 1::2] = np.cos(pos_enc[1:, 1::2])  # dim 2i+1
    return pos_enc


loss_fn = keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE
)
loss_tracker = tf.keras.metrics.Mean(name="loss")


class MaskedLanguageModel(tf.keras.Model):
    def train_step(self, inputs):
        if len(inputs) == 3:
            features, labels, sample_weight = inputs
        else:
            features, labels = inputs
            sample_weight = None

        with tf.GradientTape() as tape:
            predictions = self(features, training=True)
            loss = loss_fn(labels, predictions, sample_weight=sample_weight)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss, sample_weight=sample_weight)

        # Return a dict mapping metric names to current value
        return {"loss": loss_tracker.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker]


def create_masked_language_bert_model():
    inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64)

    word_embeddings = layers.Embedding(
        config.VOCAB_SIZE, config.EMBED_DIM, name="word_embedding"
    )(inputs)
    position_embeddings = layers.Embedding(
        input_dim=config.MAX_LEN,
        output_dim=config.EMBED_DIM,
        weights=[get_pos_encoding_matrix(config.MAX_LEN, config.EMBED_DIM)],
        name="position_embedding",
    )(tf.range(start=0, limit=config.MAX_LEN, delta=1))
    embeddings = word_embeddings + position_embeddings

    encoder_output = embeddings
    for i in range(config.NUM_LAYERS):
        encoder_output = bert_module(encoder_output, encoder_output, encoder_output, i)

    mlm_output = layers.Dense(config.VOCAB_SIZE, name="mlm_cls", activation="softmax")(
        encoder_output
    )
    mlm_model = MaskedLanguageModel(inputs, mlm_output, name="masked_bert_model")

    optimizer = keras.optimizers.Adam(learning_rate=config.LR)
    mlm_model.compile(optimizer=optimizer)
    return mlm_model


id2token = dict(enumerate(vectorize_layer.get_vocabulary()))
token2id = {y: x for x, y in id2token.items()}


class MaskedTextGenerator(keras.callbacks.Callback):
    def __init__(self, sample_tokens, top_k=5):
        self.sample_tokens = sample_tokens
        self.k = top_k

    def decode(self, tokens):
        return " ".join([id2token[t] for t in tokens if t != 0])

    def convert_ids_to_tokens(self, id):
        return id2token[id]

    def on_epoch_end(self, epoch, logs=None):
        prediction = self.model.predict(self.sample_tokens)

        masked_index = np.where(self.sample_tokens == mask_token_id)
        masked_index = masked_index[1]
        mask_prediction = prediction[0][masked_index]

        top_indices = mask_prediction[0].argsort()[-self.k :][::-1]
        values = mask_prediction[0][top_indices]

        for i in range(len(top_indices)):
            p = top_indices[i]
            v = values[i]
            tokens = np.copy(sample_tokens[0])
            tokens[masked_index[0]] = p
            result = {
                "input_text": self.decode(sample_tokens[0].numpy()),
                "prediction": self.decode(tokens),
                "probability": v,
                "predicted mask token": self.convert_ids_to_tokens(p),
            }
            pprint(result)


sample_tokens = vectorize_layer(["I have watched this [mask] and it was awesome"])
generator_callback = MaskedTextGenerator(sample_tokens.numpy())

bert_masked_model = create_masked_language_bert_model()
bert_masked_model.summary()

6 Train and Save

Train and save the pretraining model

bert_masked_model.fit(mlm_ds, epochs=5, callbacks=[generator_callback])
bert_masked_model.save("bert_mlm_imdb.h5")

Training takes about 5 hours on CPU, or about 2 hours on GPU.
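After training, the saved model can also be used directly for fill-in-the-blank prediction, mirroring what MaskedTextGenerator prints during training. A minimal illustrative sketch (the sentence is made up):

# Illustrative sketch (not in the official example): top-5 predictions for the
# masked position of a new sentence, using the trained MLM directly.
tokens = vectorize_layer(["this [mask] was full of great acting"]).numpy()
probs = bert_masked_model.predict(tokens)[0]            # (MAX_LEN, VOCAB_SIZE)
mask_pos = np.where(tokens[0] == mask_token_id)[0][0]   # index of the [mask] token
top5 = probs[mask_pos].argsort()[-5:][::-1]
print([id2token[i] for i in top5])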

7 Fine-tune a sentiment classification model

Next, we fine-tune our self-supervised model on the downstream task of sentiment classification. To do this, we create a classifier by adding pooling and Dense layers on top of the pretrained BERT features.

# Load pretrained bert model
mlm_model = keras.models.load_model(
    "bert_mlm_imdb.h5", custom_objects={"MaskedLanguageModel": MaskedLanguageModel}
)
pretrained_bert_model = tf.keras.Model(
    mlm_model.input, mlm_model.get_layer("encoder_0/ffn_layernormalization").output
)

# Freeze it
pretrained_bert_model.trainable = False


def create_classifier_bert_model():
    inputs = layers.Input((config.MAX_LEN,), dtype=tf.int64)
    sequence_output = pretrained_bert_model(inputs)
    pooled_output = layers.GlobalMaxPooling1D()(sequence_output)
    hidden_layer = layers.Dense(64, activation="relu")(pooled_output)
    outputs = layers.Dense(1, activation="sigmoid")(hidden_layer)
    classifer_model = keras.Model(inputs, outputs, name="classification")
    optimizer = keras.optimizers.Adam()
    classifer_model.compile(
        optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
    )
    return classifer_model


classifer_model = create_classifier_bert_model()
classifer_model.summary()

# Train the classifier with frozen BERT stage
classifer_model.fit(
    train_classifier_ds,
    epochs=5,
    validation_data=test_classifier_ds,
)

# Unfreeze the BERT model for fine-tuning
pretrained_bert_model.trainable = True
optimizer = keras.optimizers.Adam()
classifer_model.compile(
    optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
)
classifer_model.fit(
    train_classifier_ds,
    epochs=5,
    validation_data=test_classifier_ds,
)
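One common variant worth noting (not part of the official example): when unfreezing the pretrained weights, recompile with a smaller learning rate so fine-tuning does not wipe out what pretraining learned. A hedged sketch of that unfreeze step:

# Common variant (not in the official example): fine-tune the unfrozen BERT
# with a smaller learning rate to better preserve the pretrained weights.
pretrained_bert_model.trainable = True
classifer_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)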

Create an end-to-end model and evaluate it

When you want to deploy a model, it is best if the model already includes its preprocessing, so you don't have to reimplement that logic in production. Let's create an end-to-end model that embeds the TextVectorization layer and evaluate it. This model takes raw strings as input.

def get_end_to_end(model):
    inputs_string = keras.Input(shape=(1,), dtype="string")
    indices = vectorize_layer(inputs_string)
    outputs = model(indices)
    end_to_end_model = keras.Model(inputs_string, outputs, name="end_to_end_model")
    optimizer = keras.optimizers.Adam(learning_rate=config.LR)
    end_to_end_model.compile(
        optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"]
    )
    return end_to_end_model


end_to_end_classification_model = get_end_to_end(classifer_model)
end_to_end_classification_model.evaluate(test_raw_classifier_ds)
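Because the end-to-end model accepts raw strings, it can be called directly on new reviews. A small illustrative usage (the example sentences are made up; recall that in this dataset positive reviews were labeled 0 and negative ones 1, so outputs near 0 mean positive):

# Hypothetical usage (not in the original example): score raw review strings.
samples = tf.constant([["an absolutely wonderful film"], ["dull and far too long"]])
print(end_to_end_classification_model.predict(samples))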
