
Generating Adversarial Examples with FGSM on MNIST: PyTorch Implementation and Experimental Analysis


Contents

1 Experiment Goals
2 Experiment Procedure
2.1 Build and train LeNet, measure test accuracy
2.2 Generate adversarial examples with FGSM
2.3 Effect of different epsilon values on classification accuracy
3 Results
4 Full Code

1 Experiment Goals

Implement the FGSM attack in PyTorch
Visualize the original samples, the adversarial examples, and the adversarial perturbations
Investigate how different epsilon values affect accuracy

2 Experiment Procedure

Build a LeNet network, train an MNIST classifier, and measure its test accuracy. Then generate adversarial examples for different epsilon values, feed them into the trained model, and measure the accuracy again to obtain the results.

2.1 Build and train LeNet, measure test accuracy

Import the required PyTorch libraries:

import os.path
import torch
import torchvision
import torchvision.transforms as transforms
from torch import nn
from torch.utils.data import DataLoader
import numpy as np
import matplotlib.pyplot as plt

Load the MNIST dataset from torchvision:

train_data = torchvision.datasets.MNIST(root='data', train=True, download=True, transform=transforms.ToTensor())
test_data = torchvision.datasets.MNIST(root='data', train=False, download=True, transform=transforms.ToTensor())
batch_size = 64
train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size)
test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size)

Display MNIST images with matplotlib:

plt.figure(figsize=(8, 8))
iter_dataloader = iter(test_dataloader)
n = 1  # visualize n * batch_size images
for i in range(n):
    images, labels = next(iter_dataloader)
    image_grid = torchvision.utils.make_grid(images)
    plt.subplot(1, n, i+1)
    plt.imshow(np.transpose(image_grid.numpy(), (1, 2, 0)))

Move training to the GPU if one is available:

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
print(device)

Define the LeNet network:

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 6, 3, stride=1, padding=1),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(6, 16, 5, stride=1, padding=1),
            nn.MaxPool2d(2, 2)
        )
        self.fc = nn.Sequential(
            nn.Linear(576, 120),
            nn.Linear(120, 84),
            nn.Linear(84, 10)
        )

    def forward(self, x):
        out = self.conv(x)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out
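
As a quick sanity check (a minimal sketch, not part of the original post), forwarding a dummy MNIST-sized tensor confirms the 576-dimensional flattened feature size expected by the first fully connected layer:

# Hypothetical shape check: the conv stack maps 1x28x28 -> 16x6x6 = 576 features
_net = LeNet()
_dummy = torch.randn(1, 1, 28, 28)  # one fake MNIST-sized image
print(_net.conv(_dummy).shape)      # expected: torch.Size([1, 16, 6, 6])
print(_net(_dummy).shape)           # expected: torch.Size([1, 10])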

Define the training function:

def train(network):
    losses = []
    iteration = 0
    epochs = 10
    for epoch in range(epochs):
        loss_sum = 0
        for i, (X, y) in enumerate(train_dataloader):
            X, y = X.to(device), y.to(device)
            pred = network(X)
            loss = loss_fn(pred, y)
            loss_sum += loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Average per-batch loss (loss_fn already averages within each batch)
        mean_loss = loss_sum / len(train_dataloader)
        losses.append(mean_loss)
        iteration += 1
        print(f"Epoch {epoch+1} loss: {mean_loss:>7f}")
    # Save the model from the last epoch once training finishes
    torch.save(network.state_dict(), "model.pth")
    # Plot the loss curve
    plt.xlabel("Epochs")
    plt.ylabel("Loss Value")
    plt.plot(list(range(iteration)), losses)

network = LeNet()
network.to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(params=network.parameters(), lr=0.001, momentum=0.9)
if os.path.exists('model.pth'):
    network.load_state_dict(torch.load('model.pth'))
else:
    train(network)

This produces the per-epoch loss values and the loss curve.

Test the model to obtain its accuracy:

positive = 0
negative = 0
for X, y in test_dataloader:
    with torch.no_grad():
        X, y = X.to(device), y.to(device)
        pred = network(X)
        for item in zip(pred, y):
            if torch.argmax(item[0]) == item[1]:
                positive += 1
            else:
                negative += 1
acc = positive / (positive + negative)
print(f"{acc * 100}%")
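
An equivalent, more concise way to compute the same accuracy (a sketch, not in the original post) compares the batch-wise argmax with the labels directly:

correct = 0
with torch.no_grad():
    for X, y in test_dataloader:
        X, y = X.to(device), y.to(device)
        # count predictions whose argmax matches the label
        correct += (network(X).argmax(dim=1) == y).sum().item()
print(f"{correct / len(test_data) * 100}%")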

2.2 Generate adversarial examples with FGSM
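
FGSM (Goodfellow et al., 2015) builds an adversarial example by taking a single step of size ε in the direction of the sign of the gradient of the loss with respect to the input:

x_adv = x + ε · sign(∇_x J(θ, x, y))

where J is the loss, θ the model parameters, x the input, and y the true label. The code below computes ∇_x J with one backward pass and visualizes the perturbation and the resulting adversarial examples for ε = 0.1 (eps[2]):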

# Find adversarial examples and visualize them
eps = [0.01, 0.05, 0.1, 0.2, 0.5]
for X, y in test_dataloader:
    X, y = X.to(device), y.to(device)
    X.requires_grad = True
    pred = network(X)
    network.zero_grad()
    loss = loss_fn(pred, y)
    loss.backward()
    plt.figure(figsize=(15, 8))
    plt.subplot(121)
    image_grid = torchvision.utils.make_grid(torch.clamp(X.grad.sign(), 0, 1))
    plt.imshow(np.transpose(image_grid.cpu().numpy(), (1, 2, 0)))
    X_adv = X + eps[2] * X.grad.sign()
    X_adv = torch.clamp(X_adv, 0, 1).detach()  # detach so the tensor can be converted to numpy for plotting
    plt.subplot(122)
    image_grid = torchvision.utils.make_grid(X_adv)
    plt.imshow(np.transpose(image_grid.cpu().numpy(), (1, 2, 0)))
    break

The left image shows the adversarial perturbation; the right image shows the adversarial examples.
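
The perturbation-and-clamp step used above can also be factored into a small reusable helper, roughly as follows (a sketch; fgsm_attack is not defined in the original code):

def fgsm_attack(x, epsilon, x_grad):
    # Step in the direction that increases the loss, then keep pixels in the valid [0, 1] range
    x_adv = x + epsilon * x_grad.sign()
    return torch.clamp(x_adv, 0, 1)

After loss.backward(), the call would simply be fgsm_attack(X, epsilon, X.grad).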

2.3 Effect of different epsilon values on classification accuracy

# Replace the original samples with adversarial examples and measure accuracy again
# Investigate how different epsilon values affect LeNet's classification accuracy
acc_list = []
for epsilon in eps:
    positive = 0  # reset the counters for each epsilon value
    negative = 0
    for X, y in test_dataloader:
        X, y = X.to(device), y.to(device)
        X.requires_grad = True
        pred = network(X)
        network.zero_grad()
        loss = loss_fn(pred, y)
        loss.backward()
        X = X + epsilon * X.grad.sign()
        X_adv = torch.clamp(X, 0, 1)
        pred = network(X_adv)
        for item in zip(pred, y):
            if torch.argmax(item[0]) == item[1]:
                positive += 1
            else:
                negative += 1
    acc = positive / (positive + negative)
    print(f"epsilon={epsilon} acc: {acc * 100}%")
    acc_list.append(acc)
plt.xlabel("epsilon")
plt.ylabel("Accuracy")
plt.plot(eps, acc_list, marker='o')

3 Results

For the same classification model, as epsilon increases, the adversarial examples generated by FGSM reduce the classification accuracy further and further.

4 Full Code

github: /RyanKao2001/FGSM-MNIST
