
[Deep Learning] A Python Implementation of a GAN (Generative Adversarial Network)


Preface

This article is based on the GitHub code of the original post.

The GitHub code for this article.

GAN is a framework proposed by Ian Goodfellow et al. In a nutshell, the framework pits a generator against a discriminator: the generator produces data (counterfeit money) while the discriminator judges whether data is real or fake (detects the counterfeits). Through this minimax game, the generator eventually produces data the discriminator can no longer tell apart from the real thing; that is, both D(p_data) and D(p_gen_data) settle around 0.5. The framework's impressive generation results have attracted many researchers, and it has been applied to image generation, image captioning, and natural language generation, among other research directions.
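Concretely, the minimax game mentioned above is the value function from the original GAN paper (Goodfellow et al., 2014):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

For a fixed G, the optimal discriminator is $D^*(x) = p_{\mathrm{data}}(x) / (p_{\mathrm{data}}(x) + p_g(x))$, which equals 1/2 once the generator's distribution $p_g$ matches $p_{\mathrm{data}}$; that is exactly the 0.5 behavior described above.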

Today we walk through a Python implementation of GAN. There are many versions online; this article aims for the most concise implementation possible, using the generator to transform uniformly distributed data into normally distributed data. From the discriminator's final outputs (about 0.5 on both real and fake data) we can see that the generator works well.

Implementation

Data Loaders

This data loader supplies the noise (uniformly distributed data) that seeds generation; the generator turns this noise into fake samples that mimic the real data distribution. Noise batches are drawn via next_batch.

"""生成基于size的,统一分布的数据(size,1)"""class GenDataLoader():def __init__(self, size = 200, low = -1, high = 1):self.size = sizeself.low = lowself.high = highdef next_batch(self):z = np.random.uniform(self.low, self.high, [self.size, 1])# z = np.linspace(-5.0, 5.0, self.size) + np.random.random(self.size) * 0.01 # sample noise prior# z = z.reshape([self.size, 1])return z

The loader below supplies the real data (normally distributed); it works the same way.

"""生成基于mu,sigma,size的正态分布数据(size,1)"""class RealDataLoader():def __init__(self, size = 200, mu = -1, sigma = 1):self.size = sizeself.mu = muself.sigma = sigmadef next_batch(self):data = np.random.normal(self.mu, self.sigma, [self.size ,1]) #(batch_size, size)data.sort()return data

Training Setup

MomentumOptimizer

def momentum_optimizer(loss, var_list):
    # `epoch` is the global training-iteration count defined in the main block below.
    batch = tf.Variable(0)
    learning_rate = tf.train.exponential_decay(
        0.01,        # Base learning rate.
        batch,       # Current index into the dataset.
        epoch // 4,  # Decay step - this decays 4 times throughout the training process.
        0.95,        # Decay rate.
        staircase=True)
    # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    #     loss, global_step=batch, var_list=var_list)
    optimizer = tf.train.MomentumOptimizer(learning_rate, 0.6).minimize(
        loss, global_step=batch, var_list=var_list)
    return optimizer
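Both networks below call weight_variable and bias_variable, which the original post never shows. Here is a minimal sketch of what they might look like; since the discriminator relies on reuse=tf.AUTO_REUSE to share weights between its two instances, they presumably use tf.get_variable (which respects scope reuse) rather than tf.Variable (which does not). The initializer choices are my assumption:

def weight_variable(shape, name):
    # tf.get_variable respects variable_scope reuse, so the two Discriminator
    # instances below end up sharing one set of weights.
    return tf.get_variable(name, shape=shape,
                           initializer=tf.truncated_normal_initializer(stddev=0.1))

def bias_variable(shape, name):
    return tf.get_variable(name, shape=shape,
                           initializer=tf.constant_initializer(0.1))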

Below is the generator, a multilayer perceptron that takes uniformly distributed noise and produces normally distributed data. Note that the input here is individual sample points (each row is one scalar), not a vector of features.

class Generator():
    def __init__(self, inputs, input_size=1, hidden_size=6, output_size=1):
        with tf.variable_scope("generator"):
            weight1 = weight_variable(shape=[input_size, hidden_size], name="weight1")  # (input_size, hidden_size)
            bias1 = bias_variable(shape=[1, hidden_size], name="bias1")                 # (1, hidden_size)
            weight2 = weight_variable(shape=[hidden_size, hidden_size], name="weight2")
            bias2 = bias_variable(shape=[1, hidden_size], name="bias2")
            weight3 = weight_variable(shape=[hidden_size, output_size], name="weight3")
            bias3 = bias_variable(shape=[1, output_size], name="bias3")
            frac1 = tf.nn.tanh(tf.matmul(inputs, weight1) + bias1, name="frac1")  # (batch_size, hidden_size)
            frac2 = tf.nn.tanh(tf.matmul(frac1, weight2) + bias2, name="frac2")
            frac3 = tf.nn.tanh(tf.matmul(frac2, weight3) + bias3, name="frac3")
            self.frac = frac3
            self.var_list = [weight1, bias1, weight2, bias2, weight3, bias3]
            # self.frac, self.var_list = mlp(inputs, 1)
            # Scale the tanh output from (-1, 1) to (-5, 5) so it can cover the real data's range.
            self.frac = tf.multiply(self.frac, 5)

    def get_param(self):
        return self.frac, self.var_list

Below is the discriminator, also a multilayer perceptron, whose job is to tell real data from fake. It takes a sample point as input and returns the probability (from 0 to 1) that the sample is real.

class Discriminator():
    def __init__(self, inputs, input_size=1, hidden_size=6):
        with tf.variable_scope("discriminator", reuse=tf.AUTO_REUSE):
            weight1 = weight_variable(shape=[input_size, hidden_size], name="weight1")  # (input_size, hidden_size)
            bias1 = bias_variable(shape=[1, hidden_size], name="bias1")                 # (1, hidden_size)
            weight2 = weight_variable(shape=[hidden_size, hidden_size], name="weight2")
            bias2 = bias_variable(shape=[1, hidden_size], name="bias2")
            weight3 = weight_variable(shape=[hidden_size, 1], name="weight3")
            bias3 = bias_variable(shape=[1, 1], name="bias3")
            frac1 = tf.nn.tanh(tf.matmul(inputs, weight1) + bias1, name="frac1")  # (batch_size, hidden_size)
            frac2 = tf.nn.tanh(tf.matmul(frac1, weight2) + bias2, name="frac2")   # range (-1, 1)
            frac3 = tf.nn.sigmoid(tf.matmul(frac2, weight3) + bias3, name="frac3")  # range (0, 1)
            self.frac = frac3
            self.var_list = [weight1, bias1, weight2, bias2, weight3, bias3]
            # self.frac, self.var_list = mlp(inputs, 1, is_sigmoid=True)

    def get_param(self):
        return self.frac, self.var_list

Next we instantiate the generator and discriminator, construct the loss functions, and optimize them with the momentum optimizer.

if __name__ == '__main__':
    size = 200
    epoch = 10000  # number of training iterations
    shape = (size, 1)
    x_node = tf.placeholder(tf.float32, shape=shape)  # real, normally distributed samples
    z_node = tf.placeholder(tf.float32, shape=shape)  # uniform noise samples

    generator = Generator(z_node)
    G, theta_g = generator.get_param()
    discriminator2 = Discriminator(G)       # D(G(z))
    discriminator1 = Discriminator(x_node)  # D(x); shares weights with the above via AUTO_REUSE
    D1, theta_d = discriminator1.get_param()
    D2, theta_d = discriminator2.get_param()

    loss_d = tf.reduce_mean(tf.log(D1) + tf.log(1 - D2))
    loss_g = tf.reduce_mean(tf.log(D2))

    # Set up the optimizers for D and G.
    train_op_d = momentum_optimizer(1 - loss_d, theta_d)
    # train_op_d = tf.train.AdamOptimizer(0.001).minimize(loss=1 - loss_d)
    train_op_g = momentum_optimizer(1 - loss_g, theta_g)  # maximize log(D(G(z)))
    # train_op_g = tf.train.AdamOptimizer(0.001).minimize(loss=1 - loss_g)
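A note on the signs: minimizing 1 - loss_d is equivalent to maximizing loss_d (the constant 1 does not affect the gradients), so train_op_d performs gradient ascent on E[log D(x)] + E[log(1 - D(G(z)))], the discriminator's side of the minimax objective above. Likewise, minimizing 1 - loss_g maximizes E[log D(G(z))], the non-saturating generator loss recommended in the original GAN paper.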

Training

Finally, the training loop. To keep things simple there is no pre-training here; we go straight into adversarial training. Many GAN implementations pre-train the generator and discriminator, which leads to faster convergence.

    # Training loop (continues the __main__ block above).
    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    gen_data_load = GenDataLoader(size)
    real_data_load = RealDataLoader(size)

    for i in range(epoch):
        # Update the discriminator twice per generator update.
        for j in range(2):
            real_data = real_data_load.next_batch()
            gen_data = gen_data_load.next_batch()
            sess.run([train_op_d, loss_d], {x_node: real_data, z_node: gen_data})
        D1_, D2_ = sess.run([D1, D2], {x_node: real_data, z_node: gen_data})
        for (d1, d2) in zip(D1_, D2_):
            print("%d" % i)
            print("D1:", d1[0], ",D2:", d2[0])
        gen_data = gen_data_load.next_batch()
        sess.run([train_op_g, loss_g], {z_node: gen_data})  # update the generator

After training, the discriminator's outputs show that it assigns probabilities of roughly 0.5 to both the generated and the real data:

355
D1: 0.53309953 ,D2: 0.44101745
355
D1: 0.51637995 ,D2: 0.5160859
355
D1: 0.45137414 ,D2: 0.44102487
355
D1: 0.54318506 ,D2: 0.5680956
355
D1: 0.4530351 ,D2: 0.44106925
355
D1: 0.49653527 ,D2: 0.5523978
355
D1: 0.46274388 ,D2: 0.57446367
355
D1: 0.4966537 ,D2: 0.5380512
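As an extra sanity check (my own addition, not part of the original post), one can compare the sample moments of the generated data against the target N(-1, 1):

    # Draw fresh noise, run it through the trained generator, and check moments.
    gen_samples = sess.run(G, {z_node: gen_data_load.next_batch()})
    print("mean: %.3f, std: %.3f" % (gen_samples.mean(), gen_samples.std()))
    # Should land near the target mu = -1, sigma = 1 if training succeeded.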

Closing Remarks

During training I ran into a few pitfalls that kept the model from reaching the expected results, for example:

1. Parameter initialization

2. Choice of optimizer

3. Size of the generator's hidden layers

If any of these is chosen poorly, the quality of the generated data suffers. A sketch of one mitigation follows.
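On the initialization point, for example, one common fix (my suggestion, not part of the original code) is to swap the truncated-normal initializer sketched earlier for Xavier/Glorot initialization:

def weight_variable(shape, name):
    # Xavier/Glorot initialization keeps activation variance roughly constant
    # across the tanh layers, which tends to make small GANs train more stably.
    return tf.get_variable(name, shape=shape,
                           initializer=tf.glorot_uniform_initializer())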
