
Learning notes: GAN application - SRGAN image super-resolution reconstruction, the U-Net structure, and dancing characters (字"姐"跳动)


Applications of GANs -- SRGAN image super-resolution reconstruction

Project address: /aistudio/projectdetail/843989

Paper venue: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Download link: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

The problem this project addresses: to enlarge a very small image, one normally uses a resize operation, but the larger the magnification factor, the blurrier the image becomes. The solution here: use a neural network to reconstruct the image at a higher resolution, producing a picture that is both enlarged and sharp.
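For reference, here is a minimal sketch of the plain-resize baseline that SRGAN tries to improve on (OpenCV bicubic interpolation; the file name low_res.jpg is just a placeholder for this example):

import cv2

# Naive 4x upscaling: interpolation alone cannot invent the missing
# high-frequency detail, so the result looks increasingly blurry.
lr = cv2.imread('low_res.jpg')  # placeholder input image
h, w = lr.shape[:2]
sr_naive = cv2.resize(lr, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)
cv2.imwrite('naive_x4.jpg', sr_naive)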

The project is based on the CVPR paper Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. The paper is not difficult, but its reconstruction idea is well worth learning from.

The authors note that before this paper, most reconstruction work focused on minimizing the mean squared reconstruction error; this paper was the first to apply a generative adversarial network to super-resolution reconstruction of 4x downsampled images. MSE-based estimates achieve high peak signal-to-noise ratios, but they usually lack high-frequency detail and are perceptually unsatisfying, because they fail to match the fidelity expected at the higher resolution.

To infer photo-realistic natural images at a 4x upscaling factor, the authors propose a perceptual loss function composed of an adversarial loss and a content loss. The network uses a trained VGG19 network to distinguish super-resolved images from the original photo-realistic images; in addition, the content loss is driven by perceptual similarity rather than similarity in pixel space. Their deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. MOS scores obtained with SRGAN are closer to those of the original high-resolution images than the results of any other state-of-the-art method.

The concrete implementation steps and code follow.

Unzip the dataset

In[]:

# unzip dataset
!unzip -q data/data50762/SRDATA.zip

Prepare the VGG19 pre-trained model

In[]:

!wget https://paddle-gan-models./vgg19_spade.tar.gz
!tar -zxvf vgg19_spade.tar.gz

Load the required packages

In[]:

import cv2
import os
import glob
import tqdm
import time
import shutil
import scipy
import random
import numpy as np
import paddle
import paddle.fluid as fluid

Parameter settings

In[]:

## Adam
batch_size = 32
lr = 0.001
beta1 = 0.9
use_gpu = True
## initialize G
n_epoch_init = 50
## adversarial learning (SRGAN)
n_epoch = 2000
## train set location
train_hr_img_path = '/home/aistudio/srdata/DIV2K_train_HR'
train_lr_img_path = '/home/aistudio/srdata/DIV2K_train_LR_bicubic/X4'
## test set location
valid_hr_img_path = '/home/aistudio/srdata/DIV2K_valid_HR'
valid_lr_img_path = '/home/aistudio/srdata/DIV2K_valid_LR_bicubic/X4'

Image preprocessing helper functions

Read image file names

In[]:

# load im path to list
def load_file_list(im_path, im_format):
    return glob.glob(os.path.join(im_path, im_format))

Read image contents

In[]:

# read im to list
def im_read(im_path):
    im_dataset = []
    for i in range(len(im_path)):
        path = im_path[i]
        # imread -- bgr
        im_data = cv2.imread(path)
        # change im channels ==> bgr to rgb
        # (assign back to im_data so the converted image is the one that gets stored)
        im_data = cv2.cvtColor(im_data, cv2.COLOR_BGR2RGB)
        # print(im_data.shape)
        im_dataset.append(im_data)
    return im_dataset

Random cropping

In[]:

def random_crop(im_set, image_size):
    crop_set = []
    for im in im_set:
        # print(im.shape)
        # Randomly generate x, y
        h, w, _ = im.shape
        y = random.randint(0, h - image_size)
        x = random.randint(0, w - image_size)
        # Random crop
        cropIm = im[y:y + image_size, x:x + image_size]
        crop_set.append(cropIm)
    return crop_set

Resize images

In[]:

# resize im // change im channels
def im_resize(imgs, im_w, im_h, pattern='rgb'):
    resize_dataset = []
    for im in imgs:
        im = cv2.resize(im, (im_w, im_h), interpolation=cv2.INTER_LINEAR)
        resize_dataset.append(im)
    resize_dataset = np.array(resize_dataset, dtype='float32')
    return resize_dataset

Data normalization

In[]:

# data standardization: scale pixel values to [-1, 1]
def standardized(imgs):
    imgs = np.array([a / 127.5 - 1 for a in imgs])
    return imgs

Process the dataset images -- read image file names

In[]:

# load im path to list
train_hr_img_list = sorted(load_file_list(im_path=train_hr_img_path, im_format='*.png'))
train_lr_img_list = sorted(load_file_list(im_path=train_lr_img_path, im_format='*.png'))
valid_hr_img_list = sorted(load_file_list(im_path=valid_hr_img_path, im_format='*.png'))
valid_lr_img_list = sorted(load_file_list(im_path=valid_lr_img_path, im_format='*.png'))

Process the dataset images -- read image contents

In[]:

# load im data
train_hr_imgs = im_read(train_hr_img_list)
train_lr_imgs = im_read(train_lr_img_list)
valid_hr_imgs = im_read(valid_hr_img_list)
valid_lr_imgs = im_read(valid_lr_img_list)

Define the models

Architecture of the generator network, showing the kernel size (k), number of feature maps (n), and stride (s) of each convolutional layer.

In the generator, the input is a low-resolution image. It first goes through a convolution and ReLU; residual blocks are then introduced to build a stronger architecture and extract better features; finally, further feature extraction and feature reconstruction produce the output.

In[]:

def SRGAN_g(t_image):
    # Input-Conv-Relu
    n = fluid.layers.conv2d(input=t_image, num_filters=64, filter_size=3, stride=1, padding='SAME', name='n64s1/c', data_format='NCHW')
    # print('conv0', n.shape)
    n = fluid.layers.batch_norm(n, momentum=0.99, epsilon=0.001)
    n = fluid.layers.relu(n, name=None)
    temp = n
    # B residual blocks
    # Conv-BN-Relu-Conv-BN-Elementwise_add
    for i in range(16):
        nn = fluid.layers.conv2d(input=n, num_filters=64, filter_size=3, stride=1, padding='SAME', name='n64s1/c1/%s' % i, data_format='NCHW')
        nn = fluid.layers.batch_norm(nn, momentum=0.99, epsilon=0.001, name='n64s1/b1/%s' % i)
        nn = fluid.layers.relu(nn, name=None)
        log = 'conv%2d' % (i + 1)
        # print(log, nn.shape)
        nn = fluid.layers.conv2d(input=nn, num_filters=64, filter_size=3, stride=1, padding='SAME', name='n64s1/c2/%s' % i, data_format='NCHW')
        nn = fluid.layers.batch_norm(nn, momentum=0.99, epsilon=0.001, name='n64s1/b2/%s' % i)
        nn = fluid.layers.elementwise_add(n, nn, act=None, name='b_residual_add/%s' % i)
        n = nn
    n = fluid.layers.conv2d(input=n, num_filters=64, filter_size=3, stride=1, padding='SAME', name='n64s1/c/m', data_format='NCHW')
    # dedicated name so this BN does not share parameters with the last residual block
    n = fluid.layers.batch_norm(n, momentum=0.99, epsilon=0.001, name='n64s1/b/m')
    n = fluid.layers.elementwise_add(n, temp, act=None, name='add3')
    # print('conv17', n.shape)
    # B residual blocks end
    # Conv-Pixel_shuffle-Conv-Pixel_shuffle-Conv
    n = fluid.layers.conv2d(input=n, num_filters=256, filter_size=3, stride=1, padding='SAME', name='n256s1/1', data_format='NCHW')
    n = fluid.layers.pixel_shuffle(n, upscale_factor=2)
    n = fluid.layers.relu(n, name=None)
    # print('conv18', n.shape)
    n = fluid.layers.conv2d(input=n, num_filters=256, filter_size=3, stride=1, padding='SAME', name='n256s1/2', data_format='NCHW')
    n = fluid.layers.pixel_shuffle(n, upscale_factor=2)
    n = fluid.layers.relu(n, name=None)
    # print('conv19', n.shape)
    n = fluid.layers.conv2d(input=n, num_filters=3, filter_size=1, stride=1, padding='SAME', name='out', data_format='NCHW')
    n = fluid.layers.tanh(n, name=None)
    # print('conv20', n.shape)
    return n

Architecture of the discriminator network, showing the kernel size (k), number of feature maps (n), and stride (s) of each convolutional layer.

The discriminator is built from standard Conv, BN, Leaky_ReLU, and FC layers. Its task is to judge whether an image produced by the generator is genuine data from the training set.

In[]:

def SRGAN_d(input_images):
    # Conv-Leaky_Relu
    net_h0 = fluid.layers.conv2d(input=input_images, num_filters=64, filter_size=4, stride=2, padding='SAME', name='h0/c', data_format='NCHW')
    net_h0 = fluid.layers.leaky_relu(net_h0, alpha=0.2, name=None)
    # h1 Conv-BN-Leaky_Relu
    net_h1 = fluid.layers.conv2d(input=net_h0, num_filters=128, filter_size=4, stride=2, padding='SAME', name='h1/c', data_format='NCHW')
    net_h1 = fluid.layers.batch_norm(net_h1, momentum=0.99, epsilon=0.001, name='h1/bn')
    net_h1 = fluid.layers.leaky_relu(net_h1, alpha=0.2, name=None)
    # h2 Conv-BN-Leaky_Relu
    net_h2 = fluid.layers.conv2d(input=net_h1, num_filters=256, filter_size=4, stride=2, padding='SAME', name='h2/c', data_format='NCHW')
    net_h2 = fluid.layers.batch_norm(net_h2, momentum=0.99, epsilon=0.001, name='h2/bn')
    net_h2 = fluid.layers.leaky_relu(net_h2, alpha=0.2, name=None)
    # h3 Conv-BN-Leaky_Relu
    net_h3 = fluid.layers.conv2d(input=net_h2, num_filters=512, filter_size=4, stride=2, padding='SAME', name='h3/c', data_format='NCHW')
    net_h3 = fluid.layers.batch_norm(net_h3, momentum=0.99, epsilon=0.001, name='h3/bn')
    net_h3 = fluid.layers.leaky_relu(net_h3, alpha=0.2, name=None)
    # h4 Conv-BN-Leaky_Relu
    net_h4 = fluid.layers.conv2d(input=net_h3, num_filters=1024, filter_size=4, stride=2, padding='SAME', name='h4/c', data_format='NCHW')
    net_h4 = fluid.layers.batch_norm(net_h4, momentum=0.99, epsilon=0.001, name='h4/bn')
    net_h4 = fluid.layers.leaky_relu(net_h4, alpha=0.2, name=None)
    # h5 Conv-BN-Leaky_Relu
    net_h5 = fluid.layers.conv2d(input=net_h4, num_filters=2048, filter_size=4, stride=2, padding='SAME', name='h5/c', data_format='NCHW')
    net_h5 = fluid.layers.batch_norm(net_h5, momentum=0.99, epsilon=0.001, name='h5/bn')
    net_h5 = fluid.layers.leaky_relu(net_h5, alpha=0.2, name=None)
    # h6 Conv-BN-Leaky_Relu
    net_h6 = fluid.layers.conv2d(input=net_h5, num_filters=1024, filter_size=4, stride=2, padding='SAME', name='h6/c', data_format='NCHW')
    net_h6 = fluid.layers.batch_norm(net_h6, momentum=0.99, epsilon=0.001, name='h6/bn')
    net_h6 = fluid.layers.leaky_relu(net_h6, alpha=0.2, name=None)
    # h7 Conv-BN-Leaky_Relu
    net_h7 = fluid.layers.conv2d(input=net_h6, num_filters=512, filter_size=4, stride=2, padding='SAME', name='h7/c', data_format='NCHW')
    net_h7 = fluid.layers.batch_norm(net_h7, momentum=0.99, epsilon=0.001, name='h7/bn')
    net_h7 = fluid.layers.leaky_relu(net_h7, alpha=0.2, name=None)
    # Residual branch (modified from the original paper's network);
    # the three convolutions are chained so the branch output can be added back to net_h7
    net = fluid.layers.conv2d(input=net_h7, num_filters=128, filter_size=1, stride=1, padding='SAME', name='res/c', data_format='NCHW')
    net = fluid.layers.batch_norm(net, momentum=0.99, epsilon=0.001, name='res/bn')
    net = fluid.layers.leaky_relu(net, alpha=0.2, name=None)
    net = fluid.layers.conv2d(input=net, num_filters=128, filter_size=3, stride=1, padding='SAME', name='res/c2', data_format='NCHW')
    net = fluid.layers.batch_norm(net, momentum=0.99, epsilon=0.001, name='res/bn2')
    net = fluid.layers.leaky_relu(net, alpha=0.2, name=None)
    net = fluid.layers.conv2d(input=net, num_filters=512, filter_size=3, stride=1, padding='SAME', name='res/c3', data_format='NCHW')
    net = fluid.layers.batch_norm(net, momentum=0.99, epsilon=0.001, name='res/bn3')
    net = fluid.layers.leaky_relu(net, alpha=0.2, name=None)
    net_h8 = fluid.layers.elementwise_add(net_h7, net, act=None, name='res/add')
    net_h8 = fluid.layers.leaky_relu(net_h8, alpha=0.2, name=None)
    # net_ho = fluid.layers.flatten(net_h8, axis=0, name='ho/flatten')
    net_ho = fluid.layers.fc(input=net_h8, size=1024, name='ho/fc')
    net_ho = fluid.layers.leaky_relu(net_ho, alpha=0.2, name=None)
    # the second fc takes the 1024-dim fc output, not net_h8
    net_ho = fluid.layers.fc(input=net_ho, size=1, name='ho/fc2')
    # return
    # logits = net_ho
    net_ho = fluid.layers.sigmoid(net_ho, name=None)
    return net_ho  # , logits

VGG19 network. Since the pre-trained model from the official PaddlePaddle documentation is used, for convenience this part adopts the implementation from the PaddlePaddle GitHub repository -- VGG19.

def conv_block(input, num_filter, groups, name=None):
    conv = input
    for i in range(groups):
        conv = fluid.layers.conv2d(
            input=conv,
            num_filters=num_filter,
            filter_size=3,
            stride=1,
            padding=1,
            act='relu',
            param_attr=fluid.param_attr.ParamAttr(name=name + str(i + 1) + "_weights"),
            bias_attr=False)
    return fluid.layers.pool2d(input=conv, pool_size=2, pool_type='max', pool_stride=2)


def vgg19(input, class_dim=1000):
    # VGG_MEAN = [123.68, 103.939, 116.779]
    # """ input layer """
    # net_in = (input + 1) * 127.5
    # red, green, blue = fluid.layers.split(net_in, num_or_sections=3, dim=1)
    # net_in = fluid.layers.concat(input=[red-VGG_MEAN[0], green-VGG_MEAN[1], blue-VGG_MEAN[2]], axis=0)
    layers = 19
    vgg_spec = {
        11: ([1, 1, 2, 2, 2]),
        13: ([2, 2, 2, 2, 2]),
        16: ([2, 2, 3, 3, 3]),
        19: ([2, 2, 4, 4, 4])
    }
    assert layers in vgg_spec.keys(), \
        "supported layers are {} but input layer is {}".format(vgg_spec.keys(), layers)
    nums = vgg_spec[layers]
    conv1 = conv_block(input, 64, nums[0], name="vgg19_conv1_")
    conv2 = conv_block(conv1, 128, nums[1], name="vgg19_conv2_")
    conv3 = conv_block(conv2, 256, nums[2], name="vgg19_conv3_")
    conv4 = conv_block(conv3, 512, nums[3], name="vgg19_conv4_")
    conv5 = conv_block(conv4, 512, nums[4], name="vgg19_conv5_")
    fc_dim = 4096
    fc_name = ["fc6", "fc7", "fc8"]
    fc1 = fluid.layers.fc(
        input=conv5,
        size=fc_dim,
        act='relu',
        param_attr=fluid.param_attr.ParamAttr(name=fc_name[0] + "_weights"),
        bias_attr=fluid.param_attr.ParamAttr(name=fc_name[0] + "_offset"))
    fc1 = fluid.layers.dropout(x=fc1, dropout_prob=0.5)
    fc2 = fluid.layers.fc(
        input=fc1,
        size=fc_dim,
        act='relu',
        param_attr=fluid.param_attr.ParamAttr(name=fc_name[1] + "_weights"),
        bias_attr=fluid.param_attr.ParamAttr(name=fc_name[1] + "_offset"))
    fc2 = fluid.layers.dropout(x=fc2, dropout_prob=0.5)
    out = fluid.layers.fc(
        input=fc2,
        size=class_dim,
        param_attr=fluid.param_attr.ParamAttr(name=fc_name[2] + "_weights"),
        bias_attr=fluid.param_attr.ParamAttr(name=fc_name[2] + "_offset"))
    return out, conv5

Training preparation

Loss definitions

1. MSE: measures the difference between the network's current output and the ground-truth image (a pixel-by-pixel comparison).

2. VGG19_loss: reconstructs fine detail. The generator output G and the original HR image are both passed through the VGG network, and the difference between the resulting feature maps is computed.

3. Adversarial_loss: adds the generative component of the GAN to the perceptual loss. This encourages the network to favor solutions on the natural-image manifold by trying to fool the discriminator network.

# DEFINE LOSS
# calc_g_loss
def calc_g_loss(net_g, t_target_image, logits_fake, vgg_predict_emb, vgg_target_emb):
    g_gan_loss = fluid.layers.reduce_mean(
        1e-3 * fluid.layers.sigmoid_cross_entropy_with_logits(
            x=logits_fake, label=fluid.layers.zeros_like(logits_fake)))
    # g_gan_loss = 1e-3 * fluid.layers.sigmoid_cross_entropy_with_logits(x=logits_fake, label=fluid.layers.zeros_like(logits_fake))
    mse_loss = fluid.layers.reduce_mean(fluid.layers.square_error_cost(net_g, t_target_image))
    vgg_loss = fluid.layers.reduce_mean(2e-6 * fluid.layers.square_error_cost(vgg_predict_emb, vgg_target_emb))
    g_loss = fluid.layers.reduce_mean(g_gan_loss + mse_loss + vgg_loss)
    return g_loss, mse_loss


# calc_d_loss
def calc_d_loss(logits_real, logits_fake):
    d_loss_real = fluid.layers.reduce_mean(
        fluid.layers.sigmoid_cross_entropy_with_logits(
            x=logits_real, label=fluid.layers.ones_like(logits_real)))
    d_loss_fake = fluid.layers.reduce_mean(
        fluid.layers.sigmoid_cross_entropy_with_logits(
            x=logits_fake, label=fluid.layers.zeros_like(logits_fake)))
    d_loss = fluid.layers.elementwise_add(d_loss_real, d_loss_fake) / 2
    return d_loss

Define vgg19_program

# LOAD VGG
vgg19_program = fluid.Program()
with fluid.program_guard(vgg19_program):
    vgg19_input = fluid.layers.data(name='vgg19_input', shape=[224, 224, 3], dtype='float32')
    vgg19_input_transpose = fluid.layers.transpose(vgg19_input, perm=[0, 3, 1, 2])
    # define vgg19
    _, vgg_target_emb = vgg19(vgg19_input_transpose)

Define SRGAN_g_program and SRGAN_d_program

# DEFINE MODEL ==> SRGAN_g SRGAN_d
SRGAN_g_program = fluid.Program()
with fluid.program_guard(SRGAN_g_program):
    # Low resolution image
    t_image = fluid.layers.data(name='t_image', shape=[96, 96, 3], dtype='float32')
    # print(t_image.shape)
    t_image_transpose = fluid.layers.transpose(t_image, perm=[0, 3, 1, 2])
    # print(t_image_transpose.shape)
    # High resolution image
    t_target_image = fluid.layers.data(name='t_target_image', shape=[384, 384, 3], dtype='float32')
    t_target_image_transpose = fluid.layers.transpose(t_target_image, perm=[0, 3, 1, 2])
    # define SRGAN_g
    net_g = SRGAN_g(t_image_transpose)
    # net_g_test = SRGAN_g(t_image_transpose)
    test_im = fluid.layers.transpose(net_g, perm=[0, 2, 3, 1])
    # vgg19_input
    vgg19_input = fluid.layers.data(name='vgg19_input', shape=[224, 224, 3], dtype='float32')
    vgg19_input_transpose = fluid.layers.transpose(vgg19_input, perm=[0, 3, 1, 2])
    # get vgg_target_emb vgg_predict_emb
    t_predict_image_224 = fluid.layers.image_resize(input=net_g, out_shape=[224, 224], resample="NEAREST")
    _, vgg_target_emb = vgg19(vgg19_input_transpose)
    _, vgg_predict_emb = vgg19(t_predict_image_224)
    # get logits_fake
    logits_fake = SRGAN_d(net_g)
    # g_loss mse_loss
    g_loss, mse_loss = calc_g_loss(net_g, t_target_image_transpose, logits_fake, vgg_predict_emb, vgg_target_emb)

SRGAN_d_program = fluid.Program()
with fluid.program_guard(SRGAN_d_program):
    # Low resolution image
    t_image = fluid.layers.data(name='t_image', shape=[96, 96, 3], dtype='float32')
    t_image_transpose = fluid.layers.transpose(t_image, perm=[0, 3, 1, 2])
    # High resolution image
    t_target_image = fluid.layers.data(name='t_target_image', shape=[384, 384, 3], dtype='float32')
    t_target_image_transpose = fluid.layers.transpose(t_target_image, perm=[0, 3, 1, 2])
    net_g = SRGAN_g(t_image_transpose)
    # define SRGAN_d
    logits_real = SRGAN_d(t_target_image_transpose)
    logits_fake = SRGAN_d(net_g)
    d_loss = calc_d_loss(logits_real, logits_fake)

Get g_vars and d_vars, define the optimizers, and set which parameters each one updates (tip: there are a lot of parameters, so it is recommended to clear the output after running this step, otherwise the notebook may lag slightly).

# DEFINE TRAIN OPS
# vars
g_vars = [G.name for G in SRGAN_g_program.global_block().all_parameters()]
d_vars = [D.name for D in SRGAN_d_program.global_block().all_parameters()]
## Pretrain
g_optim_init = fluid.optimizer.Adam(learning_rate=lr, beta1=beta1)
g_optim_init.minimize(loss=mse_loss, parameter_list=g_vars)
## SRGAN
g_optim = fluid.optimizer.Adam(learning_rate=lr, beta1=beta1)
g_optim.minimize(loss=g_loss, parameter_list=g_vars)
d_optim = fluid.optimizer.Adam(learning_rate=lr, beta1=beta1)
d_optim.minimize(loss=d_loss, parameter_list=d_vars)

Use the GPU and initialize parameters

place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
exe = fluid.Executor(place)
# initialize parameters
exe.run(fluid.default_startup_program())

Load the pre-trained VGG19 model

def load_vars(exe, program, pretrained_model):
    vars = []
    for var in program.list_vars():
        if fluid.io.is_parameter(var) and var.name.startswith("vgg"):
            vars.append(var)
            print(var.name)
    fluid.io.load_vars(exe, pretrained_model, program, vars)

save_pretrain_model_path = './VGG19_pretrained'
load_vars(exe, vgg19_program, save_pretrain_model_path)

Start training

First pre-train the G network, then train the G and D networks together.

# init
t_image = fluid.layers.data(name='t_image', shape=[96, 96, 3], dtype='float32')
t_target_image = fluid.layers.data(name='t_target_image', shape=[384, 384, 3], dtype='float32')
vgg19_input = fluid.layers.data(name='vgg19_input', shape=[224, 224, 3], dtype='float32')
step_num = int(len(train_hr_imgs) / batch_size)

# initialize G
for epoch in range(0, n_epoch_init + 1):
    epoch_time = time.time()
    np.random.shuffle(train_hr_imgs)
    # real
    sample_imgs_384 = random_crop(train_hr_imgs, 384)
    sample_imgs_standardized_384 = standardized(sample_imgs_384)
    # input
    sample_imgs_96 = im_resize(sample_imgs_384, 96, 96)
    sample_imgs_standardized_96 = standardized(sample_imgs_96)
    # vgg19
    sample_imgs_224 = im_resize(sample_imgs_384, 224, 224)
    sample_imgs_standardized_224 = standardized(sample_imgs_224)
    # loss
    total_mse_loss, n_iter = 0, 0
    for i in tqdm.tqdm(range(step_num)):
        step_time = time.time()
        imgs_384 = sample_imgs_standardized_384[i * batch_size:(i + 1) * batch_size]
        imgs_384 = np.array(imgs_384, dtype='float32')
        imgs_96 = sample_imgs_standardized_96[i * batch_size:(i + 1) * batch_size]
        imgs_96 = np.array(imgs_96, dtype='float32')
        # vgg19 data
        imgs_224 = sample_imgs_standardized_224[i * batch_size:(i + 1) * batch_size]
        imgs_224 = np.array(imgs_224, dtype='float32')
        # print(imgs_384.shape)
        # print(imgs_96.shape)
        # print(imgs_224.shape)
        # update G
        mse_loss_n = exe.run(SRGAN_g_program,
                             feed={'t_image': imgs_96, 't_target_image': imgs_384, 'vgg19_input': imgs_224},
                             fetch_list=[mse_loss])[0]
        # print(mse_loss_n)
        # print("Epoch [%2d/%2d] %4d time: %4.4fs, mse: %.8f " % (epoch, n_epoch_init, n_iter, time.time() - step_time, mse_loss_n))
        total_mse_loss += mse_loss_n
        n_iter += 1
    log = "[*] Epoch_init: [%2d/%2d] time: %4.4fs, mse: %.8f" % (epoch, n_epoch_init, time.time() - epoch_time, total_mse_loss / n_iter)
    print(log)
    if (epoch != 0) and (epoch % 10 == 0):
        out = exe.run(SRGAN_g_program,
                      feed={'t_image': imgs_96, 't_target_image': imgs_384, 'vgg19_input': imgs_224},
                      fetch_list=[test_im])[0][0]
        # generate img
        im_G = np.array((out + 1) * 127.5, dtype=np.uint8)
        im_96 = np.array((imgs_96[0] + 1) * 127.5, dtype=np.uint8)
        im_384 = np.array((imgs_384[0] + 1) * 127.5, dtype=np.uint8)
        cv2.imwrite('./output/epoch_init_{}_G.jpg'.format(epoch), cv2.cvtColor(im_G, cv2.COLOR_RGB2BGR))
        cv2.imwrite('./output/epoch_init_{}_96.jpg'.format(epoch), cv2.cvtColor(im_96, cv2.COLOR_RGB2BGR))
        cv2.imwrite('./output/epoch_init_{}_384.jpg'.format(epoch), cv2.cvtColor(im_384, cv2.COLOR_RGB2BGR))
        # # save model
        # save_pretrain_model_path_init = 'models/init/'
        # # delete old model files
        # shutil.rmtree(save_pretrain_model_path_init, ignore_errors=True)
        # # mkdir
        # os.makedirs(save_pretrain_model_path_init)
        # fluid.io.save_persistables(executor=exe, dirname=save_pretrain_model_path_init, main_program=SRGAN_g_program)

# train GAN (SRGAN)
for epoch in range(0, n_epoch + 1):
    ## update learning rate
    epoch_time = time.time()
    # real
    sample_imgs_384 = random_crop(train_hr_imgs, 384)
    sample_imgs_standardized_384 = standardized(sample_imgs_384)
    # input
    sample_imgs_96 = im_resize(sample_imgs_384, 96, 96)
    sample_imgs_standardized_96 = standardized(sample_imgs_96)
    # vgg19
    sample_imgs_224 = im_resize(sample_imgs_384, 224, 224)
    sample_imgs_standardized_224 = standardized(sample_imgs_224)
    # loss
    total_d_loss, total_g_loss, n_iter = 0, 0, 0
    for i in tqdm.tqdm(range(step_num)):
        step_time = time.time()
        imgs_384 = sample_imgs_standardized_384[i * batch_size:(i + 1) * batch_size]
        imgs_384 = np.array(imgs_384, dtype='float32')
        imgs_96 = sample_imgs_standardized_96[i * batch_size:(i + 1) * batch_size]
        imgs_96 = np.array(imgs_96, dtype='float32')
        # vgg19 data
        imgs_224 = sample_imgs_standardized_224[i * batch_size:(i + 1) * batch_size]
        imgs_224 = np.array(imgs_224, dtype='float32')
        ## update D
        errD = exe.run(SRGAN_d_program,
                       feed={'t_image': imgs_96, 't_target_image': imgs_384},
                       fetch_list=[d_loss])[0]
        ## update G
        errG = exe.run(SRGAN_g_program,
                       feed={'t_image': imgs_96, 't_target_image': imgs_384, 'vgg19_input': imgs_224},
                       fetch_list=[g_loss])[0]
        # print("Epoch [%2d/%2d] %4d time: %4.4fs, d_loss: %.8f g_loss: %.8f (mse: %.6f vgg: %.6f adv: %.6f)" %
        #       (epoch, n_epoch, n_iter, time.time() - step_time, errD, errG, errM, errV, errA))
        total_d_loss += errD
        total_g_loss += errG
        n_iter += 1
    log = "[*] Epoch: [%2d/%2d] time: %4.4fs, d_loss: %.8f g_loss: %.8f" % (epoch, n_epoch, time.time() - epoch_time, total_d_loss / n_iter, total_g_loss / n_iter)
    print(log)
    if (epoch != 0) and (epoch % 10 == 0):
        out = exe.run(SRGAN_g_program,
                      feed={'t_image': imgs_96, 't_target_image': imgs_384, 'vgg19_input': imgs_224},
                      fetch_list=[test_im])[0][0]
        # generate img
        im_G = np.array((out + 1) * 127.5, dtype=np.uint8)
        im_96 = np.array((imgs_96[0] + 1) * 127.5, dtype=np.uint8)
        im_384 = np.array((imgs_384[0] + 1) * 127.5, dtype=np.uint8)
        cv2.imwrite('./output/epoch_{}_G.jpg'.format(epoch), cv2.cvtColor(im_G, cv2.COLOR_RGB2BGR))
        cv2.imwrite('./output/epoch_{}_96.jpg'.format(epoch), cv2.cvtColor(im_96, cv2.COLOR_RGB2BGR))
        cv2.imwrite('./output/epoch_{}_384.jpg'.format(epoch), cv2.cvtColor(im_384, cv2.COLOR_RGB2BGR))
        # save model
        # d_models: save the discriminator program
        save_pretrain_model_path_d = 'models/d_models/'
        # delete old model files
        shutil.rmtree(save_pretrain_model_path_d, ignore_errors=True)
        # mkdir
        os.makedirs(save_pretrain_model_path_d)
        fluid.io.save_persistables(executor=exe, dirname=save_pretrain_model_path_d, main_program=SRGAN_d_program)
        # g_models: save the generator program
        save_pretrain_model_path_g = 'models/g_models/'
        # delete old model files
        shutil.rmtree(save_pretrain_model_path_g, ignore_errors=True)
        # mkdir
        os.makedirs(save_pretrain_model_path_g)
        fluid.io.save_persistables(executor=exe, dirname=save_pretrain_model_path_g, main_program=SRGAN_g_program)

Show the results

In[2]:

import os
from PIL import Image
import matplotlib.pyplot as plt

img0 = Image.open('./output/epoch_1780_96.jpg')
img1 = Image.open('./output/epoch_1780_384.jpg')
img2 = Image.open('./output/epoch_1780_G.jpg')

plt.figure("Image Completion Result", dpi=384)  # dpi = 384 shows the images at their original size
plt.subplot(2, 3, 1)
plt.imshow(img0)
plt.title('Low resolution', fontsize='xx-small', fontweight='heavy')
plt.axis('off')
plt.subplot(2, 3, 2)
plt.imshow(img1)
plt.title('High resolution', fontsize='xx-small', fontweight='heavy')
plt.axis('off')
plt.subplot(2, 3, 3)
plt.imshow(img2)
plt.title('Generate', fontsize='xx-small', fontweight='heavy')
plt.axis('off')
plt.show()

U-Net Structure

This project covers:

An introduction to U-Net

The backbone models supported by the U-Net encoder

(A side note on the line above: U-Net should count as an end-to-end network, so why does the project author emphasize only the encoder here?)

Notes on ResNet; notes on the configuration; the role of each method in the U-Net code; training and testing U-Net with different encoders on the fundus medical segmentation dataset provided by PaddleSeg (267 training images, 76 validation images, 38 test images)

Why this project matters

While going through the literature this year, I noticed that quite a few core-journal and EI-level papers are improvements built on top of U-Net and then applied to other fields, for example adding residual structures and attention mechanisms, or residual structures combined with dilated convolutions and attention. If you are interested, search CNKI journals for keywords such as "unet" or "image segmentation"; many of the deep learning papers follow this pattern (I first noticed this kind of paper in Acta Optica Sinica).

Isn't any comparison not run on a public dataset a bit unfair? (only half joking)

So those of you still short of a small paper (experts, please look away!) could take PaddleSeg, combine it with the Attention U-Net from the previous installment or some other idea, and have the prototype of a new model.

U-Net

Published at: MICCAI 2015. Impact: according to Google Scholar it has accumulated about 16,950 citations so far. Application: it serves as the baseline for most medical image semantic segmentation tasks. Paper: /pdf/1505.04597.pdf

Why U-Net suits semantic segmentation of medical images (my personal understanding):

Medical image datasets are small, so low-level information is crucial. Medical images also have blurry boundaries and complex gradients, which makes low-level information even more important for precise segmentation. U-Net fuses high-level semantic information with shallow positional information very well.

Understanding the U-Net network structure

U-Net denotes a type of structure: a U-shaped network with skip connections.

encoder: 4 downsampling steps (16x total downsampling), saving the feature map before each downsampling for the skip connection. decoder: 4 upsampling steps; after each upsampling the result is concatenated with the encoder feature map of the same resolution, and the features are fused by two 3x3 convolutions. A minimal sketch of one decoder step is shown below.
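To make the decoder step concrete, here is a minimal sketch in paddle.fluid (illustrative only, not the PaddleSeg implementation; the function name and filter counts are made up for this example): upsample, concatenate with the saved encoder feature map, then fuse with two 3x3 convolutions.

import paddle.fluid as fluid

def unet_decoder_step(x, skip, num_filters):
    # Upsample by 2 so the spatial size matches the saved encoder feature map
    up = fluid.layers.resize_bilinear(x, scale=2.0)
    # Skip connection: concatenate along the channel axis (NCHW layout)
    merged = fluid.layers.concat([up, skip], axis=1)
    # Fuse the concatenated features with two 3x3 convolutions
    y = fluid.layers.conv2d(merged, num_filters=num_filters, filter_size=3, padding=1, act='relu')
    y = fluid.layers.conv2d(y, num_filters=num_filters, filter_size=3, padding=1, act='relu')
    return y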

UNet encoder

I tried replacing the U-Net encoder with other models that have stronger feature-extraction ability. The backbone networks supported for the encoder are listed below:

ResNet

I mainly used ResNet and its variants. The residual structure in ResNet effectively mitigates problems such as vanishing and exploding gradients. The ResNet network structure and its residual block are shown below:

The per-layer configuration of the ResNet family is shown below; from 50 layers onward the bottleneck residual block is used:
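For reference, a rough sketch of the bottleneck residual block (in paddle.fluid for consistency with the rest of this write-up; filter counts are illustrative, not taken from the PaddleSeg source): a 1x1 convolution reduces the channels, a 3x3 convolution processes them, and a second 1x1 convolution expands them again before the skip addition.

import paddle.fluid as fluid

def bottleneck_block(x, mid_filters=64, out_filters=256):
    # 1x1 reduce -> 3x3 -> 1x1 expand, then add the shortcut
    y = fluid.layers.conv2d(x, num_filters=mid_filters, filter_size=1, act='relu')
    y = fluid.layers.conv2d(y, num_filters=mid_filters, filter_size=3, padding=1, act='relu')
    y = fluid.layers.conv2d(y, num_filters=out_filters, filter_size=1)
    # Assumes x already has out_filters channels; otherwise a 1x1 projection shortcut is needed
    return fluid.layers.elementwise_add(x, y, act='relu')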

The ResNet_vb, vc, and vd structures are shown below:

Why make this change? One explanation I have seen: in ResNet50 the downsampling inside the residual block is done by a stride-2 1x1 convolution, which halves the feature map size. A stride-2 1x1 convolution means 3/4 of the input feature map never contributes to the output.
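A rough sketch of the two shortcut variants (again in paddle.fluid, with illustrative names and filter counts): the ResNet-D style remedy puts a stride-2 average pooling before a stride-1 1x1 convolution on the downsampling shortcut, so every input position contributes.

import paddle.fluid as fluid

def downsample_shortcut_vanilla(x, num_filters):
    # Original ResNet50 shortcut: stride-2 1x1 conv; 3/4 of input positions are skipped
    return fluid.layers.conv2d(x, num_filters=num_filters, filter_size=1, stride=2)

def downsample_shortcut_vd(x, num_filters):
    # ResNet-D style shortcut: average-pool first, then a stride-1 1x1 conv,
    # so all input positions contribute to the downsampled feature map
    pooled = fluid.layers.pool2d(x, pool_size=2, pool_stride=2, pool_type='avg')
    return fluid.layers.conv2d(pooled, num_filters=num_filters, filter_size=1, stride=1)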

Configuration file notes

I use the U-Net configuration file provided by PaddleSeg, modified so that it supports both the original U-Net and the other backbones. The configuration is shown below:

Pre-trained model configuration. The list below covers every backbone supported by this model's encoder; change the encoder by using the names and layer counts given here.

#resnet LAYERS in [18, 34, 50, 101, 152]
#resnet_vd LAYERS in [18, 34, 50, 101, 152, 200]
#se_resent LAYERS in [18, 34, 50, 101, 152, 200]
#resnet_acnet LAYERS in [18, 34, 50, 101, 152]
#hrnet LAYERS in [18, 30, 32, 40, 44, 48, 60, 64]
#resnet_acnet LAYERS in [11, 13, 16, 19]
# To use the original U-Net structure set BACKBONE to "" (an empty string, not a space)
MODEL:
    MODEL_NAME: "unet"
    UNET: #[resnet,resnet_vd,se_resent,resnet_acnet,hrnet,vgg]
        BACKBONE: "resnet"
        LAYERS: "50"

The role of each function in the U-Net code files

Try it out

The U-Net code with the different encoders has already been integrated into PaddleSeg (parts of the code were modified), and the Attention U-Net code from the previous installment runs as-is. The steps below are written briefly; for the detailed PaddleSeg commands and parameter settings, see my other project, which documents PaddleSeg usage in detail: tile defect detection with HRNet.

1. Fetch PaddleSeg, move the files this project needs into place, and change into the directory

In[]:

!git clone -b release/v0.6.0 /paddlepaddle/PaddleSeg.git

In[]:

!cp -r PaddleSeg_base/. PaddleSeg

In[]:

%cd PaddleSeg/

2. Download the dataset

Training and testing use the fundus medical segmentation dataset provided by PaddleSeg, which contains 267 training images, 76 validation images, and 38 test images.

In[]:

!python dataset/download_optic.py

3. Check the parameter settings

Testing uses the fundus medical segmentation dataset (267 training images, 76 validation images, and 38 test images).

The concrete parameter settings are as follows:

# Dataset path and number of classes
DATASET:
    DATA_DIR: "./dataset/optic_disc_seg/"
    NUM_CLASSES: 2
    TEST_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
    TRAIN_FILE_LIST: "./dataset/optic_disc_seg/train_list.txt"
    VAL_FILE_LIST: "./dataset/optic_disc_seg/val_list.txt"
    VIS_FILE_LIST: "./dataset/optic_disc_seg/test_list.txt"
# Pre-trained model configuration
#resnet LAYERS in [18, 34, 50, 101, 152]
#resnet_vd LAYERS in [18, 34, 50, 101, 152, 200]
#se_resent LAYERS in [18, 34, 50, 101, 152, 200]
#resnet_acnet LAYERS in [18, 34, 50, 101, 152]
#hrnet LAYERS in [18, 30, 32, 40, 44, 48, 60, 64]
#resnet_acnet LAYERS in [11, 13, 16, 19]
# To use the original U-Net structure set BACKBONE to "" (an empty string, not a space)
MODEL:
    MODEL_NAME: "unet"
    UNET: #[resnet,resnet_vd,se_resent,resnet_acnet,hrnet,vgg]
        BACKBONE: "resnet"
        LAYERS: "50"
# Other settings
TRAIN_CROP_SIZE: (512, 512)
EVAL_CROP_SIZE: (512, 512)
AUG:
    AUG_METHOD: "unpadding"
    FIX_RESIZE_SIZE: (512, 512)
BATCH_SIZE: 16
TRAIN:
    # Pre-trained model path
    PRETRAINED_MODEL_DIR: " "
    # Model save path
    MODEL_SAVE_DIR: "./saved_model/unet_optic/"
    # Periodic model evaluation
    SNAPSHOT_EPOCH: 10
TEST:
    # Path of the model to evaluate
    TEST_MODEL: "./saved_model/unet_optic/best_model"
SOLVER:
    # Total number of epochs
    NUM_EPOCHS: 400
    # Learning rate
    LR: 0.001
    # Learning rate decay policy
    LR_POLICY: "poly"
    # Optimizer
    OPTIMIZER: "adam"

In[]:

!python pdseg/check.py --cfg ./configs/unet_optic.yaml

4. Train the model

Train with the parameter file that has already been modified.

Note! Before training, check the GPU memory via the performance monitor at the bottom left of the page. I trained on a 32 GB GPU with batch_size=16; if your GPU has 16 GB, lower the batch_size accordingly.

In[9]:

!python pdseg/train.py --use_gpu --cfg ./configs/unet_optic.yaml --do_eval

5. Evaluate the model

I evaluate periodically during training and keep the model parameters with the best evaluation result.

In[]:

!python pdseg/eval.py --use_gpu --cfg ./configs/unet_optic.yaml

After 400 epochs, the metrics of the best ResNet+UNet model are as follows:

[EVAL] #image=76 acc=0.9970 IoU=0.9228
[EVAL] Category IoU: [0.9970 0.8486]
[EVAL] Category Acc: [0.9983 0.9287]
[EVAL] Kappa: 0.9166

6. Visualize the results

In[]:

!python pdseg/vis.py --use_gpu --cfg ./configs/unet_optic.yaml

7. Show the predictions

In[]:

import matplotlib.pyplot as plt
import os
import cv2 as cv

def display(img_name):
    image_dir = os.path.join("./dataset/optic_disc_seg/JPEGImages", img_name.split(".")[0] + ".jpg")
    label_dir = os.path.join("./dataset/optic_disc_seg/Annotations", img_name)
    mask_dir = os.path.join("./visual", img_name)
    img_dir = [image_dir, label_dir, mask_dir]
    plt.figure(figsize=(15, 15))
    title = ['Image', 'label', 'Predict']
    for i in range(len(title)):
        plt.subplot(1, len(title), i + 1)
        plt.title(title[i])
        img = cv.imread(img_dir[i], 3)
        img = img[:, :, ::-1]
        # b, g, r = cv.split(img)
        # img_rgb = cv.merge([r, g, b])
        plt.imshow(img)
        plt.axis('off')
    plt.show()

img_list = os.listdir("./visual")
for cname in img_list:
    display(cname)

Summary

Gave a brief introduction to U-Net and the backbones its encoder supports; explained the U-Net configuration file; trained and tested U-Net with different encoders on the fundus medical segmentation dataset provided by PaddleSeg.

Additional notes

The official U-Net code uses a 3*3 convolution for the final classification layer; I suspect a 1*1 convolution might work better. If you are interested, try changing it yourself, and remember to adjust the padding when you do!

A different take on the line above: a 3*3 convolution shrinks the feature map (unless padded), so 1*1 is probably the right choice.
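For concreteness, a tiny sketch in paddle.fluid (the class count is illustrative) of the two options for the final classification layer; with padding=1 the 3x3 version keeps the spatial size, which is why the padding has to be adjusted when switching kernel sizes.

import paddle.fluid as fluid

NUM_CLASSES = 2  # illustrative

def classify_3x3(feat):
    # 3x3 final classifier: needs padding=1 to preserve the feature map size
    return fluid.layers.conv2d(feat, num_filters=NUM_CLASSES, filter_size=3, padding=1)

def classify_1x1(feat):
    # 1x1 final classifier: a per-pixel linear map over channels, no padding needed
    return fluid.layers.conv2d(feat, num_filters=NUM_CLASSES, filter_size=1, padding=0)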

I am currently testing how the number of convolution kernels in the decoder affects the model. Going by the original U-Net structure, the number of kernels in the decoder should equal the number of channels of the downsampled feature map coming through the skip connection; however, in both my own model and the existing ones, using PaddleSeg's default 256, 128, 64, 64 channels did not hurt the results, so I need to experiment more carefully before drawing a conclusion. The current setting follows the channel counts of the skip-connection feature maps; please watch for later versions.

On this dataset, U-Net with different encoders may give similar results, probably because of the dataset itself. In my own paper, which uses SENet plus further improvements, the ablation comparing plain U-Net against U-Net with an SENet encoder showed a clearly visible gap in the metrics (on my own dataset, so take it with a grain of salt!).

字"姐"跳动,Character dancing!

赶紧放上项目地址(加粗!):/aistudio/projectdetail/1176398

这个项目是本人最喜欢哒,太有创意了!由于时间仓促,没有更换视频和填充的字母,后续会补上!

I. Project background

We live in a digital age where code, networks, and algorithms crisscross everything: they are everywhere, yet invisible. If they could all be seen and painted with color, what a wonderful sight would that be?

Artists have fused technology into art and put on stunning digital art exhibitions. So can we use the AI knowledge we have learned to build a simple project along the same lines?

II. Results

We use PaddleSeg to segment the person out of the image, use the extracted figure as the fill region of a word cloud, then composite the filled figure onto a background to produce the effect we want. Let's look at the overall result:

Converting a single image: on the left is the original; after person segmentation, the figure is filled with the word cloud; on the right is the result:


Once a single image works, we can process every frame of a video and let the characters dance with the person's motion. I found a video of a dancing lady on Bilibili to convert (don't just watch the lady!). Hit the play button below to watch the full video, or go to the original site for HD.

NOTE: To run this example locally, install PaddleSeg first. To run it online, fork this project first, then follow the steps in the example. PaddleSeg course link: /aistudio/course/introduce/1767

If you find this project interesting, please Fork / Like / Comment; your support is the biggest motivation for keeping it updated. I hope to bring you even more interesting projects later!

III. Implementation steps

1. Install dependencies

Official home of the PaddleSeg image segmentation library

In[2]:

# GPU setup
%set_env CUDA_VISIBLE_DEVICES=0
! pip install wordcloud

PaddleSeg has already been downloaded into this project. If you want to install PaddleSeg in your own environment, run:

# Download the code from the PaddleSeg GitHub repository
git clone /PaddlePaddle/PaddleSeg.git
# PaddleSeg programs must be run from the PaddleSeg directory
cd PaddleSeg/
# Install the required dependencies
pip install -r requirements.txt

In[3]:

# Unzip the archive downloaded from the PaddleSeg GitHub repository
!unzip -o PaddleSeg.zip
# The scripts must be run from the PaddleSeg directory
%cd PaddleSeg
# Install the required dependencies
!pip install -r requirements.txt

In[4]:

# Copy the configuration file humanseg.yaml into the configs directory
!cp /home/aistudio/work/humanseg.yaml /home/aistudio/PaddleSeg/configs/
# Download the pre-trained model and put it in ./pretrained_model
%cd /home/aistudio/PaddleSeg/pretrained_model/
!wget https://paddleseg./models/deeplabv3p_xception65_humanseg.tgz
!tar -xf deeplabv3p_xception65_humanseg.tgz
%cd /home/aistudio

/home/aistudio/PaddleSeg/pretrained_model---12-06 15:24:58-- https://paddleseg./models/deeplabv3p_xception65_humanseg.tgzResolving paddleseg. (paddleseg.)... 182.61.200.195, 182.61.200.229, 2409:8c00:6c21:10ad:0:ff:b00e:67dConnecting to paddleseg. (paddleseg.)|182.61.200.195|:443... connected.HTTP request sent, awaiting response... 200 OKLength: 307082137 (293M) [application/x-gzip]Saving to: ‘deeplabv3p_xception65_humanseg.tgz’deeplabv3p_xception 100%[===================>] 292.86M 56.1MB/s in 6.8s -12-06 15:25:05 (43.1 MB/s) - ‘deeplabv3p_xception65_humanseg.tgz’ saved [307082137/307082137]/home/aistudio

2. Directories and resources

All resources live under the work directory: work/imgs holds images found online; work/output_pose holds the person-segmentation results; work/videos holds the videos; work/texts holds the fill text and fonts; work/mp4_img holds the frames exported from the video; work/mp4_img_mask holds the mask results for those frames; work/mp4_img_analysis holds the analysis results for the video frames.

In[5]:

! mkdir work/videos
! mkdir work/texts
! mkdir work/mp4_img
! mkdir work/mp4_img_mask
! mkdir work/mp4_img_analysis
# Copy in the video material
!cp /home/aistudio/data/data57852/001.mp4 -d /home/aistudio/work/videos/001.mp4
# Unzip the text material
!unzip -q -o /home/aistudio/data/data57853/texts.zip -d /home/aistudio/work/

3. Check the person-extraction result on a single image

In[6]:

# import paddlehub as hub
import cv2
import numpy as np
import re
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib import colors
from wordcloud import WordCloud
%matplotlib inline

In[7]:

def img_show_bgr(image, size=10, convert=True):
    '''Display an image read by cv2'''
    if convert:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.figure(figsize=(size, size))
    plt.imshow(image)
    plt.axis("off")
    plt.show()

In[8]:

body_img = cv2.imread('work/imgs/body.jpg')
img_show_bgr(body_img)

In[9]:

# Model prediction
# Note: if you have no GPU, simply remove the `--use_gpu` flag below and rerun.
!python /home/aistudio/PaddleSeg/pdseg/vis.py \
    --cfg /home/aistudio/work/humanseg_test.yaml \
    --vis_dir /home/aistudio/work/output_pose \
    --use_gpu

{'AUG': {'AUG_METHOD': 'unpadding','FIX_RESIZE_SIZE': (513, 513),'FLIP': True,'FLIP_RATIO': 0.2,'INF_RESIZE_VALUE': 513,'MAX_RESIZE_VALUE': 400,'MAX_SCALE_FACTOR': 2.0,'MIN_RESIZE_VALUE': 513,'MIN_SCALE_FACTOR': 0.5,'MIRROR': True,'RICH_CROP': {'ASPECT_RATIO': 0,'BLUR': True,'BLUR_RATIO': 0.1,'BRIGHTNESS_JITTER_RATIO': 0.5,'CONTRAST_JITTER_RATIO': 0.5,'ENABLE': True,'MAX_ROTATION': 45,'MIN_AREA_RATIO': 0,'SATURATION_JITTER_RATIO': 0.5},'SCALE_STEP_SIZE': 0.25,'TO_RGB': False},'BATCH_SIZE': 24,'DATALOADER': {'BUF_SIZE': 256, 'NUM_WORKERS': 8},'DATASET': {'DATA_DIM': 3,'DATA_DIR': '/home/aistudio/work/imgs/','IGNORE_INDEX': 255,'IMAGE_TYPE': 'rgb','NUM_CLASSES': 2,'PADDING_VALUE': [104.00799749999999, 116.66899995, 122.67499965],'SEPARATOR': '|','TEST_FILE_LIST': '/home/aistudio/work/imgs/test_list.txt','TEST_TOTAL_IMAGES': 1,'TRAIN_FILE_LIST': '/home/aistudio/work/imgs/test_list.txt','TRAIN_TOTAL_IMAGES': 1,'VAL_FILE_LIST': '/home/aistudio/work/imgs/test_list.txt','VAL_TOTAL_IMAGES': 1,'VIS_FILE_LIST': '/home/aistudio/work/imgs/test_list.txt'},'EVAL_CROP_SIZE': (513, 513),'FREEZE': {'MODEL_FILENAME': 'model','PARAMS_FILENAME': 'params','SAVE_DIR': 'human_freeze_model'},'MEAN': [0.4078745, 0.45752549, 0.48107843],'MODEL': {'BN_MOMENTUM': 0.99,'DEEPLAB': {'ASPP_WITH_SEP_CONV': True,'BACKBONE': 'xception_65','BACKBONE_LR_MULT_LIST': None,'DECODER': {'CONV_FILTERS': 256,'OUTPUT_IS_LOGITS': False,'USE_SUM_MERGE': False},'DECODER_USE_SEP_CONV': True,'DEPTH_MULTIPLIER': 1.0,'ENABLE_DECODER': True,'ENCODER': {'ADD_IMAGE_LEVEL_FEATURE': True,'ASPP_CONVS_FILTERS': 256,'ASPP_RATIOS': None,'ASPP_WITH_CONCAT_PROJECTION': True,'ASPP_WITH_SE': False,'POOLING_CROP_SIZE': None,'POOLING_STRIDE': [1, 1],'SE_USE_QSIGMOID': False},'ENCODER_WITH_ASPP': True,'OUTPUT_STRIDE': 16},'DEFAULT_EPSILON': 1e-05,'DEFAULT_GROUP_NUMBER': 32,'DEFAULT_NORM_TYPE': 'bn','FP16': False,'HRNET': {'STAGE2': {'NUM_CHANNELS': [40, 80], 'NUM_MODULES': 1},'STAGE3': {'NUM_CHANNELS': [40, 80, 160],'NUM_MODULES': 4},'STAGE4': {'NUM_CHANNELS': [40, 80, 160, 320],'NUM_MODULES': 3}},'ICNET': {'DEPTH_MULTIPLIER': 0.5, 'LAYERS': 50},'MODEL_NAME': 'deeplabv3p','MULTI_LOSS_WEIGHT': [1.0],'OCR': {'OCR_KEY_CHANNELS': 256, 'OCR_MID_CHANNELS': 512},'PSPNET': {'DEPTH_MULTIPLIER': 1, 'LAYERS': 50},'SCALE_LOSS': 'DYNAMIC','UNET': {'UPSAMPLE_MODE': 'bilinear'}},'NUM_TRAINERS': 1,'SLIM': {'KNOWLEDGE_DISTILL': False,'KNOWLEDGE_DISTILL_IS_TEACHER': False,'KNOWLEDGE_DISTILL_TEACHER_MODEL_DIR': '','NAS_ADDRESS': '','NAS_IS_SERVER': True,'NAS_PORT': 23333,'NAS_SEARCH_STEPS': 100,'NAS_SPACE_NAME': '','NAS_START_EVAL_EPOCH': 0,'PREPROCESS': False,'PRUNE_PARAMS': '','PRUNE_RATIOS': []},'SOLVER': {'BEGIN_EPOCH': 1,'CROSS_ENTROPY_WEIGHT': None,'DECAY_EPOCH': [10, 20],'GAMMA': 0.1,'LOSS': ['softmax_loss'],'LOSS_WEIGHT': {'BCE_LOSS': 1,'DICE_LOSS': 1,'LOVASZ_HINGE_LOSS': 1,'LOVASZ_SOFTMAX_LOSS': 1,'SOFTMAX_LOSS': 1},'LR': 0.1,'LR_POLICY': 'poly','LR_WARMUP': False,'LR_WARMUP_STEPS': 2000,'MOMENTUM': 0.9,'MOMENTUM2': 0.999,'NUM_EPOCHS': 40,'OPTIMIZER': 'sgd','POWER': 0.9,'WEIGHT_DECAY': 4e-05},'STD': [0.00392156, 0.00392156, 0.00392156],'TEST': {'TEST_MODEL': '/home/aistudio/PaddleSeg/pretrained_model/deeplabv3p_xception65_humanseg'},'TRAIN': {'MODEL_SAVE_DIR': 'snapshots/humanseg/aic_v2/','PRETRAINED_MODEL_DIR': 'pretrain/xception65_pretrained/','RESUME_MODEL_DIR': '','SNAPSHOT_EPOCH': 5,'SYNC_BATCH_NORM': False},'TRAINER_ID': 0,'TRAIN_CROP_SIZE': (513, 513)}W1206 15:27:57.736532 487 :252] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 11.0, 
Runtime API Version: 9.0W1206 15:27:57.741894 487 :260] device: 0, cuDNN Version: 7.6.load test model: /home/aistudio/PaddleSeg/pretrained_model/deeplabv3p_xception65_humanseg-12-06 15:27:59,881-WARNING: /home/aistudio/PaddleSeg/pretrained_model/deeplabv3p_xception65_humanseg/model.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]#1 visualize image path: /home/aistudio/work/output_pose/body.png

In[10]:

body_seg_img = mpimg.imread('work/output_pose/body.png')
img_show_bgr(body_seg_img, convert=False)

4. Approach

To make the characters dance, we first parse the image and segment the person, use the extracted figure as the fill region of a word cloud, then composite the filled figure onto a background to produce the effect we want.

First parse the image and segment out the person. Segmentation yields an alpha channel with values in 0-255 (0 fully transparent, 255 opaque), so the larger the value, the more likely the pixel belongs to the body. That gives the fill region; filling it with a word cloud produces the character-body effect. Finally, merge the character body onto the background image and write it out. With the approach clear, we can start building.

4.1 Read the text used for filling

In[11]:

stop_words = set(['https', 'com'])

def get_text_content(text_file_path):
    '''Get the text content used for filling'''
    text_content = ''
    with open(text_file_path, encoding='utf-8') as file:
        text_content = file.read()
    # Data cleaning: keep only Chinese characters, letters, and digits
    text_content_find = re.findall('[\u4e00-\u9fa5a-zA-Z0-9]+', text_content, re.S)
    text_content = ' '.join(text_content_find)
    return text_content

In[12]:

text_content = get_text_content('work/texts/text01.txt')

4.2 Generate the background

In[13]:

def generate_background_img(width, height, font_path, text_content, stop_words, output_file_path):
    '''Generate the background image'''
    wordcloud = WordCloud(font_path=font_path,
                          colormap=colors.ListedColormap(['#f5f6f7']),
                          repeat=True,
                          max_words=20000,
                          prefer_horizontal=0.3,
                          stopwords=stop_words,
                          width=width, height=height,
                          min_font_size=20,
                          background_color='#fefefe',
                          margin=3).generate(text_content)
    wordcloud.to_file(output_file_path)

In[15]:

background_img_path = 'work/img_bg.jpg'
font_path = 'work/texts/fonts/msyhl.ttc'
img_path = "work/imgs/body.jpg"
img = cv2.imread(img_path)
height, width, _ = img.shape
generate_background_img(width, height, font_path, text_content, stop_words, background_img_path)
img_show_bgr(cv2.imread(background_img_path))

In[16]:

def img_analysis(img_path, background_img, font_path, colormap, text_content, stop_words):
    '''Analyze the image and generate a character portrait'''
    img_mask = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
    height, width = img_mask.shape
    # Invert the mask: the person region becomes the fillable area for the word cloud
    for row_index in range(height):
        for col_index in range(width):
            if img_mask[row_index][col_index] == 0:
                img_mask[row_index][col_index] = 255
            else:
                img_mask[row_index][col_index] = 0
    wc = WordCloud(font_path=font_path,
                   repeat=True,
                   background_color='white',
                   min_font_size=1,
                   colormap=colormap,
                   margin=1,
                   max_words=20000,
                   width=width, height=height,
                   prefer_horizontal=0.5,
                   mask=img_mask,
                   stopwords=stop_words,
                   collocation_threshold=1).generate(text_content)  # finally generate the text
    wordcloud_result_array = wc.to_array()
    # Add the background
    for row_index in range(height):
        for col_index in range(width):
            if wordcloud_result_array[row_index][col_index][0] == 255 and wordcloud_result_array[row_index][col_index][1] == 255 and wordcloud_result_array[row_index][col_index][2] == 255:
                wordcloud_result_array[row_index][col_index][0] = background_img[row_index][col_index][0]
                wordcloud_result_array[row_index][col_index][1] = background_img[row_index][col_index][1]
                wordcloud_result_array[row_index][col_index][2] = background_img[row_index][col_index][2]
    return wordcloud_result_array

In[17]:

colormap = colors.ListedColormap(['#ff4000', '#f9a852', '#f69653', '#f38654', '#f07654', '#ed6856', '#ef5956', '#ee4c58'])
background_img = cv2.imread(background_img_path)
background_img = cv2.cvtColor(background_img, cv2.COLOR_BGR2RGB)
img_mask_path = "work/output_pose/body.png"
img_analysis_result = img_analysis(img_mask_path, background_img, font_path, colormap, text_content, stop_words)
img_analysis_result = cv2.cvtColor(img_analysis_result, cv2.COLOR_BGR2RGB)
img_show_bgr(img_analysis_result, size=18)

5. 让字"姐"动起来

具体实现步骤如下:

准备素材将视频中每一帧保存成图片, 并生成mask结果分析图片中的人体姿势, 并转换为字符,输出结果合并图像到视频,得到最终的结果

5.1 准备素材

含有人体动作视频,需要各位自行下载,本教程已经下载好(work/001.mp4)

In[18]:

# Location of the source video
input_video = 'work/videos/001.mp4'

5.2 Save every video frame as an image and generate the mask results

In[19]:

def transform_video_to_image(video_file_path, img_path):
    '''Save every frame of the video as an image'''
    video_capture = cv2.VideoCapture(video_file_path)
    fps = video_capture.get(cv2.CAP_PROP_FPS)
    count = 0
    while True:
        ret, frame = video_capture.read()
        if ret:
            cv2.imwrite(img_path + '%d.jpg' % count, frame)
            count += 1
        else:
            break
    video_capture.release()
    filename_list = os.listdir(img_path)
    with open(os.path.join(img_path, 'img_list.txt'), 'w', encoding='utf-8') as file:
        file.writelines('\n'.join(filename_list))
    print('Saved %d frames from the video' % count)
    return fps

In[]:

# Save every frame of the video as an image
fps = transform_video_to_image(input_video, 'work/mp4_img/')

In[19]:

# Generate the mask result images
!python /home/aistudio/PaddleSeg/pdseg/vis.py \
    --cfg /home/aistudio/work/humanseg.yaml \
    --vis_dir /home/aistudio/work/mp4_img_mask \
    --use_gpu

5.3 Analyze the person in each frame, convert it to a character body, and save the output

In[20]:

def analysis_pose(input_frame_path, output_frame_path, background_img_path, font_path, colormap, text_content, stop_words, is_print=True, is_overwrite=True):
    '''Analyze the person in each frame, convert it to a character body, and write out the result'''
    file_items = os.listdir(input_frame_path)
    file_len = len(file_items)
    background_img = cv2.imread(background_img_path)
    background_img = cv2.cvtColor(background_img, cv2.COLOR_BGR2RGB)
    file_len = len(file_items)
    for i in range(0, file_len):
        file_item = '%d.png' % i
        input_file_path = os.path.join(input_frame_path, '%d.png' % i)
        output_file_path = os.path.join(output_frame_path, '%d.jpg' % i)
        if not is_overwrite and os.path.exists(output_file_path):
            continue
        if is_print:
            print(i, '/', file_len, ' doing', input_file_path)
        img_analysis_result = img_analysis(input_file_path, background_img, font_path, colormap, text_content, stop_words)
        img_analysis_result = cv2.cvtColor(img_analysis_result, cv2.COLOR_BGR2RGB)
        cv2.imwrite(output_file_path, img_analysis_result)

In[21]:

colormap = colors.ListedColormap(['#ff4000', '#f9a852', '#f69653', '#f38654', '#f07654', '#ed6856', '#ef5956', '#ee4c58'])
background_img_path = 'work/img_bg.jpg'
# Analyze the person in each frame, convert it to characters, and write out the result
analysis_pose('work/mp4_img_mask/', 'work/mp4_img_analysis/', background_img_path, font_path, colormap, text_content, stop_words, is_print=True, is_overwrite=False)

5.4 Merge the images back into a video

In[22]:

def combine_image_to_video(comb_path, output_file_path, fps=30, is_print=False):
    '''Merge the images into a video'''
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    file_items = [item for item in os.listdir(comb_path) if item.endswith('.jpg')]
    file_len = len(file_items)
    # print(comb_path, file_items)
    if file_len > 0:
        temp_img = cv2.imread(os.path.join(comb_path, file_items[0]))
        img_height, img_width, _ = temp_img.shape
        out = cv2.VideoWriter(output_file_path, fourcc, fps, (img_width, img_height))
        for i in range(file_len):
            pic_name = os.path.join(comb_path, str(i) + ".jpg")
            if is_print:
                print(i + 1, '/', file_len, ' ', pic_name)
            img = cv2.imread(pic_name)
            out.write(img)
        out.release()

In[23]:

# Merge the images into a video
combine_image_to_video('work/mp4_img_analysis/', 'work/mp4_analysis.mp4', fps)

In[24]:

# Add the audio track; mp4_analysis_result.mp4 is the final output file
! ffmpeg -i work/mp4_analysis.mp4 -i work/videos/001.mp4 -c:v copy -c:a copy work/mp4_analysis_result.mp4 -y

OK, we finally have our dancing characters!

You can download the video to your machine to watch it.
