
A small example: generating 2D samples with a GAN in just 130 lines of code


Leiphone (雷锋网) editor's note: Leiphone previously translated an English tutorial explaining in detail how to implement a GAN (Generative Adversarial Network) in 50 lines of code on the PyTorch platform; see "GAN 很复杂?如何用不到 50 行代码训练 GAN" ("Is a GAN complicated? How to train a GAN in under 50 lines of code"). Recently a developer pointed out the limitations of the "50-line GAN model" described there and, building on it, produced an improved version: the "GAN for 2D samples in 130 lines of code" presented in this article. The article was originally published in a Zhihu column by 達聞西 and is republished with permission.

The problem with the 50-line GAN code

Dev Nag's 50-line GAN is probably the most widely circulated, simplest small example of a GAN on the internet. It uses one-dimensional uniform samples as the latent-space input, which the generator network transforms into samples from a Gaussian distribution. The structure is very clear, but there is one odd quirk: the discriminator's input is not an individual one-dimensional sample; instead, the entire mini-batch is treated as a single sample whose dimensionality equals the batch size (in the code, the batch size equals the "cardinality"). In other words, what the discriminator has to judge is not a one-dimensional target distribution, but a distribution with as many dimensions as the batch size:

...
d_input_size = 100   # Minibatch size - cardinality of distributions
...

class Discriminator(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Discriminator, self).__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))
        x = F.elu(self.map2(x))
        return F.sigmoid(self.map3(x))

...

D = Discriminator(input_size=d_input_func(d_input_size), hidden_size=d_hidden_size, output_size=d_output_size)

...

for epoch in range(num_epochs):
    for d_index in range(d_steps):
        # 1. Train D on real+fake
        D.zero_grad()

        # 1A: Train D on real
        d_real_data = Variable(d_sampler(d_input_size))
        d_real_decision = D(preprocess(d_real_data))
        d_real_error = criterion(d_real_decision, Variable(torch.ones(1)))  # ones = true
        d_real_error.backward()  # compute/store gradients, but don't change params

        # 1B: Train D on fake
        d_gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
        d_fake_data = G(d_gen_input).detach()  # detach to avoid training G on these labels
        d_fake_decision = D(preprocess(d_fake_data.t()))
        d_fake_error = criterion(d_fake_decision, Variable(torch.zeros(1)))  # zeros = fake
        d_fake_error.backward()
        d_optimizer.step()  # Only optimizes D's parameters; changes based on stored gradients from backward()

    for g_index in range(g_steps):
        # 2. Train G on D's response (but DO NOT train D on these labels)
        G.zero_grad()

        gen_input = Variable(gi_sampler(minibatch_size, g_input_size))
        g_fake_data = G(gen_input)
        dg_fake_decision = D(preprocess(g_fake_data.t()))
        g_error = criterion(dg_fake_decision, Variable(torch.ones(1)))  # we want to fool, so pretend it's all genuine
        g_error.backward()
        g_optimizer.step()  # Only optimizes G's parameters

...

It is unclear whether this was an oversight or deliberate, but the result is that even such a simple example does not converge well. The author apparently noticed the convergence problem himself and wanted to feed in variance information as well, so he added a preprocessing function (decorate_with_diffs) that computes each sample's squared distance from the centre of the mini-batch and passes it to the discriminator as extra input, which actually increases the input dimensionality even further. The outcome, of course, is that with or without this variance information the model only barely converges, and is unstable either way; even the generated sample distribution shown in the author's own post is not satisfactory.
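For reference, here is a minimal sketch of what such a preprocessing step does; this is a rough reconstruction based on the description above, not Dev Nag's exact decorate_with_diffs function. Each value's squared deviation from the batch mean is appended to the data as extra features, so the input fed to the discriminator doubles in dimension.

import torch

def decorate_with_diffs_sketch(data, exponent=2.0):
    # data: a (1, N) tensor holding the whole mini-batch as one "sample"
    mean = torch.mean(data)                    # centre of the mini-batch
    diffs = torch.pow(data - mean, exponent)   # squared distance of each value from the centre
    return torch.cat([data, diffs], 1)         # original values plus the extra "variance" features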

If you convert this code directly to two dimensions, you will find that, apart from simple symmetric distributions, it basically cannot generate anything else.

In theory a neural network is a universal function approximator: given enough capacity, learning a distribution of any dimensionality is not a problem. But this formulation clearly makes convergence far harder than it needs to be. A more natural approach is to have the discriminator accept a single two-dimensional sample at a time and learn the distribution information through the batch size or over multiple iterations, which is exactly what the modified code later in this post does.

Also: this code is actually 130 lines long.

Sampling from a custom 2D distribution

Even so, Dev Nag's code provides a good framework for understanding and experimenting with GANs. With a few modifications we can get code that is better suited to intuitive demonstration and converges more easily, which is the example in this article.

From a visualization standpoint, two dimensions are obviously more intuitive than one, so we use two-dimensional samples. The first step, of course, is to pick a target distribution. For a 2D example we want as much freedom as possible in how the distribution is defined, so the idea here is to define the probability density via a grayscale image and then draw samples from it (the demo below uses 'batman.jpg' as the density image).

In the 2D case, one way to implement this sampling is: compute the marginal probability along one dimension, plus an approximate conditional probability along the other. Concretely, treat the pixel intensities of the image (white = high) as the relative magnitude of the probability density, sum along x to obtain the marginal probability density over y, and then, for each y position, approximate the conditional probability of x given y. To draw a sample, first sample a y value and then sample an x value; this gives approximate samples from the distribution described by the image. I won't go into further detail here; see the code:

from functools import partial

import numpy
from skimage import transform

EPS = 1e-6
RESOLUTION = 0.001
num_grids = int(1/RESOLUTION+0.5)

def generate_lut(img):
    """
    linear approximation of CDF & marginal
    :param density_img:
    :return: lut_y, lut_x
    """
    density_img = transform.resize(img, (num_grids, num_grids))
    # sum each row: unnormalized marginal density over y (image rows)
    x_accumlation = numpy.sum(density_img, axis=1)
    sum_xy = numpy.sum(x_accumlation)

    # piecewise-linear approximation of the marginal CDF of y, stored as (y, CDF) pairs
    y_cdf_of_accumulated_x = [[0., 0.]]
    accumulated = 0
    for ir, i in enumerate(range(num_grids-1, -1, -1)):
        accumulated += x_accumlation[i]
        if accumulated == 0:
            y_cdf_of_accumulated_x[0][0] = float(ir+1)/float(num_grids)
        elif EPS < accumulated < sum_xy - EPS:
            y_cdf_of_accumulated_x.append([float(ir+1)/float(num_grids), accumulated/sum_xy])
        else:
            break
    y_cdf_of_accumulated_x.append([float(ir+1)/float(num_grids), 1.])
    y_cdf_of_accumulated_x = numpy.array(y_cdf_of_accumulated_x)

    # for every row (y position), an approximate conditional CDF of x given y
    x_cdfs = []
    for j in range(num_grids):
        x_freq = density_img[num_grids-j-1]
        sum_x = numpy.sum(x_freq)
        x_cdf = [[0., 0.]]
        accumulated = 0
        for i in range(num_grids):
            accumulated += x_freq[i]
            if accumulated == 0:
                x_cdf[0][0] = float(i+1) / float(num_grids)
            elif EPS < accumulated < sum_xy - EPS:
                x_cdf.append([float(i+1)/float(num_grids), accumulated/sum_x])
            else:
                break
        x_cdf.append([float(i+1)/float(num_grids), 1.])
        if accumulated > EPS:
            x_cdf = numpy.array(x_cdf)
            x_cdfs.append(x_cdf)
        else:
            x_cdfs.append(None)  # row with (essentially) no density: nothing to sample

    # lookup tables are inverse CDFs: xp holds the CDF values and fp the coordinates,
    # so interpolating a uniform random number through them inverts the CDF
    y_lut = partial(numpy.interp, xp=y_cdf_of_accumulated_x[:, 1], fp=y_cdf_of_accumulated_x[:, 0])
    x_luts = [partial(numpy.interp, xp=x_cdfs[i][:, 1], fp=x_cdfs[i][:, 0]) if x_cdfs[i] is not None else None for i in range(num_grids)]

    return y_lut, x_luts

def sample_2d(lut, N):
    y_lut, x_luts = lut
    u_rv = numpy.random.random((N, 2))  # N pairs of uniform random numbers
    samples = numpy.zeros(u_rv.shape)
    for i, (x, y) in enumerate(u_rv):
        ys = y_lut(y)                    # invert the marginal CDF to get a y coordinate
        x_bin = int(ys/RESOLUTION)       # pick the row that y falls into
        xs = x_luts[x_bin](x)            # invert that row's conditional CDF to get x
        samples[i][0] = xs
        samples[i][1] = ys
    return samples

if __name__ == '__main__':
    from skimage import io
    density_img = io.imread('batman.jpg', True)
    lut_2d = generate_lut(density_img)
    samples = sample_2d(lut_2d, 10000)

    from matplotlib import pyplot
    fig, (ax0, ax1) = pyplot.subplots(ncols=2, figsize=(9, 4))
    fig.canvas.set_window_title('Test 2D Sampling')
    ax0.imshow(density_img, cmap='gray')
    ax0.xaxis.set_major_locator(pyplot.NullLocator())
    ax0.yaxis.set_major_locator(pyplot.NullLocator())

    ax1.axis('equal')
    ax1.axis([0, 1, 0, 1])
    ax1.plot(samples[:, 0], samples[:, 1], 'k,')
    pyplot.show()
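The core trick in generate_lut and sample_2d above is inverse-transform sampling: numpy.interp is called with the CDF values as xp and the coordinates as fp, so pushing a uniform random number through it approximately inverts the CDF. Here is a minimal one-dimensional sketch of the same idea (the density values are hypothetical, purely for illustration):

import numpy

# a discrete density over 4 bins on [0, 1] (hypothetical values)
density = numpy.array([0.1, 0.4, 0.4, 0.1])
edges = numpy.linspace(0.0, 1.0, len(density) + 1)                       # bin edges: 0, 0.25, ..., 1
cdf = numpy.concatenate([[0.0], numpy.cumsum(density)]) / density.sum()  # CDF evaluated at each edge

# inverse CDF via interpolation: note that xp holds CDF values and fp holds positions
u = numpy.random.random(10000)               # uniform samples
samples = numpy.interp(u, xp=cdf, fp=edges)  # approximately distributed according to `density`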

A small 2D GAN example

Although it can be found everywhere online, here is the GAN objective once more:
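This is the standard minimax objective from Goodfellow et al.'s original GAN paper, where D is the discriminator, G the generator, p_data the real data distribution and p_z the latent (noise) distribution:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$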

It is a zero-sum game of mutual pursuit, and Dev Nag's code reflects this very clearly: train the discriminator for a few steps, then the generator for a few steps, back and forth. As described in the previous section, the example in this article builds on Dev Nag's code but changes the discriminator so that, instead of receiving a whole batch as a single input, it receives one two-dimensional sample at a time, with the loss computed over the multiple samples in each batch. The GAN part of the training code is as follows:

DIMENSION = 2
...
generator = SimpleMLP(input_size=z_dim, hidden_size=args.g_hidden_size, output_size=DIMENSION)
discriminator = SimpleMLP(input_size=DIMENSION, hidden_size=args.d_hidden_size, output_size=1)
...

for train_iter in range(args.iterations):
    for d_index in range(args.d_steps):
        # 1. Train D on real+fake
        discriminator.zero_grad()

        # 1A: Train D on real
        real_samples = sample_2d(lut_2d, bs)
        d_real_data = Variable(torch.Tensor(real_samples))
        d_real_decision = discriminator(d_real_data)
        labels = Variable(torch.ones(bs))
        d_real_loss = criterion(d_real_decision, labels)  # ones = true

        # 1B: Train D on fake
        latent_samples = torch.randn(bs, z_dim)
        d_gen_input = Variable(latent_samples)
        d_fake_data = generator(d_gen_input).detach()  # detach to avoid training G on these labels
        d_fake_decision = discriminator(d_fake_data)
        labels = Variable(torch.zeros(bs))
        d_fake_loss = criterion(d_fake_decision, labels)  # zeros = fake

        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        d_optimizer.step()  # Only optimizes D's parameters; changes based on stored gradients from backward()

    for g_index in range(args.g_steps):
        # 2. Train G on D's response (but DO NOT train D on these labels)
        generator.zero_grad()

        latent_samples = torch.randn(bs, z_dim)
        g_gen_input = Variable(latent_samples)
        g_fake_data = generator(g_gen_input)
        g_fake_decision = discriminator(g_fake_data)
        labels = Variable(torch.ones(bs))
        g_loss = criterion(g_fake_decision, labels)  # we want to fool, so pretend it's all genuine
        g_loss.backward()
        g_optimizer.step()  # Only optimizes G's parameters

    ...
...
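The excerpt above refers to a SimpleMLP class whose definition is not shown. Here is a minimal sketch of what such a network could look like; this is an assumption for illustration, not the author's exact definition, mirroring the three-layer MLP of Dev Nag's Discriminator, with a sigmoid output that works both for the discriminator's probability and for generating coordinates in [0, 1]:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleMLP, self).__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))
        x = F.elu(self.map2(x))
        return torch.sigmoid(self.map3(x))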

Compared with Dev Nag's version, besides the changes to the discriminator and to the sample dimensionality described above, visualization code has been added to make the demo easier to follow intuitively. For example, to generate a zigzag-shaped distribution from a two-dimensional Gaussian latent distribution, run:

python gan_demo.py inputs/zig.jpg

A visualization of the training process, along with more visualization examples, can be found in the original post.
