When we build the GAN model we set discriminator.trainable = False. Now when I call discriminator.summary() it reports 0 trainable parameters, and I cannot understand why. I had already compiled my discriminator model before making this change. Yet my GAN somehow works absolutely fine: I am able to generate accurate images.
Setting discriminator.trainable = False marks all of the discriminator's layers as non-trainable, which is why summary() reports 0 trainable parameters. Your GAN still works because in classic Keras the trainable state is captured at compile time: since you compiled the discriminator while trainable was still True, its own train_on_batch keeps updating the weights, while the combined GAN model (compiled after freezing) leaves them untouched.
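A minimal sketch of the first half of this (assuming TensorFlow/Keras is installed; the tiny layer sizes are placeholders, not the tutorial's architecture): after flipping the flag, the model's list of trainable weights is empty, which is exactly what summary() counts.

```python
from tensorflow import keras

# A toy stand-in for the discriminator (placeholder sizes).
disc = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
disc.compile(loss="binary_crossentropy", optimizer="adam")

print(len(disc.trainable_weights))  # kernel + bias for each Dense layer -> 4

disc.trainable = False              # freezes every layer in the model
print(len(disc.trainable_weights))  # 0 -> summary() now reports 0 trainable params
```

summary() derives its "Trainable params" count from this trainable_weights list, so the 0 you see is expected and does not by itself mean training stopped working.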
Hey @raunaqsingh10, yes, theoretically the code in the video is a little wrong. I wrote my own code and made sure to correct this mistake:
for step in tqdm(range(NO_OF_BATCHES)):
    # randomly select 50% real images
    idx = np.random.randint(0, X_Train.shape[0], HALF_BATCH_SIZE)
    real_imgs = X_Train[idx]

    # generate 50% fake images
    noise = np.random.normal(0, 1, size=(HALF_BATCH_SIZE, NOISE_DIM))
    fake_imgs = generator.predict(noise)

    # one-sided label smoothing: works well in practice
    real_y = np.ones((HALF_BATCH_SIZE, 1)) * 0.9
    fake_y = np.zeros((HALF_BATCH_SIZE, 1))

    # train the discriminator on real and fake images
    discriminator.trainable = True
    generator.trainable = False
    d_loss_real = discriminator.train_on_batch(real_imgs, real_y)  # updates the discriminator's weights
    d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_y)
    d_loss = 0.5 * d_loss_real + 0.5 * d_loss_fake
    epoch_d_loss += d_loss

    # train the generator (complete model = generator + frozen discriminator)
    discriminator.trainable = False
    generator.trainable = True
    noise = np.random.normal(0, 1, size=(BATCH_SIZE, NOISE_DIM))
    real_y = np.ones((BATCH_SIZE, 1))
    g_loss = model.train_on_batch(noise, real_y)
    epoch_g_loss += g_loss
My model was also able to produce satisfactory results, though it needed a little more time to train.
Hope this resolves your doubt.
Please mark the doubt as resolved in the "my doubts" section.