Posit AI Blog: Image-to-image translation with pix2pix



What do we need to train a neural network? A common answer is: a model, a cost function, and an optimization algorithm.
(I know: I'm leaving out the most important thing here – the data.)

As computer programs work with numbers, the cost function needs to be pretty specific: We can't just say predict next month's demand for lawn mowers please, and do your best, we have to say something like this: Minimize the squared deviation of the estimate from the target value.

In some cases it may be straightforward to map a task to a measure of error, in others, it may not. Consider the task of generating non-existing objects of a certain kind (like a face, a scene, or a video clip). How do we quantify success?
The trick with generative adversarial networks (GANs) is to let the network learn the cost function.

As shown in Generating images with Keras and TensorFlow eager execution, in a simple GAN the setup is this: One agent, the generator, keeps on producing fake objects. The other, the discriminator, is tasked to tell apart the real objects from the fake ones. For the generator, loss is augmented when its fraud gets discovered, meaning that the generator's cost function depends on what the discriminator does. For the discriminator, loss grows when it fails to correctly tell apart generated objects from authentic ones.

In a GAN of the kind just described, creation starts from white noise. However, in the real world, what's required may be a form of transformation, not creation. Take, for example, colorization of black-and-white images, or conversion of aerials to maps. For applications like these, we condition on additional input: Hence the name, conditional adversarial networks.

Put concretely, this means the generator is passed not (or not only) white noise, but data of a certain input structure, such as edges or shapes. It then has to generate realistic-looking pictures of real objects having those shapes.
The discriminator, too, may receive the shapes or edges as input, in addition to the fake and real objects it is tasked to tell apart.

Here are a few examples of conditioning, taken from the paper we'll be implementing (see below):

Figure from Image-to-Image Translation with Conditional Adversarial Networks (Isola et al. 2016)

In this post, we port to R a Google Colaboratory Notebook using Keras with eager execution. We're implementing the basic architecture from pix2pix, as described by Isola et al. in their 2016 paper (Isola et al. 2016). It's an interesting paper to read as it validates the approach on a bunch of different datasets, and shares results of using different loss families, too:

Figure from Image-to-Image Translation with Conditional Adversarial Networks (Isola et al. 2016)

Prerequisites

The code shown here will work with the current CRAN versions of tensorflow, keras, and tfdatasets. Also, be sure to check that you're using at least version 1.9 of TensorFlow. If that isn't the case, as of this writing, a fresh install will get you version 1.10.
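A minimal sketch of such an install, assuming a standard CRAN setup (adjust to your environment):

# hypothetical upgrade path – adapt to your setup
install.packages("tensorflow")
tensorflow::install_tensorflow()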

When loading libraries, please make sure you're executing the first four lines in the exact order shown below. We need to ensure we're using the TensorFlow implementation of Keras (tf.keras in Python land), and we have to enable eager execution before using TensorFlow in any way.
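A sketch of that setup, under the assumption that the current CRAN releases are installed (the canonical version lives in the companion file linked below):

library(keras)
# use the TensorFlow implementation of Keras (tf.keras)
use_implementation("tensorflow")
library(tensorflow)
# eager execution has to be enabled before TensorFlow is used in any other way
tfe_enable_eager_execution(device_policy = "silent")

library(tfdatasets)
library(purrr)   # for transpose(), used in the training loop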

No need to copy-paste any code snippets – you'll find the complete code (in the order required for execution) here: eager-pix2pix.R.

Dataset

For this post, we're working with one of the datasets used in the paper, a preprocessed version of the CMP Facade Dataset.

Images contain the ground truth – that which we'd like the generator to generate, and the discriminator to correctly detect as authentic – and the input we're conditioning on (a coarse segmentation into object classes), next to each other in the same file.

Figure from https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/

Preprocessing

Obviously, our preprocessing has to split the input images into those parts. That's the first thing that happens in the function below.

After that, action depends on whether we're in the training or testing phase. If we're training, we perform random jittering, by upsizing the image to 286x286 and then cropping back to the original size of 256x256. In about 50% of the cases, we also flip the image left-to-right.

In both cases, training and testing, we normalize the image to the range between -1 and 1.

Note the use of the tf$image module for image-related operations. This is required because the images will be streamed via tfdatasets, which works on TensorFlow graphs.

img_width <- 256L
img_height <- 256L

load_image <- function(image_file, is_train) {

  image <- tf$read_file(image_file)
  image <- tf$image$decode_jpeg(image)
  
  # ground truth and input sit side by side in the same file: split them apart
  w <- as.integer(k_shape(image)[2])
  w2 <- as.integer(w / 2L)
  real_image <- image[ , 1L:w2, ]
  input_image <- image[ , (w2 + 1L):w, ]
  
  input_image <- k_cast(input_image, tf$float32)
  real_image <- k_cast(real_image, tf$float32)

  if (is_train) {
    # random jittering: upsize to 286x286, then randomly crop back to 256x256
    input_image <-
      tf$image$resize_images(input_image,
                             c(286L, 286L),
                             align_corners = TRUE,
                             method = 2)
    real_image <- tf$image$resize_images(real_image,
                                         c(286L, 286L),
                                         align_corners = TRUE,
                                         method = 2)
    
    stacked_image <-
      k_stack(list(input_image, real_image), axis = 1)
    cropped_image <-
      tf$random_crop(stacked_image, size = c(2L, img_height, img_width, 3L))
    c(input_image, real_image) %<-% 
      list(cropped_image[1, , , ], cropped_image[2, , , ])
    
    # flip left-to-right in about 50% of the cases
    if (runif(1) > 0.5) {
      input_image <- tf$image$flip_left_right(input_image)
      real_image <- tf$image$flip_left_right(real_image)
    }
    
  } else {
    input_image <-
      tf$image$resize_images(
        input_image,
        size = c(img_height, img_width),
        align_corners = TRUE,
        method = 2
      )
    real_image <-
      tf$image$resize_images(
        real_image,
        size = c(img_height, img_width),
        align_corners = TRUE,
        method = 2
      )
  }
  
  # normalize to the range [-1, 1]
  input_image <- (input_image / 127.5) - 1
  real_image <- (real_image / 127.5) - 1
  
  list(input_image, real_image)
}

Streaming the data

The images will be streamed via tfdatasets, using a batch size of 1.
Note how the load_image function we defined above is wrapped in tf$py_func to enable accessing tensor values in the usual eager way (which, by default, as of this writing, is not possible with the TensorFlow datasets API).

# change this to where you unpacked the data
# there will be train, val and test subdirectories below
data_dir <- "facades"

buffer_size <- 400
batch_size <- 1
batches_per_epoch <- buffer_size / batch_size

train_dataset <-
  tf$data$Dataset$list_files(file.path(data_dir, "train/*.jpg")) %>%
  dataset_shuffle(buffer_size) %>%
  dataset_map(function(image) {
    tf$py_func(load_image, list(image, TRUE), list(tf$float32, tf$float32))
  }) %>%
  dataset_batch(batch_size)

test_dataset <-
  tf$data$Dataset$list_files(file.path(data_dir, "test/*.jpg")) %>%
  dataset_map(function(image) {
    tf$py_func(load_image, list(image, FALSE), list(tf$float32, tf$float32))
  }) %>%
  dataset_batch(batch_size)

Defining the actors

Generator

First, here's the generator. Let's start with a bird's-eye view.

The generator receives as input a coarse segmentation, of size 256x256, and should produce a nice color image of a facade.
It first successively downsamples the input, down to a minimal size of 1x1. Then, after maximal condensation, it starts upsampling again, until it has reached the required output resolution of 256x256.

During downsampling, as spatial resolution decreases, the number of filters increases. During upsampling, it goes the opposite way.

generator <- function(name = "generator") {
  
  keras_model_custom(name = name, function(self) {
    
    self$down1 <- downsample(64, 4, apply_batchnorm = FALSE)
    self$down2 <- downsample(128, 4)
    self$down3 <- downsample(256, 4)
    self$down4 <- downsample(512, 4)
    self$down5 <- downsample(512, 4)
    self$down6 <- downsample(512, 4)
    self$down7 <- downsample(512, 4)
    self$down8 <- downsample(512, 4)
    
    self$up1 <- upsample(512, 4, apply_dropout = TRUE)
    self$up2 <- upsample(512, 4, apply_dropout = TRUE)
    self$up3 <- upsample(512, 4, apply_dropout = TRUE)
    self$up4 <- upsample(512, 4)
    self$up5 <- upsample(256, 4)
    self$up6 <- upsample(128, 4)
    self$up7 <- upsample(64, 4)
    
    self$final <- layer_conv_2d_transpose(
      filters = 3,
      kernel_size = 4,
      strides = 2,
      padding = "same",
      kernel_initializer = initializer_random_normal(0, 0.2),
      activation = "tanh"
    )
    
    function(x, mask = NULL, training = TRUE) {           # x shape == (bs, 256, 256, 3)
     
      x1 <- x %>% self$down1(training = training)         # (bs, 128, 128, 64)
      x2 <- self$down2(x1, training = training)           # (bs, 64, 64, 128)
      x3 <- self$down3(x2, training = training)           # (bs, 32, 32, 256)
      x4 <- self$down4(x3, training = training)           # (bs, 16, 16, 512)
      x5 <- self$down5(x4, training = training)           # (bs, 8, 8, 512)
      x6 <- self$down6(x5, training = training)           # (bs, 4, 4, 512)
      x7 <- self$down7(x6, training = training)           # (bs, 2, 2, 512)
      x8 <- self$down8(x7, training = training)           # (bs, 1, 1, 512)

      # skip connections: each upsampling step also receives the
      # same-resolution output from the downsampling path
      x9 <- self$up1(list(x8, x7), training = training)   # (bs, 2, 2, 1024)
      x10 <- self$up2(list(x9, x6), training = training)  # (bs, 4, 4, 1024)
      x11 <- self$up3(list(x10, x5), training = training) # (bs, 8, 8, 1024)
      x12 <- self$up4(list(x11, x4), training = training) # (bs, 16, 16, 1024)
      x13 <- self$up5(list(x12, x3), training = training) # (bs, 32, 32, 512)
      x14 <- self$up6(list(x13, x2), training = training) # (bs, 64, 64, 256)
      x15 <- self$up7(list(x14, x1), training = training) # (bs, 128, 128, 128)
      x16 <- self$final(x15)                              # (bs, 256, 256, 3)
      x16
    }
  })
}

How can spatial information be preserved if we downsample all the way down to a single pixel? The generator follows the general principle of a U-Net (Ronneberger, Fischer, and Brox 2015), where skip connections exist from layers earlier in the downsampling process to layers later on the way up.

Figure from (Ronneberger, Fischer, and Brox 2015)

Let's take the line

x15 <- self$up7(list(x14, x1), training = training)

from the call method.

Here, the inputs to self$up7 are x14, which went through all the down- and upsampling, and x1, the output from the very first downsampling step. The former has resolution 64x64, the latter, 128x128. How do they get combined?

That's taken care of by upsample, technically a custom model of its own.
As an aside, we remark how custom models let you pack your code into nice, reusable modules.

upsample <- function(filters,
                     size,
                     apply_dropout = FALSE,
                     name = "upsample") {
  
  keras_model_custom(name = NULL, function(self) {
    
    self$apply_dropout <- apply_dropout
    self$up_conv <- layer_conv_2d_transpose(
      filters = filters,
      kernel_size = size,
      strides = 2,
      padding = "same",
      kernel_initializer = initializer_random_normal(),
      use_bias = FALSE
    )
    self$batchnorm <- layer_batch_normalization()
    if (self$apply_dropout) {
      self$dropout <- layer_dropout(rate = 0.5)
    }
    
    function(xs, mask = NULL, training = TRUE) {
      
      # xs holds the previous upsampling output and the skip connection
      c(x1, x2) %<-% xs
      x <- self$up_conv(x1) %>% self$batchnorm(training = training)
      if (self$apply_dropout) {
        x <- x %>% self$dropout(training = training)
      }
      x <- x %>% layer_activation("relu")
      concat <- k_concatenate(list(x, x2))
      concat
    }
  })
}

x14 is upsampled to double its size, and x1 is appended as is.
The axis of concatenation here is axis 4, the feature map / channels axis. x1 comes with 64 channels, and x14 comes out of layer_conv_2d_transpose with 64 channels, too (because self$up7 has been defined that way). So we end up with an image of resolution 128x128 and 128 feature maps for the output of step x15.

Downsampling, too, is factored out to a model of its own. Here too, the number of filters is configurable.

downsample <- function(filters,
                       size,
                       apply_batchnorm = TRUE,
                       name = "downsample") {
  
  keras_model_custom(name = name, function(self) {
    
    self$apply_batchnorm <- apply_batchnorm
    self$conv1 <- layer_conv_2d(
      filters = filters,
      kernel_size = size,
      strides = 2,
      padding = 'same',
      kernel_initializer = initializer_random_normal(0, 0.2),
      use_bias = FALSE
    )
    if (self$apply_batchnorm) {
      self$batchnorm <- layer_batch_normalization()
    }
    
    function(x, mask = NULL, training = TRUE) {
      
      x <- self$conv1(x)
      if (self$apply_batchnorm) {
        x <- x %>% self$batchnorm(training = training)
      }
      x %>% layer_activation_leaky_relu()
    }
  })
}

Now for the discriminator.

Discriminator

Again, let's start with a bird's-eye view.
The discriminator receives as input both the coarse segmentation and the ground truth. Both are concatenated and processed together. Just like the generator, the discriminator is thus conditioned on the segmentation.

What does the discriminator return? The output of self$final has one channel, but a spatial resolution of 30x30: We're outputting a probability for each of 30x30 image patches (which is why the authors call this a PatchGAN).

The discriminator thus working on small image patches means it only cares about local structure, and consequently, enforces correctness in the high frequencies only. Correctness in the low frequencies is taken care of by an additional L1 component in the generator loss that operates over the whole image (as we'll see below).

discriminator <- function(name = "discriminator") {
  
  keras_model_custom(name = name, function(self) {
    
    self$down1 <- disc_downsample(64, 4, FALSE)
    self$down2 <- disc_downsample(128, 4)
    self$down3 <- disc_downsample(256, 4)
    self$zero_pad1 <- layer_zero_padding_2d()
    self$conv <- layer_conv_2d(
      filters = 512,
      kernel_size = 4,
      strides = 1,
      kernel_initializer = initializer_random_normal(),
      use_bias = FALSE
    )
    self$batchnorm <- layer_batch_normalization()
    self$zero_pad2 <- layer_zero_padding_2d()
    self$final <- layer_conv_2d(
      filters = 1,
      kernel_size = 4,
      strides = 1,
      kernel_initializer = initializer_random_normal()
    )
    
    function(x, y, mask = NULL, training = TRUE) {
      
      # x is the segmentation we condition on, y the (real or generated) facade
      x <- k_concatenate(list(x, y)) %>%            # (bs, 256, 256, channels*2)
        self$down1(training = training) %>%         # (bs, 128, 128, 64)
        self$down2(training = training) %>%         # (bs, 64, 64, 128)
        self$down3(training = training) %>%         # (bs, 32, 32, 256)
        self$zero_pad1() %>%                        # (bs, 34, 34, 256)
        self$conv() %>%                             # (bs, 31, 31, 512)
        self$batchnorm(training = training) %>%
        layer_activation_leaky_relu() %>%
        self$zero_pad2() %>%                        # (bs, 33, 33, 512)
        self$final()                                # (bs, 30, 30, 1)
      x
    }
  })
}

And here's the factored-out downsampling functionality, again providing the means to configure the number of filters.

disc_downsample <- function(filters,
                            size,
                            apply_batchnorm = TRUE,
                            name = "disc_downsample") {
  
  keras_model_custom(name = name, function(self) {
    
    self$apply_batchnorm <- apply_batchnorm
    self$conv1 <- layer_conv_2d(
      filters = filters,
      kernel_size = size,
      strides = 2,
      padding = 'same',
      kernel_initializer = initializer_random_normal(0, 0.2),
      use_bias = FALSE
    )
    if (self$apply_batchnorm) {
      self$batchnorm <- layer_batch_normalization()
    }
    
    function(x, mask = NULL, training = TRUE) {
      x <- self$conv1(x)
      if (self$apply_batchnorm) {
        x <- x %>% self$batchnorm(training = training)
      }
      x %>% layer_activation_leaky_relu()
    }
  })
}

Losses and optimizer

As we said in the introduction, the idea of a GAN is to have the network learn the cost function.
More concretely, the thing it should learn is the balance between two losses, the generator loss and the discriminator loss.
Each of them individually, of course, has to be supplied with a loss function, so there are still decisions to be made.

For the generator, two things factor into the loss: First, does the discriminator debunk my creations as fake?
Second, how big is the absolute deviation of the generated image from the target?
The latter factor does not have to be present in a conditional GAN, but was included by the authors to further encourage proximity to the target, and empirically found to deliver better results.

lambda <- 100 # value chosen by the authors of the paper
generator_loss <- function(disc_judgment, generated_output, target) {
    gan_loss <- tf$losses$sigmoid_cross_entropy(
      tf$ones_like(disc_judgment),
      disc_judgment
    )
    l1_loss <- tf$reduce_mean(tf$abs(target - generated_output))
    gan_loss + (lambda * l1_loss)
  }

The discriminator loss looks as in a standard (unconditional) GAN. Its first component is determined by how accurately it classifies real images as real, while the second depends on its competence in judging fake images as fake.

discriminator_loss <- function(real_output, generated_output) {
  real_loss <- tf$losses$sigmoid_cross_entropy(
    multi_class_labels = tf$ones_like(real_output),
    logits = real_output
  )
  generated_loss <- tf$losses$sigmoid_cross_entropy(
    multi_class_labels = tf$zeros_like(generated_output),
    logits = generated_output
  )
  real_loss + generated_loss
}

For optimization, we rely on Adam for both the generator and the discriminator.

discriminator_optimizer <- tf$train$AdamOptimizer(2e-4, beta1 = 0.5)
generator_optimizer <- tf$train$AdamOptimizer(2e-4, beta1 = 0.5)

The game

We're ready to have the generator and the discriminator play the game!
Below, we use defun to compile the respective R functions into TensorFlow graphs, to speed up computations.

generator <- generator()
discriminator <- discriminator()

generator$call <- tf$contrib$eager$defun(generator$call)
discriminator$call <- tf$contrib$eager$defun(discriminator$call)

We also create a tf$train$Checkpoint object that will allow us to save and restore training weights.

checkpoint_dir <- "./checkpoints_pix2pix"
checkpoint_prefix <- file.path(checkpoint_dir, "ckpt")
checkpoint <- tf$train$Checkpoint(
    generator_optimizer = generator_optimizer,
    discriminator_optimizer = discriminator_optimizer,
    generator = generator,
    discriminator = discriminator
)

Training is a loop over epochs with an inner loop over batches yielded by the dataset.
As usual with eager execution, tf$GradientTape takes care of recording the forward pass and determining the gradients, while the optimizers – there are two of them in this setup – adjust the networks' weights.

Every tenth epoch, we save the weights, and tell the generator to have a go at the first example of the test set, so we can monitor network progress. See generate_images in the companion code for this functionality (a minimal sketch is shown below).
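Roughly, a sketch of what generate_images could look like, assuming batches of shape (1, 256, 256, 3) with values scaled to [-1, 1] (the authoritative version is in eager-pix2pix.R):

# minimal sketch (assumption): plot input, ground truth and prediction side by side
generate_images <- function(model, input, target, name) {
  prediction <- model(input, training = TRUE)
  png(paste0(name, ".png"), width = 900, height = 300)
  par(mfcol = c(1, 3), mar = c(1, 1, 2, 1))
  images <- list(input, target, prediction)
  titles <- c("input", "ground truth", "generated")
  for (i in seq_along(images)) {
    # drop the batch dimension and rescale from [-1, 1] to [0, 1]
    img <- (as.array(images[[i]])[1, , , ] + 1) / 2
    plot(as.raster(img))
    title(titles[i])
  }
  dev.off()
}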

train <- function(dataset, num_epochs) {
  
  for (epoch in 1:num_epochs) {
    total_loss_gen <- 0
    total_loss_disc <- 0
    iter <- make_iterator_one_shot(dataset)
    
    until_out_of_range({
      batch <- iterator_get_next(iter)
      input_image <- batch[[1]]
      target <- batch[[2]]
      
      with(tf$GradientTape() %as% gen_tape, {
        with(tf$GradientTape() %as% disc_tape, {
          
          gen_output <- generator(input_image, training = TRUE)
          disc_real_output <-
            discriminator(input_image, target, training = TRUE)
          disc_generated_output <-
            discriminator(input_image, gen_output, training = TRUE)
          gen_loss <-
            generator_loss(disc_generated_output, gen_output, target)
          disc_loss <-
            discriminator_loss(disc_real_output, disc_generated_output)
          total_loss_gen <- total_loss_gen + gen_loss
          total_loss_disc <- total_loss_disc + disc_loss
        })
      })
      
      generator_gradients <- gen_tape$gradient(gen_loss,
                                               generator$variables)
      discriminator_gradients <- disc_tape$gradient(disc_loss,
                                                    discriminator$variables)
      
      generator_optimizer$apply_gradients(transpose(list(
        generator_gradients,
        generator$variables
      )))
      discriminator_optimizer$apply_gradients(transpose(
        list(discriminator_gradients,
             discriminator$variables)
      ))
      
    })
    
    cat("Epoch ", epoch, "\n")
    cat("Generator loss: ",
        total_loss_gen$numpy() / batches_per_epoch,
        "\n")
    cat("Discriminator loss: ",
        total_loss_disc$numpy() / batches_per_epoch,
        "\n\n")
    
    if (epoch %% 10 == 0) {
      test_iter <- make_iterator_one_shot(test_dataset)
      batch <- iterator_get_next(test_iter)
      input <- batch[[1]]
      target <- batch[[2]]
      generate_images(generator, input, target, paste0("epoch_", epoch))
    }
    
    if (epoch %% 10 == 0) {
      checkpoint$save(file_prefix = checkpoint_prefix)
    }
  }
}

if (!restore) {
  train(train_dataset, 200)
}
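Conversely, to pick up from a saved run, the checkpoint can be restored; a sketch, assuming the restore flag set earlier in the full script:

# reload the latest saved weights into generator, discriminator and optimizers
if (restore) {
  checkpoint$restore(tf$train$latest_checkpoint(checkpoint_dir))
}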

The results

What has the network learned?

Here's a quite typical result from the test set. It doesn't look so bad.

Here's another one. Interestingly, the colors used in the fake image match the previous one's quite well, even though we used an additional L1 loss to penalize deviations from the original.

This pick from the test set again shows similar hues, and it may already convey an impression one gets when going through the complete test set: The network has not just learned some balance between creatively turning a coarse mask into a detailed image on the one hand, and reproducing a concrete example on the other. It has also internalized the main architectural style present in the dataset.

For an extreme example, take this one. The mask leaves an enormous amount of freedom, while the target image is a quite untypical (perhaps the most untypical) pick from the test set. The result is a structure that could represent a building, or part of a building, of specific texture and color shades.

Conclusion

When we say the network has internalized the dominant style of the training set, is that a bad thing? (We're used to thinking in terms of overfitting on the training set.)

With GANs though, one could say it all depends on the purpose. If it doesn't fit our goal, one thing we could try is training on several datasets at the same time.

Again depending on what we want to achieve, another weakness could be the lack of stochasticity in the model, as acknowledged by the authors of the paper themselves. This is hard to avoid when working with paired datasets like the ones used in pix2pix. An interesting alternative is CycleGAN (Zhu et al. 2017), which lets you transfer style between complete datasets without using paired instances:

Figure from Zhu et al. (2017)

Finally, closing on a more technical note, you may have noticed the prominent checkerboard effects in the above fake examples. This phenomenon (and ways to address it) is beautifully explained in a 2016 article on distill.pub (Odena, Dumoulin, and Olah 2016).
In our case, it will mostly be due to the use of layer_conv_2d_transpose for upsampling.

As per the authors (Odena, Dumoulin, and Olah 2016), a better alternative is upsizing followed by padding and (standard) convolution.
If you're interested, it should be straightforward to modify the example code to use tf$image$resize_images (with ResizeMethod.NEAREST_NEIGHBOR as recommended by the authors), tf$pad and layer_conv_2d.
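As an illustration only (the name and details are assumptions, not part of the companion code), such an upsampling step might look roughly like this:

# hypothetical replacement for layer_conv_2d_transpose in upsample():
# nearest-neighbor resize to double the resolution, then a standard convolution
# (in the actual model, the conv layer would be created once in the constructor
# and only called here)
resize_conv <- function(x, filters, size) {
  new_h <- as.integer(k_shape(x)[2]) * 2L
  new_w <- as.integer(k_shape(x)[3]) * 2L
  x %>%
    tf$image$resize_images(
      size = c(new_h, new_w),
      method = tf$image$ResizeMethod$NEAREST_NEIGHBOR
    ) %>%
    layer_conv_2d(
      filters = filters,
      kernel_size = size,
      strides = 1,
      padding = "same"
    )
}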

Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2016. “Image-to-Image Translation with Conditional Adversarial Networks.” CoRR abs/1611.07004. http://arxiv.org/abs/1611.07004.
Odena, Augustus, Vincent Dumoulin, and Chris Olah. 2016. “Deconvolution and Checkerboard Artifacts.” Distill. https://doi.org/10.23915/distill.00003.
Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. 2015. “U-Net: Convolutional Networks for Biomedical Image Segmentation.” CoRR abs/1505.04597. http://arxiv.org/abs/1505.04597.
Zhu, Jun-Yan, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks.” CoRR abs/1703.10593. http://arxiv.org/abs/1703.10593.
