Copy with weights can be used when you want to transfer knowledge from one of your models to another model. In machine learning terms this is known as transfer learning. It is useful when you have trained on a big dataset (such as ImageNet) and want to reuse parts of that knowledge to solve other, related problems.
Say you want to tell whether an image depicts a truck or a car. First train your model on ImageNet; the model then learns that a car has wheels. It was never given any explicit information about wheels: it inferred this from lots of pictures of cars carrying the label “car”. The model stores this knowledge in the weights it learns.
Now copy the weights from the first trained model into a new model. The new model can easily be adapted to recognize trucks, since they too have wheels. Your new model can then learn the difference between a car and a truck without needing to train on cars from scratch.
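The weight-copying step above can be sketched in plain Python. This is a toy illustration, not a real framework API: the `Layer` class, the list-of-layers model representation, and the `copy_weights` helper are all hypothetical names chosen for this example. The idea is to copy the learned weights of the early layers and freeze them, so only the new head trains on the new task.

```python
# Toy sketch of transfer learning by weight copying.
# Layer, copy_weights, and the list-based model are hypothetical,
# minimal stand-ins for a real framework's layers and models.

class Layer:
    def __init__(self, weights):
        self.weights = weights          # weight matrix as nested lists
        self.trainable = True

def copy_weights(src_model, dst_model, n_layers):
    """Copy the first n_layers of weights from src into dst and
    freeze them, so only the remaining layers are trained."""
    for i in range(n_layers):
        # deep-copy each row so later training of src can't affect dst
        dst_model[i].weights = [row[:] for row in src_model[i].weights]
        dst_model[i].trainable = False  # keep the transferred knowledge

# Pretrained "car" model: two feature layers plus a classifier head.
pretrained = [Layer([[0.5, 0.1]]), Layer([[0.3]]), Layer([[0.9]])]

# New "truck vs. car" model with freshly initialized weights.
new_model = [Layer([[0.0, 0.0]]), Layer([[0.0]]), Layer([[0.0]])]

copy_weights(pretrained, new_model, n_layers=2)

print(new_model[0].weights)     # feature weights transferred from pretrained
print(new_model[2].trainable)   # the new head stays trainable
```

In a real framework the same pattern applies: transfer the early (general) feature layers, freeze them, and train only the task-specific layers on the new data.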
Another possible application is to build and train an autoencoder and then copy the encoder part (and possibly the decoder part too) separately into a new model.
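Splitting off the encoder can be sketched the same way. Again this is a hypothetical toy representation, not a real framework API: the `Layer` class, the layer names (`enc_1`, `dec_1`, etc.), and the name-prefix convention for identifying encoder layers are assumptions made for this example.

```python
# Toy sketch of extracting the encoder from a trained autoencoder.
# Layer and the "enc_"/"dec_" naming convention are hypothetical.

class Layer:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # weight matrix as nested lists

# A trained autoencoder: the encoder compresses the input,
# the decoder reconstructs it.
autoencoder = [
    Layer("enc_1", [[0.2]]),
    Layer("enc_2", [[0.4]]),   # bottleneck
    Layer("dec_1", [[0.6]]),
    Layer("dec_2", [[0.8]]),
]

# Copy just the encoder layers (deep-copying the weights) into a new
# model, which could then be given its own task-specific head.
encoder = [Layer(l.name, [row[:] for row in l.weights])
           for l in autoencoder if l.name.startswith("enc")]

print([l.name for l in encoder])   # only the encoder layers remain
```

The extracted encoder can then serve as a pretrained feature extractor for a downstream model, exactly as in the car/truck example above.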