Hi,
When I use my own images to pretrain the encoders, I find that in the pretrainMain function the loss becomes NaN after the first 1000 iterations of the first-layer training.
In the first-layer pretraining, the network consists of all the encoder layers plus the last layer of the decoder, and the loss is the Euclidean distance between the input data and the decoder output.
Is this loss OK?
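
For reference, here is a minimal sketch of what I understand the first-layer pretraining loss to be. The framework, layer sizes, and names here are my assumptions for illustration, not the repo's actual code:

```python
# Minimal sketch (my assumption, not the actual pretrainMain code):
# first-layer pretraining where all encoder layers plus only the last
# decoder layer are trained with a Euclidean (L2) reconstruction loss.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # all encoder layers
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
last_decoder = nn.Linear(64, 784)   # only the last layer of the decoder

x = torch.rand(32, 784)             # a batch of flattened images in [0, 1]
recon = last_decoder(encoder(x))

# Euclidean distance between input and reconstruction, averaged over the batch
loss = torch.norm(recon - x, dim=1).mean()
loss.backward()
```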