Assuming you have already made sure that your data pipeline is correct, there are a few things to consider here. I hope one of the points below helps:
1. Choose the right loss function
Binary cross-entropy might lead your network to optimize for all labels at once; if the labels in your images are unbalanced, it can push the network toward predicting all-white, all-gray, or all-black images. Try the dice coefficient loss instead.
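As a minimal sketch of the idea (plain NumPy on flat arrays, not the batched Keras-backend version you would actually plug into `model.compile`), the dice loss rewards overlap between prediction and ground truth rather than per-pixel correctness, so an all-black prediction on a mostly-black mask is still heavily penalized:

```python
import numpy as np

def dice_loss(y_true, y_pred, smooth=1.0):
    """1 - Dice coefficient. `smooth` avoids division by zero on empty masks."""
    intersection = np.sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)
    return 1.0 - dice

# Perfect overlap gives a loss of 0; predicting all background on a
# foreground mask gives a large loss, even though per-pixel accuracy
# could still look deceptively good on unbalanced data.
perfect = dice_loss(np.ones(4), np.ones(4))
all_black = dice_loss(np.ones(4), np.zeros(4))
```

A Keras version would use `K.sum` and `K.flatten` on the tensors instead, but the arithmetic is the same.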
2. Change the line in testGenerator
A line that seems to be an issue in data.py, in the testGenerator method, is the following:
img = img / 255
Change it to:
img = img / 255.

The trailing dot forces floating-point division; without it, Python 2 performs integer division on the image array and flattens almost every pixel to zero.
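To see why the dot matters, here is a small NumPy demonstration (integer floor division stands in for what Python 2's `/` does on integer arrays):

```python
import numpy as np

img = np.array([[0, 128, 255]], dtype=np.uint8)

# Floor division zeroes out almost every pixel -- this is effectively
# what happens on Python 2, where `/` truncates on integer arrays:
bad = img // 255       # [[0, 0, 1]] -- the image content is destroyed

# Floating-point division preserves the intensity gradient in [0, 1]:
good = img / 255.      # [[0.0, 0.502..., 1.0]]
```

Feeding the truncated version to the network means it only ever sees near-constant inputs, which is one way to end up with constant gray predictions.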
3. Reduce learning rate
If your learning rate is too high, you might converge to a poor optimum, which also tends to produce gray-, black-, or white-only predictions.
Try a learning rate around Adam(lr = 3e-5)
and train for a sufficient number of epochs. Print the dice loss, not accuracy, to check your convergence.
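The overshooting effect of a too-large step size can be shown on a toy problem (this is only an illustration of the mechanism, not the U-Net itself): gradient descent on f(x) = x² converges with a small learning rate and diverges with a large one.

```python
def descend(lr, steps=50, x=1.0):
    """Run gradient descent on f(x) = x**2 (gradient is 2x)."""
    for _ in range(steps):
        x -= lr * 2 * x
    return x

small = descend(lr=0.1)   # steps shrink x toward the optimum at 0
large = descend(lr=1.5)   # each step overshoots 0 and doubles |x|
```

With lr = 1.5 every update maps x to -2x, so the iterate explodes instead of converging; a real network does the same thing in many dimensions, often ending up in a degenerate constant-output solution.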
4. Do not use activation functions for the last set of convolutions
For the last set of convolutions, that is 128 -> 64 -> 64 -> 1, no activation function should be used: an activation there causes the values to vanish before the final sigmoid.
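One concrete way to see the damage, sketched in plain Python: if a ReLU sits right before the final sigmoid, every negative logit is clamped to zero, so the sigmoid output can never drop below 0.5 and the network is physically unable to predict confident background — everything saturates at gray.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# A logit of -3 should mean "confident background":
direct = sigmoid(-3.0)          # ~0.047 -- a proper background prediction
clamped = sigmoid(relu(-3.0))   # exactly 0.5 -- stuck at gray forever
```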
5. Your saving method could have a "bug": make sure you scale your image to values between 0 and 255 before saving. skimage usually warns you with a "low contrast image" warning.
import os
from skimage import io, img_as_uint
io.imsave(os.path.join(save_path, "%d_predict.tif" % i), img_as_uint(img))
6. Your saving format could have a "bug": make sure you save your image in a proper format. In my experience, saving as .png gave only black or gray images, whereas .tif worked like a charm.
7. You might just not be training enough. Often you will freak out when your network does not behave the way you would like and abort the training. Chances are, a few more training epochs were exactly what it needed.