I have a more general question regarding the inverted residual blocks used in MobileNet and EfficientNet. I have a classification task on an image dataset of relatively low complexity, so I chose an architecture with few parameters (EfficientNet-B0). However, in terms of validation loss it overfits, while a shallow ResNet, ResNeXt, etc. works much better. Those architectures use regular residual blocks and therefore have more parameters. So it seems there is no direct relation between parameter count and model complexity here. Can someone please explain what I am missing?
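To make the parameter comparison concrete, here is a minimal sketch in plain Python (BatchNorm and bias terms omitted, and the expansion factor 6 is the default used in MobileNetV2-style blocks) counting the weights in a regular residual block (two 3x3 convolutions) versus an inverted residual block (1x1 expand, 3x3 depthwise, 1x1 project) at the same channel width:

```python
def regular_residual_params(c):
    # Two 3x3 convolutions, c -> c channels each: 2 * (3*3*c*c) weights
    return 2 * (3 * 3 * c * c)

def inverted_residual_params(c, t=6):
    # 1x1 pointwise expansion: c -> t*c channels
    expand = c * (t * c)
    # 3x3 depthwise convolution: one 3x3 filter per channel
    depthwise = 3 * 3 * (t * c)
    # 1x1 pointwise projection: t*c -> c channels
    project = (t * c) * c
    return expand + depthwise + project

for c in (64, 128):
    print(c, regular_residual_params(c), inverted_residual_params(c))
# At c=64: regular = 73728 weights, inverted = 52608 weights
```

Because the depthwise convolution factorizes the 3x3 spatial filtering away from the channel mixing, the inverted block grows roughly as 12c^2 (for t=6) versus 18c^2 for the regular block, so at equal width it is cheaper despite the 6x internal expansion. Fewer parameters alone does not guarantee less overfitting, though, which is presumably the crux of the question.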
This is a very interesting question; I would also be interested in an answer.