EfficientNet-V2
This study trained EfficientNet-V2 to classify images. EfficientNet is a
convolutional neural network architecture that uniformly scales network depth,
width, and input resolution with a fixed compound coefficient, rather than
scaling any single dimension in isolation (Tan and Le 2019). EfficientNet-V2 is
an improved version with faster training and better parameter efficiency than
the original EfficientNet (Tan and Le 2021); it employs training-aware neural
architecture search (NAS) to jointly optimize model accuracy, model size, and
training speed.
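For reference, the compound scaling rule of Tan and Le (2019) ties the depth,
width, and resolution multipliers d, w, and r to a single coefficient φ:

\[
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi},
\qquad \text{subject to } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,
\quad \alpha, \beta, \gamma \geq 1,
\]

where the constants α, β, and γ are determined by a small grid search on the
baseline network.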
In this study, EfficientNetV2-B0 was used as the network, and fine-tuning was
performed starting from a model pre-trained on the ImageNet-21k data set.
Training was run for 50 epochs with a batch size of 32, Adam was used as the
optimizer, and the dropout rate was set to 0.3. Early stopping was employed to
prevent overfitting: training was terminated automatically when the validation
loss did not improve by more than 0.001 for five consecutive epochs, and the
weights from the epoch with the lowest validation loss were retained. These
analyses were
performed on an NVIDIA DGX Station A100. Overall accuracy was used as the
evaluation metric.
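
To make this configuration concrete, the following is a minimal fine-tuning
sketch written with TensorFlow/Keras; it illustrates the stated settings under
assumptions rather than reproducing the exact pipeline of the study. The class
count, input resolution, and datasets (num_classes, image_size, train_ds,
val_ds) are placeholders; the built-in EfficientNetV2B0 weights in
tf.keras.applications are pre-trained on ImageNet-1k rather than ImageNet-21k;
and the Adam learning rate is left at its default because none is reported.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the study's data pipeline (random images and labels);
# the class count, input size, and datasets are placeholders, not the study's data.
num_classes = 10
image_size = (224, 224)

def dummy_dataset(n):
    x = (np.random.rand(n, *image_size, 3) * 255.0).astype("float32")
    y = np.random.randint(0, num_classes, size=n)
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(32)  # batch size 32

train_ds, val_ds = dummy_dataset(64), dummy_dataset(32)

# EfficientNetV2-B0 backbone. Note: tf.keras.applications ships ImageNet-1k
# ("imagenet") weights; the study fine-tuned from ImageNet-21k pre-training.
backbone = tf.keras.applications.EfficientNetV2B0(
    include_top=False,
    weights="imagenet",
    input_shape=image_size + (3,),
    pooling="avg",
)
backbone.trainable = True  # fine-tune the whole network

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),  # dropout rate 0.3
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(),  # Adam optimizer (default learning rate assumed)
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],  # overall accuracy
)

# Early stopping: terminate when val_loss fails to improve by more than 0.001
# for five consecutive epochs, and keep the weights from the best epoch.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    min_delta=0.001,
    patience=5,
    restore_best_weights=True,
)

model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stopping])
```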