CNN model's acc and val_acc won't improve, stuck at around 0.6

As the title says: after tuning all sorts of hyperparameters and changing the hidden-layer structure, accuracy still won't rise above 70%.
Feeding the same data into transfer learning reaches an accuracy of 0.95, so I've confirmed the dataset itself is fine.
Dataset: https://www.kaggle.com/datasets/biancaferreira/african-wildlife
Split: 60% train / 20% validation / 20% test (901 train, 300 validation, 300 test images).
I'd appreciate help figuring out whether this is a hyperparameter problem or a model problem.

One more issue: once the training set is normalized or standardized, the val loss stops converging (it climbs from single digits into the tens or hundreds) and val acc won't rise either.
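By "normalized or standardized" I mean roughly the following (a sketch, not my exact code; it assumes tr and va are NumPy image arrays and that the same transform is applied to both sets):

# Variant 1: min-max normalization to [0, 1]
tr = tr.astype("float32") / 255.0
va = va.astype("float32") / 255.0  # same scaling applied to the validation set

# Variant 2: z-score standardization, using training-set statistics only
mean, std = tr.mean(), tr.std()
tr = (tr - mean) / std
va = (va - mean) / std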

import os
import math

import tensorflow as tf
from tensorflow.keras import models, layers, regularizers
from tensorflow.keras.layers import BatchNormalization, Activation
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(  # data preprocessing (augmentation only)
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2
)
os.makedirs("/content/train/", exist_ok=True)
train_generator = train_datagen.flow(tr, tr_la,
                                     batch_size=16)
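A quick sanity check on what the generator yields (sketch; assumes tr has shape (901, 256, 256, 3) and tr_la holds integer labels):

x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)  # expect (16, 256, 256, 3) and (16,)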

model = models.Sequential()

# Block 1: stride-2 conv + BN + ReLU + 2x2 max-pool
model.add(layers.Conv2D(64, (3, 3), strides=2, input_shape=(256, 256, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(layers.MaxPooling2D((2, 2)))

# Block 2
model.add(layers.Conv2D(128, (3, 3), strides=2))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(layers.MaxPooling2D((2, 2)))

# Block 3
model.add(layers.Conv2D(256, (3, 3), strides=2))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(layers.MaxPooling2D((2, 2)))

# Classifier head: two L2-regularized dense layers with dropout, 4-way softmax output
model.add(layers.Flatten())
model.add(layers.Dense(64, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
model.add(layers.Dropout(0.25))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
model.add(layers.Dropout(0.25))
model.add(layers.Dense(4, activation='softmax'))

model.summary()
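For reference, with Keras' default 'valid' padding each block shrinks the feature map sharply; a quick check of the spatial sizes (this just reproduces what model.summary() reports):

def conv_out(n, k=3, s=2):
    # 'valid' padding: output size = floor((n - k) / s) + 1
    return (n - k) // s + 1

n = 256
for _ in range(3):
    n = conv_out(n)  # stride-2 3x3 conv
    n = n // 2       # 2x2 max-pool
    print(n)         # prints 63, 15, 3
# Flatten therefore sees 3 * 3 * 256 = 2304 features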

Earlystopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0,
    patience=10, verbose=1, mode='max',
    baseline=None, restore_best_weights=False
)

model.compile(optimizer='Adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
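Since the loss is sparse categorical cross-entropy and the last layer is a 4-way softmax, the labels should be integer class indices 0-3 rather than one-hot vectors (quick check, sketch):

print(tr_la[:5], tr_la.shape)  # expect values in {0, 1, 2, 3} and shape (901,)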

initial_learning_rate = 0.001
def lr_step_decay(epoch, lr):
    # step decay: multiply the initial LR by 0.1 every 5 epochs
    drop_rate = 0.1
    epochs_drop = 5.0
    return initial_learning_rate * math.pow(drop_rate, math.floor(epoch / epochs_drop))
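Per epoch this schedule works out as follows (it matches the lr values printed in the log below):

for epoch in range(15):
    print(epoch + 1, lr_step_decay(epoch, None))
# epochs 1-5: 1e-3, epochs 6-10: 1e-4, epochs 11-15: 1e-5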

history = model.fit(train_generator, epochs=15,
                    callbacks=[LearningRateScheduler(lr_step_decay, verbose=1), Earlystopping],
                    validation_data=(va, va_la))
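The test split isn't touched in the code above; evaluating it afterwards would look like this (sketch; te and te_la are hypothetical names for the 300 test images and their labels):

test_loss, test_acc = model.evaluate(te, te_la)  # te / te_la: hypothetical test arrays
print(test_acc)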

I ran it with the parameters above and got:

Epoch 1: LearningRateScheduler setting learning rate to 0.001.
Epoch 1/15
57/57 [==============================] - 15s 254ms/step - loss: 1.6336 - accuracy: 0.3263 - val_loss: 1.8376 - val_accuracy: 0.2900 - lr: 0.0010

Epoch 2: LearningRateScheduler setting learning rate to 0.001.
Epoch 2/15
57/57 [==============================] - 14s 252ms/step - loss: 1.3830 - accuracy: 0.4340 - val_loss: 2.3658 - val_accuracy: 0.3533 - lr: 0.0010

Epoch 3: LearningRateScheduler setting learning rate to 0.001.
Epoch 3/15
57/57 [==============================] - 14s 250ms/step - loss: 1.3083 - accuracy: 0.4528 - val_loss: 1.2480 - val_accuracy: 0.5167 - lr: 0.0010

Epoch 4: LearningRateScheduler setting learning rate to 0.001.
Epoch 4/15
57/57 [==============================] - 14s 250ms/step - loss: 1.2538 - accuracy: 0.4861 - val_loss: 1.1332 - val_accuracy: 0.5700 - lr: 0.0010

Epoch 5: LearningRateScheduler setting learning rate to 0.001.
Epoch 5/15
57/57 [==============================] - 14s 250ms/step - loss: 1.1707 - accuracy: 0.5572 - val_loss: 1.2636 - val_accuracy: 0.5467 - lr: 0.0010

Epoch 6: LearningRateScheduler setting learning rate to 0.0001.
Epoch 6/15
57/57 [==============================] - 14s 252ms/step - loss: 1.1375 - accuracy: 0.5339 - val_loss: 1.0421 - val_accuracy: 0.6033 - lr: 1.0000e-04

Epoch 7: LearningRateScheduler setting learning rate to 0.0001.
Epoch 7/15
57/57 [==============================] - 14s 251ms/step - loss: 1.0945 - accuracy: 0.5583 - val_loss: 0.9792 - val_accuracy: 0.6400 - lr: 1.0000e-04

Epoch 8: LearningRateScheduler setting learning rate to 0.0001.
Epoch 8/15
57/57 [==============================] - 14s 250ms/step - loss: 1.1035 - accuracy: 0.5405 - val_loss: 0.9621 - val_accuracy: 0.6433 - lr: 1.0000e-04

Epoch 9: LearningRateScheduler setting learning rate to 0.0001.
Epoch 9/15
57/57 [==============================] - 14s 251ms/step - loss: 1.1143 - accuracy: 0.5505 - val_loss: 0.9472 - val_accuracy: 0.6367 - lr: 1.0000e-04

Epoch 10: LearningRateScheduler setting learning rate to 0.0001.
Epoch 10/15
57/57 [==============================] - 14s 249ms/step - loss: 1.0645 - accuracy: 0.5605 - val_loss: 0.9538 - val_accuracy: 0.6433 - lr: 1.0000e-04

Epoch 11: LearningRateScheduler setting learning rate to 1.0000000000000003e-05.
Epoch 11/15
57/57 [==============================] - 14s 249ms/step - loss: 1.0316 - accuracy: 0.5594 - val_loss: 0.9564 - val_accuracy: 0.6500 - lr: 1.0000e-05

Epoch 12: LearningRateScheduler setting learning rate to 1.0000000000000003e-05.
Epoch 12/15
57/57 [==============================] - 16s 276ms/step - loss: 1.0271 - accuracy: 0.6049 - val_loss: 0.9626 - val_accuracy: 0.6500 - lr: 1.0000e-05
Epoch 12: early stopping

Did you end up just using transfer learning? Or what happened?