Keras regression model training: accuracy does not change

When training a regression model, what could be the reason that the accuracy stays fixed at a single value?

小魔女 adapted this answer with reference to Bing and GPT content:
An unchanging accuracy may be a sign of overfitting or underfitting. Overfitting means the model performs well on the training set but poorly on the test set; underfitting means it performs poorly on both, and the accuracy barely moves. Note also that for a regression task, accuracy is really a classification metric: it effectively checks whether predictions match the targets exactly (or after rounding), which almost never happens with continuous outputs, so it tends to sit at a fixed value regardless of how training goes. Regression metrics such as MAE or MSE are usually more informative.

There are many ways to address this, for example: adding more training data, increasing model capacity, changing the model architecture, adding regularization, or adjusting the learning rate. A regularized layer might look like the example below.

# Example: L2 weight regularization on a Dense layer
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers
model.add(Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
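
To illustrate the learning-rate and metric suggestions as well, here is a minimal, hedged sketch of a compile step for the model in question; the Adam optimizer, the 1e-3 learning rate, and the choice of MAE as the tracked metric are placeholders rather than values from the original question:

# Hypothetical compile step: explicit learning rate and a regression-friendly metric
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-3),  # try lowering this if training stalls
              loss='mse',
              metrics=['mae'])  # MAE is more meaningful than accuracy for regression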

In addition, K-fold cross-validation can be used during training to check the model's generalization ability and guard against overfitting. K-fold cross-validation splits the training data into K parts; each round uses one part as the validation set and the remaining K-1 parts as the training set, repeats this K times, and averages the K results to obtain the final score.

# Example: 10-fold cross-validation (scores[1] is the second compiled metric)
import numpy as np
from sklearn.model_selection import KFold

kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
cvscores = []
for train, test in kfold.split(X, Y):
    # Note: the model should be re-created and re-compiled here for every fold,
    # otherwise each fold keeps fine-tuning the same weights.
    model.fit(X[train], Y[train], epochs=150, batch_size=10, verbose=0)
    scores = model.evaluate(X[test], Y[test], verbose=0)
    cvscores.append(scores[1] * 100)
print('%.2f%% (+/- %.2f%%)' % (np.mean(cvscores), np.std(cvscores)))
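
For a regression setup, a fuller self-contained sketch might look like the following. Everything here is illustrative: the data arrays X and Y are random placeholders, and build_model() is a hypothetical helper (not from the original answer) that re-creates the network for every fold and tracks MAE instead of accuracy:

# Hypothetical end-to-end sketch: K-fold CV for a Keras regression model, tracked with MAE
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

def build_model(n_features):
    # Fresh model per fold so folds do not share weights
    model = Sequential([
        Dense(128, activation='relu', input_shape=(n_features,),
              kernel_regularizer=regularizers.l2(0.01)),
        Dense(1)  # linear output for regression
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    return model

X = np.random.rand(500, 8)  # placeholder data: 500 samples, 8 features
Y = np.random.rand(500)

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
fold_mae = []
for train_idx, test_idx in kfold.split(X):
    model = build_model(X.shape[1])
    model.fit(X[train_idx], Y[train_idx], epochs=50, batch_size=16, verbose=0)
    loss, mae = model.evaluate(X[test_idx], Y[test_idx], verbose=0)
    fold_mae.append(mae)

print('MAE: %.4f (+/- %.4f)' % (np.mean(fold_mae), np.std(fold_mae)))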

In short, an unchanging accuracy may point to overfitting or underfitting. It can be addressed by adding training data, increasing model capacity, changing the architecture, adding regularization, or adjusting the learning rate, and K-fold cross-validation can be used to check the model's generalization ability.

  • This article: Keras-训练网络时的问题:loss一直为nan,accuracy一直为一个固定的数 may have the answer you are looking for; it is worth a look.
  • Beyond that, the "Problem Description" section of the same blog post, Keras-训练网络时的问题:loss一直为nan,accuracy一直为一个固定的数, may also help; you can read the excerpt below or jump to the source post (a general mitigation note follows the quoted output).
  • When using VGG19 for a classification task, I hit this problem: the loss stays nan and the accuracy stays at one fixed value, as shown in the output below, and even adding automatic learning-rate reduction (ReduceLROnPlateau) did not solve it.

    import tensorflow as tf
    from tensorflow.keras.callbacks import EarlyStopping

    # Reduce the learning rate automatically when val_loss plateaus
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', patience=10, mode='auto')

    earlystopping = EarlyStopping(monitor='val_accuracy', verbose=1, patience=30)
    
    # Vgg19
    # Train the model with the new callback
    history = model_vgg19.fit(train_gen, 
                        validation_data=valid_gen, 
                        epochs=200,
                        steps_per_epoch=len(train_gen),
                        validation_steps=len(valid_gen),
                        callbacks=[reduce_lr, earlystopping]) # callbacks=[cp_callback] Pass callback to training
    

    Output:

    Epoch 1/200
    176/176 [==============================] - 31s 177ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 2/200
    176/176 [==============================] - 31s 176ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 3/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 4/200
    176/176 [==============================] - 31s 176ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 5/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 6/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 7/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 8/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 9/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 10/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 0.0010
    Epoch 11/200
    176/176 [==============================] - 30s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 12/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 13/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 14/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 15/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 16/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 17/200
    176/176 [==============================] - 30s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 18/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 19/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 20/200
    176/176 [==============================] - 30s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-04
    Epoch 21/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 22/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 23/200
    176/176 [==============================] - 31s 175ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 24/200
    176/176 [==============================] - 31s 177ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 25/200
    176/176 [==============================] - 31s 178ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 26/200
    176/176 [==============================] - 31s 177ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 27/200
    176/176 [==============================] - 31s 177ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 28/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 29/200
    176/176 [==============================] - 31s 173ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 30/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-05
    Epoch 31/200
    176/176 [==============================] - 31s 174ms/step - loss: nan - accuracy: 0.0100 - val_loss: nan - val_accuracy: 0.0100 - lr: 1.0000e-06
    Epoch 00031: early stopping
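
The quoted excerpt only describes the problem. As a general note (not from the blog), a nan loss that appears from the very first epoch is usually caused by bad inputs or labels (NaNs, wrong value ranges), a mismatched loss function, or exploding gradients; lowering the learning rate via ReduceLROnPlateau after training has already diverged does not help. Below is a hedged sketch of one common mitigation, clipping the gradient norm and starting from a smaller learning rate. The loss function and optimizer are assumptions, since the blog does not show its compile step:

# Hypothetical mitigation: clip gradients and start from a smaller learning rate
from tensorflow.keras.optimizers import Adam

model_vgg19.compile(optimizer=Adam(learning_rate=1e-4, clipnorm=1.0),  # clipnorm caps the gradient norm per weight tensor
                    loss='categorical_crossentropy',                   # assumed loss for the classification task
                    metrics=['accuracy'])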