Python: batch-size mismatch error after defining a custom loss function

python报错:tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [850] vs. [32]
[[{{node training_4/Adam/gradients/gradients/loss_2/time_distributed_2_loss/mul_grad/BroadcastGradientArgs}}]]
After defining a custom loss function for an autoencoder, running the code produces the error above. 850 is the number of training samples and 32 is the batch size; training only succeeds when batch_size is set to 850 or to 1.
The code is as follows:
"
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, RepeatVector, TimeDistributed
from tensorflow.keras.layers import Bidirectional
from tensorflow.keras import backend as K

stuck = Sequential([
    # Encoder: compress the sequence to a 4-dimensional code
    LSTM(16, activation='relu', input_shape=(timesteps, num_features), return_sequences=True),
    LSTM(4, activation='relu'),
    # Decoder: repeat the code and reconstruct the sequence
    RepeatVector(timesteps),
    LSTM(16, activation='relu', return_sequences=True),
    TimeDistributed(Dense(num_features))
])

def loss_function(y_true, y_pred):
    #y_true = tf.cast(y_true, dtype=tf.float32)
    #y_pred = tf.cast(y_pred, dtype=tf.float32)
    global bq
    bq = K.reshape(bq, (lenx,))
    bq = tf.cast(bq, dtype=tf.float32)
    mae_abs = K.reshape(abs(y_true - y_pred), (-1, lieshu * timesteps))
    mae_ = K.mean(mae_abs, axis=1)
    a0 = tf.zeros(lenx, dtype=tf.float32)
    mae_ = tf.cast(mae_, dtype=tf.float32)
    mae = (1 - bq) * mae_ + bq * K.maximum(a0, 1 - mae_)
    mae = K.mean(mae, axis=0)
    loss_function = K.mean(mae)
    return loss_function
stuck.compile(loss=loss_function, optimizer='adam')
stuck.summary()

history = stuck.fit(
    X_train, X_train,
    epochs=50,
    batch_size=32,  # 256, absolutely do not change!!
    #validation_data=(X_test_nor, X_test_nor),
    # callbacks = [es],
    shuffle=False
)

The error message says the shapes [850] and [32] are incompatible in the "BroadcastGradientArgs" op. The cause is that the custom loss function ignores the batch size: bq is reshaped to (lenx,) (850 elements) and a0 = tf.zeros(lenx) has the same fixed length, but inside the loss mae_ holds only one batch of values, shape (32,). The product (1 - bq) * mae_ therefore tries to multiply an (850,)-tensor by a (32,)-tensor, which cannot broadcast. That is also why a batch_size of 850 (shapes match exactly) or 1 (broadcasting against a length-1 axis) happens to run. The fix is to make every tensor inside the loss follow the batch dimension rather than the total sample count.
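
One common workaround is to pack the per-sample flags into the target tensor, so Keras slices them per batch together with y_true. Below is a minimal sketch, assuming bq is a 0/1 array of length lenx, lieshu equals num_features, and your TF 2.x version passes y_true to a custom loss unmodified (all names taken from the question):

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# Append bq as an extra feature channel of the targets, so each batch of
# y_true carries its own flags. Assumed shapes (from the question):
# X_train: (lenx, timesteps, num_features), bq: (lenx,) of 0/1 flags.
bq_channel = np.repeat(np.asarray(bq, dtype='float32').reshape(-1, 1, 1), timesteps, axis=1)
y_train = np.concatenate([X_train, bq_channel], axis=-1)

def loss_function(y_true, y_pred):
    bq_batch = y_true[:, 0, -1]        # per-sample flag, shape (batch_size,)
    y_true_data = y_true[:, :, :-1]    # the actual reconstruction targets
    mae_abs = K.reshape(K.abs(y_true_data - y_pred), (-1, lieshu * timesteps))
    mae_ = K.mean(mae_abs, axis=1)     # shape (batch_size,), never (lenx,)
    # Both branches now operate on (batch_size,) tensors, so any batch size works.
    mae = (1 - bq_batch) * mae_ + bq_batch * K.maximum(0.0, 1 - mae_)
    return K.mean(mae)

stuck.compile(loss=loss_function, optimizer='adam')
history = stuck.fit(X_train, y_train, epochs=50, batch_size=32, shuffle=False)

Because the flags travel with each batch, the global bq and the fixed-length a0 are no longer needed.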


This error message shows that, during the gradient update, a tensor shaped by the full training set does not match a tensor shaped by the batch size. Setting batch_size equal to the number of training samples (850) or to 1 makes the shapes broadcastable, but that only masks the problem. Mini-batch training is the normal approach, so the robust fix is to write the loss so it works for any batch size: never bake the dataset size into tensors created inside the loss, and derive all shapes from y_true and y_pred instead, as in the sketch below.
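
As an illustration of that rule, here is a minimal batch-size-agnostic sketch (simplified, without the bq flags): every tensor created inside the loss follows the dynamic batch dimension, e.g. via tf.zeros_like, instead of a dataset-level constant such as lenx.

import tensorflow as tf
from tensorflow.keras import backend as K

def batch_agnostic_loss(y_true, y_pred):
    # Per-sample mean absolute error over timesteps and features;
    # the leading dimension is whatever batch size fit() delivers.
    err = K.mean(K.abs(y_true - y_pred), axis=[1, 2])  # shape (batch_size,)
    zeros = tf.zeros_like(err)                         # follows the batch, not lenx
    return K.mean(K.maximum(zeros, 1 - err))           # hinge-style term, as in the question

This runs unchanged with batch_size 1, 32, or 850, because nothing in it references the total sample count.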