TensorFlow deep learning: how to use Gray code instead of one-hot encoding in MNIST classification

Problem and background

TensorFlow deep learning project: how can I replace one-hot encoding with Gray code for the targets in MNIST classification?

The task is MNIST digit recognition with a multi-layer perceptron (MLP).

import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Load MNIST and convert it into the form the network expects
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)  # flatten 28x28 images to vectors
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype(np.float32)/255.0  # scale pixels to [0, 1]
x_test = x_test.astype(np.float32)/255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)  # convert to one-hot
y_test = tf.keras.utils.to_categorical(y_test, 10)
n_input = 784
n_hidden = 1024
n_output = 10
mlp = Sequential()
mlp.add(Dense(units=n_hidden, activation='tanh', input_shape=(n_input,),
              kernel_initializer='random_uniform', bias_initializer='zeros'))
mlp.add(Dense(units=n_output, activation='tanh',
              kernel_initializer='random_uniform', bias_initializer='zeros'))
mlp.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.001),
            metrics=['accuracy'])
hist = mlp.fit(x_train, y_train, batch_size=128, epochs=30,
               validation_data=(x_test, y_test), verbose=2)

res = mlp.evaluate(x_test, y_test, verbose=0)
print("Accuracy:", res[1]*100)

Run results and error output


My approach and what I have tried

The code runs correctly as is. The question is how to remove the one-hot encoding from it while still giving the network learnable target values — working results would be ideal.

The result I want

A way to remove the one-hot encoding from the code while still producing learnable target values; a screenshot of the result would be best.

This is a classification problem, so you do not need to one-hot encode the targets at all: keep the integer labels and use tf.keras.losses.sparse_categorical_crossentropy as the loss.
The code below is modified from yours.

# Load MNIST and convert it into the form the network expects
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)  # flatten 28x28 images to vectors
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype(np.float32)/255.0  # scale pixels to [0, 1]
x_test = x_test.astype(np.float32)/255.0

n_input = 784 
n_hidden = 1024 
n_output = 10 
mlp = Sequential() 
mlp.add(Dense(units=n_hidden, activation='relu', input_shape=(n_input,)))
mlp.add(Dense(units=n_output))
mlp.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001), metrics=['accuracy']) 
hist = mlp.fit(x_train,y_train,batch_size=128,epochs=30,validation_data=(x_test,y_test), verbose=2) 

res = mlp.evaluate(x_test, y_test, verbose=0) 
print(" 准确率是",res[1]*100)