Wrong tensor dimensions when preparing a dataset for a convolutional neural network in TensorFlow

I'm building a neural network following Peking University's TensorFlow course, but my self-made dataset was prepared as single-channel (grayscale) images.

When training a VGG16 network, after loading the cifar10 dataset, printing the samples x and the labels y gives:

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
print(x_train[0])

The output is:

(50000, 32, 32, 3)
(50000, 1)
(10000, 32, 32, 3)
(10000, 1)
[[[0.23137255 0.24313725 0.24705882]
  [0.16862745 0.18039216 0.17647059]
  [0.19607843 0.18823529 0.16862745]
  [0.26666667 0.21176471 0.16470588]
  [0.38431373 0.28627451 0.20392157]
  [0.46666667 0.35686275 0.24705882]
  [0.54509804 0.41960784 0.29411765]
(output truncated)

But with my self-made dataset, the same print statements produce:

(90, 250, 250)
(90,)
(10, 250, 250)
(10,)
[[0.21568627 0.21568627 0.21960784 0.21960784 0.21960784 0.21568627
  0.21176471 0.21176471 0.21568627 0.21176471 0.20784314 0.20392157
  0.20392157 0.20392157 0.20392157 0.20392157 0.20784314 0.21176471
  0.21568627 0.21568627 0.21960784 0.22352941 0.22745098 0.22745098
  0.23137255 0.23137255 0.23137255 0.22745098 0.22745098 0.22352941
  0.22352941 0.22352941 0.21960784 0.21960784 0.21568627 0.21176471
  0.20784314 0.20784314 0.20392157 0.2        0.2        0.2
(output truncated)
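For comparison, the layout I'm aiming for would add a trailing channel axis to x and turn y into a column vector, matching cifar10's shapes. A minimal sketch with dummy arrays standing in for my data:

```python
import numpy as np

# Dummy arrays shaped like my self-made dataset.
x = np.zeros((90, 250, 250))
y = np.zeros((90,), dtype=np.int64)

# Add a trailing channel axis: (90, 250, 250) -> (90, 250, 250, 1)
x4d = x[..., np.newaxis]
# Reshape labels to a column vector: (90,) -> (90, 1)
y2d = y.reshape(-1, 1)

print(x4d.shape)  # (90, 250, 250, 1)
print(y2d.shape)  # (90, 1)
```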

My loading code is:

import numpy as np
from PIL import Image

def generateds(path, txt):
    with open(txt, 'r') as f:
        contents = f.readlines()  # read line by line
    x, y_ = [], []
    for content in contents:
        value = content.split()  # split on whitespace: [filename, label]
        img_path = path + value[0]
        img = Image.open(img_path)
        img = np.array(img.convert('L'))  # grayscale -> 2-D array (H, W)
        img = img / 255.
        x.append(img)
        y_.append(value[1])
        print('loading : ' + content)

    x = np.array(x)
    y_ = np.array(y_)
    y_ = y_.astype(np.int64)
    return x, y_

How can I prepare a dataset shaped like cifar10's?
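For reference, here is an untested sketch of what I think the loader would need to look like to match cifar10's layout exactly, assuming the source images can be read as 3-channel RGB (the function name generateds_rgb is just a placeholder):

```python
import numpy as np
from PIL import Image

def generateds_rgb(path, txt):
    # Read "filename label" lines from the index file.
    with open(txt, 'r') as f:
        contents = f.readlines()
    x, y_ = [], []
    for content in contents:
        value = content.split()
        img = Image.open(path + value[0])
        # Keep 3 channels like cifar10, instead of convert('L').
        img = np.array(img.convert('RGB')) / 255.
        x.append(img)
        y_.append(value[1])
    x = np.array(x)                                    # (N, H, W, 3)
    y_ = np.array(y_).astype(np.int64).reshape(-1, 1)  # (N, 1)
    return x, y_
```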