PyTorch: error when passing model parameters into the optimizer

(When posting the question, the site complained that repeated characters are not allowed, so I deleted the `nn.` prefixes from the code.)

Problem description and background

An error is raised when the model parameters are passed into the optimizer.

Relevant code (please do not paste screenshots)

optimizer = torch.optim.Adam(itertools.chain(generator.parameters(),discrimitor.parameters(),embing), lr=2e-5)
Below are the network definitions.

Generator:

import torch
import torch.nn.functional as F
from torch.nn import Module, Conv1d, ConvTranspose1d, BatchNorm1d, parameter

class Generator(Module):
    def __init__(self):
        super(Generator, self).__init__()
        # Embedding parameters: [number of labels, embedding size]
        # (create_tensor is a user-defined helper; its definition is not shown)
        self.EMb = parameter.Parameter(data=create_tensor(torch.normal(0, 0.05, [10, 256])), requires_grad=True)
        # Transposed convolution for upsampling
        self.transpose_conv1 = ConvTranspose1d(1, 1, kernel_size=16, stride=2, padding=7)
        self.conv1 = Conv1d(1, 16, kernel_size=15, padding=7)
        self.bn1 = BatchNorm1d(16, momentum=0.8)
        # Transposed convolution for upsampling
        self.transpose_conv2 = ConvTranspose1d(16, 16, kernel_size=16, stride=2, padding=7)
        self.conv2 = Conv1d(16, 1, kernel_size=15, padding=7)
        self.bn2 = BatchNorm1d(1, momentum=0.8)

    def forward(self, x, label):
        # Scale the noise by the embedding row for each sample's label
        y = torch.mul(x, self.EMb[label, :].view([-1, 1, 256]))
        y = F.leaky_relu(self.transpose_conv1(y), 0.2)
        y = F.leaky_relu(self.bn1(self.conv1(y)), 0.2)

        y = F.leaky_relu(self.transpose_conv2(y), 0.2)
        y = torch.tanh(self.bn2(self.conv2(y)))

        return y
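
For reference, a quick shape check of this Generator. It assumes the noise input x has shape [batch, 1, 256] (so it broadcasts against EMb[label].view([-1, 1, 256])) and that create_tensor returns the tensor unchanged; each stride-2 transposed convolution then doubles the length, 256 -> 512 -> 1024:

generator = Generator()
x = torch.randn(4, 1, 256)              # noise, one row per sample
label = torch.randint(0, 10, (4,))      # one class index per sample
out = generator(x, label)
print(out.shape)                        # torch.Size([4, 1, 1024])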
Run result and error message

ValueError: can't optimize a non-leaf Tensor
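
The definition of embing is not shown above, but if it is a plain 2-D tensor, this error is exactly what iterating it produces: itertools.chain treats embing as an iterable and yields its rows, and each row is a view, i.e. a non-leaf tensor. A minimal sketch under that assumption:

import torch

# Assumption: embing is a plain 2-D trainable tensor like this one
embing = torch.normal(0, 0.05, [10, 256]).requires_grad_()
print(embing.is_leaf)                    # True: the tensor itself is a leaf

# chain(..., embing) iterates embing, yielding row views of it
row = next(iter(embing))
print(row.requires_grad, row.is_leaf)    # True False: Adam rejects this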

My approach and what I have tried

I tried inspecting the parameters:
for name, parameter in generator.named_parameters():
    print('%-40s%-20s%s' % (name, parameter.requires_grad, parameter.is_leaf))
The output (name, requires_grad, is_leaf) was:
EMb                                     True                True
transpose_conv1.weight                  True                True
transpose_conv1.bias                    True                True
conv1.weight                            True                True
conv1.bias                              True                True
bn1.weight                              True                True
bn1.bias                                True                True
transpose_conv2.weight                  True                True
transpose_conv2.bias                    True                True
conv2.weight                            True                True
conv2.bias                              True                True
bn2.weight                              True                True
bn2.bias                                True                True
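
Every parameter listed above is a leaf, so the non-leaf tensor most likely comes from embing being iterated by itertools.chain rather than from the modules. A minimal sketch of a possible fix, assuming embing is a single leaf tensor with requires_grad=True: wrap it in a list so chain yields the tensor itself instead of its row views.

import itertools
import torch

optimizer = torch.optim.Adam(
    itertools.chain(
        generator.parameters(),
        discrimitor.parameters(),
        [embing],   # a one-element list: chain now yields the tensor itself,
                    # not its non-leaf row views
    ),
    lr=2e-5,
)

Note also that self.EMb is already registered as a Parameter, so generator.parameters() already covers it; if embing refers to that same embedding, passing it separately may be unnecessary.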