I want to split the data across ten clients for training, and use tqdm to show the training progress inside each client.
There are no errors, but the progress bars display abnormally.
```python
def train(df, w):
    traindata, labels = load_data(df, 'train')
    for ep in tqdm(range(epoch)):
        for it in range(traindata.size(0)):
            w = logistic_regression(traindata[it], w, labels[it])
    print('\n')
    return w
```
```
desc:   0%|          | 0/1 [00:00<?, ?it/s]
client 1 start train ...
client 2 start train ...
desc: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]
desc: 100%|██████████| 1/1 [00:00<00:00, 1.15it/s]
client 3 start train ...
desc: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]
...
client 6 start train ...
desc: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]
desc:   0%|          | 0/1 [00:00<?, ?it/s]
client 7 start train ...
desc: 100%|██████████| 1/1 [00:00<00:00, 1.16it/s]
desc:   0%|          | 0/1 [00:00<?, ?it/s]
```
Client 1's progress bar shows up under client 2, client 3 displays correctly, and empty bars appear under clients 6 and 7; every client should look like client 3 does.
I considered whether tqdm and print were conflicting, but couldn't resolve it. This function is the only place in the whole code that uses tqdm. The output loop is as follows:
```python
for i in range(10):
    print('client %d start train ...' % (i + 1))
    w = train(data[i], g_w)
    l_w += w
```
What the bar looks like under client 3:
```
client 3 start train ...
desc: 100%|██████████| 1/1 [00:00<00:00, 1.14it/s]
```
This answer was drafted with help from GPT; if it helped, please mark it as accepted.
tqdm draws its bar by repeatedly rewriting the current terminal line with carriage returns. When plain print output is interleaved with an active bar, as here with the `print('\n')` inside `train` and the `client N start train ...` prints between calls, the two streams fight over the same terminal line, so bars end up displaced under the wrong client or left behind as empty bars.
Solutions:
Manage the bar explicitly inside each call to `train`: create a dedicated tqdm instance, update it per epoch, and close it before anything else prints, so each client's bar is finished cleanly before the next client starts. For example:
```python
def train(df, w):
    traindata, labels = load_data(df, 'train')
    pbar = tqdm(total=epoch)
    for ep in range(epoch):
        for it in range(traindata.size(0)):
            w = logistic_regression(traindata[it], w, labels[it])
        pbar.update(1)
    # close the bar before any further printing, and drop the stray
    # print('\n'), which was itself corrupting the bar rendering
    pbar.close()
    return w
```
Also route any printing that can happen while a bar is active through `tqdm.write()`, which prints the message above the bar instead of through it. For example:
```python
for i in range(10):
    tqdm.write('client %d start train ...' % (i + 1))
    w = train(data[i], g_w)
    l_w += w
```
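To see the whole pattern end to end, here is a minimal runnable sketch. It is not the asker's code: `train_client`, the dummy sample list, and the placeholder weight update are stand-ins for `train`, `load_data`, and `logistic_regression`, which are not shown in the question.

```python
from tqdm import tqdm

def train_client(client_id, samples, epochs=2):
    """One bar per client; the bar is created and closed inside the call."""
    w = 0.0
    # leave=True keeps each finished bar on its own line;
    # desc labels the bar so the output stays attributable to a client.
    for ep in tqdm(range(epochs), desc='client %d' % client_id, leave=True):
        for x in samples:
            w += 0.01 * x  # placeholder for the logistic-regression update
    return w

weights = []
for i in range(10):
    # tqdm.write prints above any active bar instead of through it
    tqdm.write('client %d start train ...' % (i + 1))
    weights.append(train_client(i + 1, [1.0, 2.0, 3.0]))
```

Because each bar is closed before the next client's message is written, the ten bars stay in order, each directly under its own "start train" line.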