HDFS: exclude datanodes in AddBlockRequestProto

I am implementing datanode failover for writes in HDFS, so that HDFS can still write a block when the first datanode in the block's pipeline fails.

The algorithm is: first, identify the failed datanode; then, request a new block. The HDFS protocol API provides an excludeNodes field, which I use to tell the NameNode not to allocate the new block on those nodes. failedDatanodes holds the identified failed datanodes, and they look correct in the logs.

req := &hdfs.AddBlockRequestProto{
    Src:           proto.String(bw.src),
    ClientName:    proto.String(bw.clientName),
    ExcludeNodes:  failedDatanodes,
}
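For context, excludeNodes is a repeated DatanodeInfoProto in the ClientNamenodeProtocol definition, so failedDatanodes should be a []*hdfs.DatanodeInfoProto. A rough sketch of how it is built in my case follows; locs and isFailed are placeholders for the block's current pipeline and for however the failure was detected, not real API names.

// failedDatanodes collects the DatanodeInfoProto entries of the nodes
// that failed, taken from the block's current locations.
var failedDatanodes []*hdfs.DatanodeInfoProto
for _, dn := range locs { // locs: []*hdfs.DatanodeInfoProto of the current pipeline (assumed)
    if isFailed(dn) { // isFailed: placeholder for the failure-detection check
        failedDatanodes = append(failedDatanodes, dn)
    }
}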

However, the namenode still allocates the block on the failed datanodes.

Does anyone know why? Did I miss anything here? Thank you.

I found the solution: first abandon the block, then request a new one. In the previous design, the newly requested block could not replace the old one.
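For reference, a minimal sketch of that sequence, in the same fragment style as the snippet above. It assumes bw also holds the current block (bw.block as a LocatedBlockProto) and the namenode RPC client (bw.namenode with an Execute helper); those names and the way the calls are issued are assumptions about the surrounding writer code, not something taken from the question.

// 1) Tell the namenode to abandon the block whose pipeline failed,
//    so it no longer counts as the file's last allocated block.
abandonReq := &hdfs.AbandonBlockRequestProto{
    B:      bw.block.GetB(), // ExtendedBlockProto of the failed block
    Src:    proto.String(bw.src),
    Holder: proto.String(bw.clientName),
}
abandonResp := &hdfs.AbandonBlockResponseProto{}
if err := bw.namenode.Execute("abandonBlock", abandonReq, abandonResp); err != nil {
    return err
}

// 2) Now request a replacement block, excluding the failed datanodes.
req := &hdfs.AddBlockRequestProto{
    Src:          proto.String(bw.src),
    ClientName:   proto.String(bw.clientName),
    ExcludeNodes: failedDatanodes,
}
resp := &hdfs.AddBlockResponseProto{}
if err := bw.namenode.Execute("addBlock", req, resp); err != nil {
    return err
}

// The new block's pipeline should no longer include the excluded datanodes.
bw.block = resp.GetBlock()

Abandoning first matters because the namenode still considers the old block the file's last block; once it is abandoned, the addBlock call with ExcludeNodes allocates a fresh block on other datanodes.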