Hadoop HA mode: standby NameNode fails to start

When I run the following command, I get the error below:

hdfs namenode -bootstrapStandby

 22/07/27 10:15:27 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
22/07/27 10:15:27 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
22/07/27 10:15:27 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: org.apache.hadoop.HadoopIllegalArgumentException: Shared edits storage is not enabled for this namenode.
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:460)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1680)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: Shared edits storage is not enabled for this namenode.
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:411)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:107)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:455)
    ... 2 more
22/07/27 10:15:27 INFO util.ExitUtil: Exiting with status 1: java.io.IOException: org.apache.hadoop.HadoopIllegalArgumentException: Shared edits storage is not enabled for this namenode.
22/07/27 10:15:27 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.110.249
************************************************************/

My hdfs-core.xml is configured as follows:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///usr/local/hadoop/data/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///usr/local/hadoop/data/hdfs/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///usr/local/hadoop/data/hdfs/nn/edits</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop01:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shard.edits.dir</name>
        <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/ns</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop01:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled.ns</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>~/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
</configuration>
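A misspelled property key is silently ignored by Hadoop, so errors like this can be hard to spot by eye. One quick way to catch them is to parse the site file and compare the configured property names against the keys your HA setup relies on. A minimal sketch, assuming an illustrative expected-keys list (extend it for your own cluster):

```python
import xml.etree.ElementTree as ET

# Keys an HA-with-QJM setup relies on (illustrative, not exhaustive).
# Note the correct spelling "shared", not "shard".
EXPECTED_KEYS = {
    "dfs.nameservices",
    "dfs.namenode.shared.edits.dir",
    "dfs.journalnode.edits.dir",
    "dfs.ha.automatic-failover.enabled",
}

def check_site_config(xml_text):
    """Return (missing, unknown): expected keys absent from the file,
    and configured keys not in the expected set (likely misspellings)."""
    root = ET.fromstring(xml_text)
    configured = {p.findtext("name") for p in root.iter("property")}
    missing = EXPECTED_KEYS - configured
    unknown = configured - EXPECTED_KEYS
    return missing, unknown
```

Running this against a file containing `dfs.namenode.shard.edits.dir` would report `dfs.namenode.shared.edits.dir` as missing and the `shard` variant as unknown, which is exactly the failure mode here.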

I'm not sure how you set this up, but the file should be hdfs-site.xml. The following items are missing from your configuration:

<!-- JournalNode quorum addresses and port -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave1:8485;slave2:8485/ns</value>
</property>
<!-- Where the JournalNodes store NameNode edit logs -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/src/hadoop/journal</value>
</property>
<!-- Enable automatic HA failover -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<!-- Client-side failover proxy provider -->
<property>
    <name>dfs.client.failover.proxy.provider.ns</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method: sshfence can make automatic failover fail when ssh
     access is denied; in that case use shell(/bin/true) instead -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
</property>
<!-- Private key for passwordless ssh, needed when using ssh fencing -->
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>

The error points at this property. In your file the key is misspelled as dfs.namenode.shard.edits.dir ("shard" instead of "shared"), so the NameNode sees no shared edits storage at all. The value below is from my own hosts; adjust the hostnames and ports for your cluster:

<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave1:8485;slave2:8485/ns</value>
</property>
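Concretely, with the hostnames already used elsewhere in the question's config (hadoop01–hadoop03), the fix is just renaming the key from `shard` to `shared` while keeping the existing qjournal value:

```xml
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/ns</value>
</property>
```

After correcting the key, make sure the JournalNodes are running before retrying `hdfs namenode -bootstrapStandby` on the standby host.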