How do I fix this #hadoop# startup problem?

[root@node1 sbin]# ./start-dfs.sh
Starting namenodes on [node1]
node1: starting namenode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-namenode-node1.out
node3: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-node3.out
node4: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-node4.out
node5: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-node5.out
node2: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-node2.out
Starting secondary namenodes [master]
master: ssh: Could not resolve hostname master: Name or service not known

[root@node1 sbin]# ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.7.4/logs/yarn-root-resourcemanager-node1.out
node4: starting nodemanager, logging to /usr/local/hadoop-2.7.4/logs/yarn-root-nodemanager-node4.out
node3: starting nodemanager, logging to /usr/local/hadoop-2.7.4/logs/yarn-root-nodemanager-node3.out
node5: starting nodemanager, logging to /usr/local/hadoop-2.7.4/logs/yarn-root-nodemanager-node5.out
node2: starting nodemanager, logging to /usr/local/hadoop-2.7.4/logs/yarn-root-nodemanager-node2.out
[root@node1 sbin]# ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/hadoop-2.7.4/logs/mapred-root-historyserver-node1.out
[root@node1 sbin]# jps
1948 Jps
1484 NameNode
[root@node1 sbin]# jps
1958 Jps
1484 NameNode
[root@node1 sbin]#

The error message:

Starting secondary namenodes [master]
master: ssh: Could not resolve hostname master: Name or service not known

Cause
The hostname master has not been added to the hosts file yet.


Solution
Add the hostname mapping on every server, e.g.:

vim /etc/hosts

Then add your IP addresses and hostnames:

192.168.0.2     master
192.168.0.3     node1
192.168.0.5     node2
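To apply this on a node, the mapping can be appended to /etc/hosts. A minimal sketch — it writes to a scratch file so it is safe to dry-run; point HOSTS_FILE at /etc/hosts and run as root to apply it for real:

```shell
# Scratch copy so the dry run never touches the real /etc/hosts;
# set HOSTS_FILE=/etc/hosts (as root) to apply for real.
HOSTS_FILE=$(mktemp)
# Append the mapping only if "master" is not already listed
grep -q 'master' "$HOSTS_FILE" || cat >> "$HOSTS_FILE" <<'EOF'
192.168.0.2     master
192.168.0.3     node1
192.168.0.5     node2
EOF
grep -c '192\.168\.0\.' "$HOSTS_FILE"   # prints 3: all entries appended
```

The `grep -q` guard makes the snippet idempotent, so re-running it on a node that is already configured does nothing.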

Let me know if you run into any problems.

The default hostname master probably doesn't exist; try changing it to this machine's actual hostname.

I covered this in my blog:
https://blog.csdn.net/qq_15604349/article/details/123717540

You need to configure the hostnames in the hosts file.



Feel free to ask if anything is unclear.

Solved. The problem was that master in the config files didn't match the actual node names.
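One way to confirm a fix like this: every name Hadoop will ssh to (each worker plus the secondary-namenode host) must resolve on the node running the start scripts. A small hypothetical checker using getent:

```shell
# Report whether a hostname resolves on this node
check_host() {
  getent hosts "$1" >/dev/null && echo "ok: $1" || echo "UNRESOLVED: $1"
}
check_host localhost   # sanity check; normally prints "ok: localhost"
check_host master      # stays UNRESOLVED until /etc/hosts is fixed
```

Run it for every hostname that appears in your config files; any UNRESOLVED line reproduces exactly the "Name or service not known" failure above.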


Go to the configuration directory:
cd /usr/local/hadoop-2.7.4/etc/hadoop/
then edit the following files in turn:

4.1 core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/log/hadoop/tmp</value>
  </property>
</configuration>
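A quick way to double-check which NameNode URI a core-site.xml actually sets, without starting anything, is to pull the value out with sed. The file here is a scratch copy of the snippet above, so the command is safe to try anywhere:

```shell
# Write a scratch copy of the core-site.xml snippet, then extract fs.defaultFS
DEMO=$(mktemp)
cat > "$DEMO" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
</configuration>
EOF
sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$DEMO"   # hdfs://master:8020
```

If the printed host is master, that host must resolve via /etc/hosts on every node, which is exactly what the error above was complaining about.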


4.2 hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151

4.3 hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
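The dfs.namenode.name.dir and dfs.datanode.data.dir paths must exist and be writable before formatting the NameNode. A sketch using the paths from the config above, rooted under a temp directory so it is safe to try as-is (in production the prefix would be / and you would run it as root on each node):

```shell
# Temp prefix for a safe dry run; in production this would be "/" as root
DEMO_ROOT=$(mktemp -d)
for d in "$DEMO_ROOT/data/hadoop/hdfs/name" "$DEMO_ROOT/data/hadoop/hdfs/data"; do
  mkdir -p "$d"
done
ls "$DEMO_ROOT/data/hadoop/hdfs"   # lists: data  name
```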


4.4 mapred-site.xml
Copy it from the template first: cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- jobhistory properties -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>



4.5 yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs/</value>
    <description>URL for job history server</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
</configuration>
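The memory numbers above interact: each node offers yarn.nodemanager.resource.memory-mb to the scheduler, so it can host node_mb / min_mb minimum-size containers, and no single container can exceed the node's memory even though maximum-allocation-mb allows 4096. A quick arithmetic check with the values from this config:

```shell
# Values copied from the yarn-site.xml above
node_mb=2048   # yarn.nodemanager.resource.memory-mb
min_mb=512     # yarn.scheduler.minimum-allocation-mb
max_mb=4096    # yarn.scheduler.maximum-allocation-mb
echo "minimum-size containers per node: $(( node_mb / min_mb ))"                            # 4
echo "largest container this node can host: $(( max_mb < node_mb ? max_mb : node_mb )) MB"  # 2048 MB
```

So with these settings a request for the full 4096 MB maximum could never be placed on a single node; whether that matters depends on your workload.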


Starting secondary namenodes [master]
master: ssh: Could not resolve hostname master: Name or service not known

The hostname isn't recognized; add the host and its IP to /etc/hosts.

Solution:
Add the following to /etc/profile (this assumes HADOOP_HOME is already set):
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Then run source /etc/profile.
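For reference, with the install path used in this thread the two exports expand like this. HADOOP_HOME is an assumption here — the thread never shows it being set, so make sure it is exported first:

```shell
# Assumed install path from this thread; adjust to your environment
export HADOOP_HOME=/usr/local/hadoop-2.7.4
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# Confirm the variable expanded as expected
echo "$HADOOP_OPTS"   # -Djava.library.path=/usr/local/hadoop-2.7.4/lib
```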

Name or service not known means the hostname doesn't exist. You probably changed the hostname without restarting the network service afterwards.