When we boot the system and run the Hadoop startup command start-all.sh, the following error appears:
[root@master ~]# start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-master.out
master: ssh: connect to host master port 22: Network is unreachable
master: ssh: connect to host master port 22: Network is unreachable
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-master.out
master: ssh: connect to host master port 22: Network is unreachable
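The key symptom is that SSH to the hostname master fails with "Network is unreachable", which usually means master resolves to an address this machine cannot reach. Before changing anything, a few checks can confirm the diagnosis (a minimal sketch; the hostname master comes from this cluster's configuration):

# Show which address the hostname "master" resolves to (reads /etc/hosts, then DNS)
getent hosts master

# Try to reach that address; "Network is unreachable" here confirms a bad mapping
ping -c 2 master

# Reproduce the exact connection start-all.sh is attempting
ssh -o ConnectTimeout=5 master hostname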
We check the running processes:
[root@master ~]# jps
2739 Jps
[root@master ~]#
Only the Jps process itself is listed; none of the Hadoop daemons (NameNode, DataNode, JobTracker, etc.) appear, which shows that Hadoop has not started.
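The same check can be scripted; this sketch filters the jps listing down to the Hadoop 1.x daemon names, so empty output means no daemons are running:

# Keep only Hadoop daemons from the jps listing; no output = nothing running
jps | grep -E 'NameNode|SecondaryNameNode|DataNode|JobTracker|TaskTracker'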
My solution:
First, check the current IP address of the virtual machine, which is 192.168.40.128:
[root@master ~]# ifconfig
eth2      Link encap:Ethernet  HWaddr 00:0C:29:4E:BC:7A
          inet addr:192.168.40.128  Bcast:192.168.40.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe4e:bc7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:127 errors:0 dropped:0 overruns:0 frame:0
          TX packets:134 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12497 (12.2 KiB)  TX bytes:14291 (13.9 KiB)
          Interrupt:19 Base address:0x2024

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1616 (1.5 KiB)  TX bytes:1616 (1.5 KiB)
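If only the address itself is needed, it can be pulled out of the ifconfig output directly (a sketch; the interface name eth2 is specific to this virtual machine and may differ on yours):

# Print just the IPv4 address of eth2
ifconfig eth2 | grep 'inet addr' | awk -F'[: ]+' '{print $4}'
# On newer distributions, "ip addr show eth2" reports the same information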
Next, open /etc/hosts and look at its configuration:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.80.100 master    # IP entry for the Hadoop master
~
~
~
"/etc/hosts" 3L, 180C
We can see that master, the host used in the Hadoop configuration, is mapped to 192.168.80.100, which does not match the system's current IP. Press i to edit the file and change the address to the current IP (192.168.40.128), then press Esc and type :wq to save and exit; an equivalent command-line edit is sketched below.
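The same correction can also be made non-interactively (a sketch; back up the file first, and the old and new addresses are specific to this setup):

# Back up /etc/hosts, then replace the stale master address with the current one
cp /etc/hosts /etc/hosts.bak
sed -i 's/192\.168\.80\.100[[:space:]]\+master/192.168.40.128 master/' /etc/hosts
# Confirm the change took effect
grep master /etc/hosts

With the mapping corrected, run start-all.sh again: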
[root@master ~]# start-all.sh
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-master.out
master: Warning: Permanently added the RSA host key for IP address '192.168.40.128' to the list of known hosts.
master: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-master.out
master: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-secondarynamenode-master.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-master.out
master: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-master.out
The startup succeeds. Check the processes again:
[root@master ~]# jps
3065 SecondaryNameNode
3400 Jps
3146 JobTracker
2951 DataNode
2843 NameNode
3260 TaskTracker
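Beyond jps, a couple of further checks can confirm the cluster is actually serving requests (a sketch for Hadoop 1.x, which matches the daemons above; it assumes a default configuration, and the /tmp/smoke-test path is just an example):

# Ask the NameNode for a cluster report (live datanodes, capacity)
hadoop dfsadmin -report

# Exercise HDFS with a trivial write and a listing
hadoop fs -mkdir /tmp/smoke-test
hadoop fs -ls /

# Default web UIs in Hadoop 1.x:
#   NameNode:   http://master:50070
#   JobTracker: http://master:50030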
Success.