Hadoop分布式集群部署(单namenode节点)详解

Hadoop分布式集群部署

系统环境:

OS: CentOS 6.8 
内存:2G 
CPU:1核 
Software:jdk-8u151-linux-x64.rpm 
       hadoop-2.7.4.tar.gz

hadoop下载地址:

sudo wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.4/hadoop-2.7.4.tar.gz 

主机列表信息:

主机名       IP地址             安装软件       Hadoop role   Node role
hadoop-01    192.168.153.128    JDK、Hadoop    NameNode      namenode
hadoop-02    192.168.153.129    JDK、Hadoop    DataNode      datanode
hadoop-03    192.168.153.130    JDK、Hadoop    DataNode      datanode

一.基础配置:

1.Hosts文件设置(三台主机的host文件需要保持一致)

[root@hadoop-01 ~]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 
####################################### 
192.168.153.128 hadoop-01 
192.168.153.129 hadoop-02 
192.168.153.130 hadoop-03
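
为保证三台主机的hosts文件一致,可以在hadoop-01上修改后用scp分发到另外两台(仅为示例做法,假设此时可以使用root口令登录对方主机):

[root@hadoop-01 ~]# scp /etc/hosts root@192.168.153.129:/etc/hosts 
[root@hadoop-01 ~]# scp /etc/hosts root@192.168.153.130:/etc/hosts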

2.创建hadoop账户,使用该账户运行hadoop服务(三台主机都需创建hadoop用户),并为其设置sudo免密权限

[root@hadoop-01 ~]# useradd hadoop && echo hadoop | passwd --stdin hadoop 
Changing password for user hadoop. 
passwd: all authentication tokens updated successfully. 
[root@hadoop-01 ~]# echo "hadoop ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
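
可以切换到hadoop用户验证sudo免密是否已生效(示例,预期输出为root):

[root@hadoop-01 ~]# su - hadoop 
[hadoop@hadoop-01 ~]$ sudo whoami 
root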

3.生成SSH免密钥认证文件(在三台主机都执行)

[root@hadoop-01 ~]# su - hadoop 
[hadoop@hadoop-01 ~]$ ssh-keygen -t rsa 
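
如果不想逐项回车确认,也可以用非交互方式一次生成密钥(示例,使用空密码和默认路径):

[hadoop@hadoop-01 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa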

4.SSH密钥认证文件分发.

1.>将hadoop-01的公钥信息复制到另外两台机器上面

[hadoop@hadoop-01 ~]$ ssh-copy-id 192.168.153.128 
[hadoop@hadoop-01 ~]$ ssh-copy-id 192.168.153.129 
[hadoop@hadoop-01 ~]$ ssh-copy-id 192.168.153.130

测试ssh免密码登录

[hadoop@hadoop-01 ~]$ ssh 192.168.153.128 
Last login: Wed Nov 15 22:37:10 2017 from hadoop-01 
[hadoop@hadoop-01 ~]$ exit 
logout 
Connection to 192.168.153.128 closed. 
[hadoop@hadoop-01 ~]$ ssh 192.168.153.129 
Last login: Wed Nov 15 22:37:10 2017 from hadoop-01 
[hadoop@hadoop-02 ~]$ exit 
logout 
Connection to 192.168.153.129 closed. 
[hadoop@hadoop-01 ~]$ ssh 192.168.153.130 
Last login: Thu Nov 16 07:46:35 2017 from hadoop-01 
[hadoop@hadoop-03 ~]$ exit 
logout 
Connection to 192.168.153.130 closed. 

2.>将hadoop-02的公钥信息复制到三台主机上

[hadoop@hadoop-02 ~]$ ssh-copy-id 192.168.153.128 
[hadoop@hadoop-02 ~]$ ssh-copy-id 192.168.153.129 
[hadoop@hadoop-02 ~]$ ssh-copy-id 192.168.153.130 

测试hadoop-02使用ssh免密码登录三台主机,包括登录其本身.

[hadoop@hadoop-02 ~]$ ssh 192.168.153.128 
Last login: Sat Oct 28 19:37:16 2017 from hadoop-01 
[hadoop@hadoop-01 ~]$ exit 
logout 
Connection to 192.168.153.128 closed. 
[hadoop@hadoop-02 ~]$ ssh 192.168.153.129 
Last login: Wed Nov 15 21:26:12 2017 from hadoop-01 
[hadoop@hadoop-02 ~]$ exit 
logout 
Connection to 192.168.153.129 closed. 
[hadoop@hadoop-02 ~]$ ssh 192.168.153.130 
Last login: Thu Nov 16 06:35:35 2017 from hadoop-01 
[hadoop@hadoop-03 ~]$  
[hadoop@hadoop-03 ~]$ exit 
logout 
Connection to 192.168.153.130 closed. 

3.>将hadoop-03的公钥信息复制到三台主机上

[hadoop@hadoop-03 ~]$ ssh-copy-id 192.168.153.128 
[hadoop@hadoop-03 ~]$ ssh-copy-id 192.168.153.129 
[hadoop@hadoop-03 ~]$ ssh-copy-id 192.168.153.130 

测试hadoop-03使用ssh免密码登录三台主机.

[hadoop@hadoop-03 ~]$ ssh 192.168.153.128 
Last login: Sat Oct 28 19:43:39 2017 from hadoop-02 
[hadoop@hadoop-01 ~]$ exit 
logout 
Connection to 192.168.153.128 closed. 
[hadoop@hadoop-03 ~]$ ssh 192.168.153.129 
Last login: Wed Nov 15 21:32:31 2017 from hadoop-02 
[hadoop@hadoop-02 ~]$ exit 
logout 
Connection to 192.168.153.129 closed. 
[hadoop@hadoop-03 ~]$ ssh 192.168.153.130 
Last login: Thu Nov 16 06:41:53 2017 from hadoop-02 
[hadoop@hadoop-03 ~]$ exit 
logout 
Connection to 192.168.153.130 closed. 
[hadoop@hadoop-03 ~]$ 

这样三台机器都可以互相免密钥访问, ssh-copy-id会以追加的方式进行密钥的分发.

提示:也可以先将三台主机的公钥汇总追加到hadoop-01的~/.ssh/authorized_keys中,然后使用scp将该文件分发到02、03节点,速度更快,大致做法见下例.
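
下面是这种集中分发方式的一个简单示意(假设hadoop-02、hadoop-03的公钥内容已经追加到hadoop-01的authorized_keys中,首次scp时可能仍需输入密码):

[hadoop@hadoop-01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 
[hadoop@hadoop-01 ~]$ chmod 600 ~/.ssh/authorized_keys 
[hadoop@hadoop-01 ~]$ scp ~/.ssh/authorized_keys hadoop-02:~/.ssh/ 
[hadoop@hadoop-01 ~]$ scp ~/.ssh/authorized_keys hadoop-03:~/.ssh/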

5.安装jdk(三台主机jdk安装)

[root@hadoop-01 ~]# rpm -ivh jdk-8u151-linux-x64.rpm  
Preparing...                ########################################### [100%] 
   1:jdk1.8                 ########################################### [100%] 
[root@hadoop-01 ~]# export JAVA_HOME=/usr/java/jdk1.8.0_151/ 
[root@hadoop-01 ~]# export PATH=$JAVA_HOME/bin:$PATH 
[root@hadoop-01 ~]# java -version 
java version "1.8.0_151" 
Java(TM) SE Runtime Environment (build 1.8.0_151-b12) 
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
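
上面的export只对当前shell生效,若希望重启后仍然有效,可以写入profile(示例做法,文件名java.sh为假设):

[root@hadoop-01 ~]# cat >> /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_151/
export PATH=$JAVA_HOME/bin:$PATH
EOF
[root@hadoop-01 ~]# source /etc/profile.d/java.sh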

6.安装hadoop:

[hadoop@hadoop-01 ~]$ wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.4/hadoop-2.7.4.tar.gz 
[hadoop@hadoop-01 ~]$ sudo tar zxvf hadoop-2.7.4.tar.gz -C /home/hadoop/ && cd /home/hadoop 
[hadoop@hadoop-01 ~]$ sudo mv hadoop-2.7.4/ hadoop 
[hadoop@hadoop-01 ~]$ sudo chown -R hadoop:hadoop hadoop/ 

#将hadoop的二进制目录添加到PATH变量,并设置HADOOP_HOME环境变量

[hadoop@hadoop-01 ~]$ export HADOOP_HOME=/home/hadoop/hadoop/ 
[hadoop@hadoop-01 ~]$ export PATH=$HADOOP_HOME/bin:$PATH  
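
同样,这两个export只在当前shell有效,可以写入hadoop用户的~/.bashrc避免每次手工设置(示例,这里顺带把sbin也加入PATH,属于可选做法):

[hadoop@hadoop-01 ~]$ cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=/home/hadoop/hadoop/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF
[hadoop@hadoop-01 ~]$ source ~/.bashrc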

二.配置文件修改

修改hadoop的配置文件.

配置文件位置:/home/hadoop/hadoop/etc/hadoop

1.修改hadoop-env.sh配置文件,指定JAVA_HOME为JAVA的安装路径

export JAVA_HOME=${JAVA_HOME} 
修改为: 
export JAVA_HOME=/usr/java/jdk1.8.0_151/ 
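
也可以直接用sed完成这处修改并确认结果(示例,在/home/hadoop/hadoop/etc/hadoop目录下执行):

[hadoop@hadoop-01 hadoop]$ sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_151/|' hadoop-env.sh 
[hadoop@hadoop-01 hadoop]$ grep '^export JAVA_HOME' hadoop-env.sh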

2. 修改yarn-env.sh文件.

该文件是yarn框架运行环境的配置文件,需要将其中的JAVA_HOME修改为实际的JDK安装路径

# export JAVA_HOME=/home/y/libexec/jdk1.6.0/ 
修改为: 
export JAVA_HOME=/usr/java/jdk1.8.0_151/ 

3. 修改slaves文件.

指定DataNode数据存储服务器,将所有的DataNode的机器的主机名写入到此文件中,如下:

[hadoop@hadoop-01 hadoop]$ cat slaves  
hadoop-02 
hadoop-03 

4.修改core-site.xml文件.在<configuration>……</configuration>中添加如下:

<configuration> 
<property> 
<name>fs.default.name</name> 
<value>hdfs://hadoop-01:9000</value> 
</property> 
<property> 
<name>io.file.buffer.size</name> 
<value>131072</value> 
</property> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>/home/hadoop/tmp</value> 
<description>Abase for other temporary directories.</description> 
</property> 
</configuration> 

5. 修改hdfs-site.xml配置文件,在<configuration>……</configuration>中添加如下:  

<configuration> 
<property> 
<name>dfs.namenode.secondary.http-address</name> 
<value>hadoop-01:9001</value> 
<description># 通过web界面来查看HDFS状态 </description> 
</property> 
<property> 
<name>dfs.namenode.name.dir</name> 
<value>/home/hadoop/dfs/name</value> 
</property> 
<property> 
<name>dfs.datanode.data.dir</name> 
<value>/home/hadoop/dfs/data</value> 
</property> 
<property> 
<name>dfs.replication</name> 
<value>2</value> 
<description># 每个Block有2个备份</description> 
</property> 
<property> 
<name>dfs.webhdfs.enabled</name> 
<value>true</value> 
</property> 
</configuration>
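
上面配置的目录(hadoop.tmp.dir、dfs.namenode.name.dir、dfs.datanode.data.dir)在格式化和启动时通常会自动创建,也可以提前在三台主机上手工创建并确认属主为hadoop(可选示例):

[hadoop@hadoop-01 ~]$ mkdir -p /home/hadoop/tmp /home/hadoop/dfs/name /home/hadoop/dfs/data 
[hadoop@hadoop-01 ~]$ ls -ld /home/hadoop/tmp /home/hadoop/dfs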

6. 修改mapred-site.xml

指定Hadoop的MapReduce运行在YARN环境

[hadoop@hadoop-01 hadoop]$ cp mapred-site.xml.template mapred-site.xml 
[hadoop@hadoop-01 hadoop]$ vim mapred-site.xml 
<configuration> 
<property> 
<name>mapreduce.framework.name</name> 
<value>yarn</value> 
</property> 
<property> 
<name>mapreduce.jobhistory.address</name> 
<value>hadoop-01:10020</value> 
</property> 
<property> 
<name>mapreduce.jobhistory.webapp.address</name> 
<value>hadoop-01:19888</value> 
</property> 
</configuration>

7. 修改yarn-site.xml

#该文件为yarn架构的相关配置

<configuration> 
<!-- Site specific YARN configuration properties --> 
 
<property> 
        <name>yarn.nodemanager.aux-services</name> 
        <value>mapreduce_shuffle</value> 
</property> 
<property> 
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
        <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
</property> 
<property> 
        <name>yarn.resourcemanager.address</name> 
        <value>hadoop-01:8032</value> 
</property> 
<property> 
        <name>yarn.resourcemanager.scheduler.address</name> 
        <value>hadoop-01:8030</value> 
</property> 
<property> 
        <name>yarn.resourcemanager.resource-tracker.address</name> 
        <value>hadoop-01:8031</value> 
</property> 
<property> 
        <name>yarn.resourcemanager.admin.address</name> 
        <value>hadoop-01:8033</value> 
</property> 
<property> 
        <name>yarn.resourcemanager.webapp.address</name> 
        <value>hadoop-01:8088</value> 
</property> 
<property> 
        <name>yarn.nodemanager.resource.memory-mb</name> 
        <value>1024</value> 
</property> 
<property> 
        <name>yarn.nodemanager.resource.cpu-vcores</name> 
        <value>1</value> 
</property> 
</configuration>

三.节点分发

复制hadoop到其他节点

[hadoop@hadoop-01 ~]$ scp -r /home/hadoop/hadoop/ 192.168.153.129:/home/hadoop/ 
[hadoop@hadoop-01 ~]$ scp -r /home/hadoop/hadoop/ 192.168.153.130:/home/hadoop/  
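
分发完成后,可以在hadoop-01上通过ssh简单确认另外两台主机已经收到配置文件(示例,这里以slaves文件为例):

[hadoop@hadoop-01 ~]$ ssh hadoop-02 "ls /home/hadoop/hadoop/etc/hadoop/slaves" 
/home/hadoop/hadoop/etc/hadoop/slaves 
[hadoop@hadoop-01 ~]$ ssh hadoop-03 "ls /home/hadoop/hadoop/etc/hadoop/slaves" 
/home/hadoop/hadoop/etc/hadoop/slaves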

四、初始化及服务启停.

1.在hadoop-01使用hadoop用户初始化NameNode.

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs namenode -format 
17/10/28 22:54:33 INFO namenode.NameNode: SHUTDOWN_MSG:  
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hadoop-01/192.168.153.128 
************************************************************/ 
[hadoop@hadoop-01 ~]$ echo $? 
0 
说明执行成功. 
[root@hadoop-01 ~]# tree /home/hadoop/dfs 
/home/hadoop/dfs 
├── data 
└── name 
    └── current 
        ├── fsimage_0000000000000000000 
        ├── fsimage_0000000000000000000.md5 
        ├── seen_txid 
        └── VERSION 

2.启停hadoop服务:

/home/hadoop/hadoop/sbin/start-dfs.sh 
/home/hadoop/hadoop/sbin/stop-dfs.sh 

启动服务:

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/sbin/start-dfs.sh 
Starting namenodes on [hadoop-01] 
hadoop-01: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop-01.out 
hadoop-03: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop-03.out 
hadoop-02: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop-02.out 
Starting secondary namenodes [hadoop-01] 
hadoop-01: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop-01.out 

3.查看服务状态.

namenode节点(hadoop-01)上面查看进程

[hadoop@hadoop-01 ~]$ ps aux | grep --color namenode


或者使用jps查看是否有namenode进程
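
正常情况下,start-dfs.sh之后在hadoop-01上执行jps大致会看到如下进程(PID为示例值):

[hadoop@hadoop-01 ~]$ jps 
3201 NameNode 
3398 SecondaryNameNode 
3520 Jps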


#DataNode上查看进程(在hadoop-02、hadoop-03上查看)

#ps aux | grep --color datanode


或者jps命令查看:


4.启动yarn分布式计算框架:

在 hadoop-01上启动ResourceManager

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/sbin/start-yarn.sh 
starting yarn daemons 
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-hadoop-01.out 
hadoop-02: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop-02.out 
hadoop-03: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop-03.out 
[hadoop@hadoop-01 ~]$ 

#NameNode节点上查看进程


#DataNode节点上查看nodemanager进程.

[hadoop@hadoop-02 ~]$ ps aux | grep --color nodemanager


5.启动jobhistory服务,查看mapreduce状态

#在NameNode节点上

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver 
starting historyserver, logging to /home/hadoop/hadoop/logs/mapred-hadoop-historyserver-hadoop-01.out 
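
可以用jps确认JobHistoryServer进程已经启动(PID为示例值),其web界面按前面mapred-site.xml的配置监听在19888端口,浏览器访问 http://192.168.153.128:19888 即可查看历史作业:

[hadoop@hadoop-01 ~]$ jps | grep JobHistoryServer 
4032 JobHistoryServer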

至此单namenode节点的集群环境部署完成,集群的启停操作都是在namenode(hadoop-01)端进行的

注:start-dfs.sh和start-yarn.sh这两个脚本可用start-all.sh代替

/home/hadoop/hadoop/sbin/stop-all.sh 
/home/hadoop/hadoop/sbin/start-all.sh 

启动:

[hadoop@hadoop-01 hadoop]$ /home/hadoop/hadoop/sbin/start-all.sh


[hadoop@hadoop-01 hadoop]$ /home/hadoop/hadoop/sbin/stop-all.sh


6.查看HDFS分布式文件系统状态

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs dfsadmin -report


#查看文件块组成,即一个文件由哪些块组成

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs fsck / -files -blocks
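
还可以上传一个测试文件并运行自带的wordcount示例作业,顺带验证HDFS和YARN是否都工作正常(示例命令,/input、/output为假设的路径,示例jar位于hadoop安装目录的share/hadoop/mapreduce下):

[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs dfs -mkdir -p /input 
[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs dfs -put /etc/hosts /input/ 
[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar wordcount /input /output 
[hadoop@hadoop-01 ~]$ /home/hadoop/hadoop/bin/hdfs dfs -cat /output/part-r-00000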


7.web页面查看hadoop集群状态

查看HDFS状态:http://192.168.153.128:50070


查看Hadoop集群状态: http://192.168.153.128:8088/cluster


web页面访问也正常,至此环境部署完成.

 

原创文章,作者:奋斗,如若转载,请注明出处:https://blog.ytso.com/9196.html
