Hadoop 2.7.6 Deployment Guide (01)

 

1. Host plan

Hostname  External IP  Internal IP  OS          Notes         Installed software
mini01    10.0.0.11    172.16.1.11  CentOS 7.2  ssh port: 22  Hadoop [NameNode, SecondaryNameNode]
mini02    10.0.0.12    172.16.1.12  CentOS 7.2  ssh port: 22  Hadoop [ResourceManager]
mini03    10.0.0.13    172.16.1.13  CentOS 7.2  ssh port: 22  Hadoop [DataNode, NodeManager]
mini04    10.0.0.14    172.16.1.14  CentOS 7.2  ssh port: 22  Hadoop [DataNode, NodeManager]
mini05    10.0.0.15    172.16.1.15  CentOS 7.2  ssh port: 22  Hadoop [DataNode, NodeManager]

 


 

Add hosts entries so that every machine can reach the others by name (verify with ping):

[root@mini01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.11    mini01
10.0.0.12    mini02
10.0.0.13    mini03
10.0.0.14    mini04
10.0.0.15    mini05
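The same entries have to land on every node. A minimal sketch of pushing the file out (assuming root SSH access between the machines; the loop only prints the commands so they can be reviewed first):

```shell
#!/bin/sh
# Hosts from the plan that need identical /etc/hosts entries
HOSTS="mini01 mini02 mini03 mini04 mini05"

# Print the scp command for every node; drop the 'echo' to actually push the file.
plan_hosts_sync() {
    for h in $HOSTS; do
        echo "scp /etc/hosts root@$h:/etc/hosts"
    done
}

plan_hosts_sync
```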

  

2. Create a dedicated user account

# Use a dedicated user instead of running everything as root
# Create the user, set its home directory, and set its password
useradd -d /app yun && echo '123456' | /usr/bin/passwd --stdin yun
# Grant sudo privileges
echo "yun  ALL=(ALL)       NOPASSWD: ALL" >>  /etc/sudoers
# Let other regular users enter the directory to read information
chmod 755 /app/
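Appending to /etc/sudoers directly works, but a syntax error there can lock sudo out entirely. A safer variant (a sketch; the drop-in file name is an assumption) stages the rule in a temporary file and validates it with visudo before installing:

```shell
#!/bin/sh
# Stage the sudo rule in a temporary file instead of editing /etc/sudoers in place.
tmp=$(mktemp)
echo 'yun  ALL=(ALL)       NOPASSWD: ALL' > "$tmp"

# Validate and install (requires root); left commented in this sketch:
# visudo -c -f "$tmp" && install -m 440 "$tmp" /etc/sudoers.d/yun

grep -c 'NOPASSWD' "$tmp"   # sanity check that the rule was staged
rm -f "$tmp"
```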

  

3. Passwordless SSH login for the yun user

Requirement (per the plan): mini01 can log in to mini01, mini02, mini03, mini04, and mini05 without a password;
                            mini02 can log in to mini01, mini02, mini03, mini04, and mini05 without a password.
# Either IPs or hostnames would work; since the cluster is planned to communicate by hostname, hostnames are used here.
# Keys distributed by hostname still allow remote login by either hostname or IP.

  

3.1. Generate the key pair

# Enable passwordless login from mini01 to mini02, mini03, mini04, mini05
[yun@mini01 ~]$ ssh-keygen -t rsa  # press Enter at every prompt
Generating public/private rsa key pair. 
Enter file in which to save the key (/app/.ssh/id_rsa):  
Created directory '/app/.ssh'. 
Enter passphrase (empty for no passphrase):  
Enter same passphrase again:  
Your identification has been saved in /app/.ssh/id_rsa. 
Your public key has been saved in /app/.ssh/id_rsa.pub. 
The key fingerprint is: 
SHA256:rAFSIyG6Ft6qgGdVl/7v79DJmD7kIDSTcbiLtdKyTQk yun@mini01
The key's randomart image is: 
+---[RSA 2048]----+ 
|. o.o    .       | 
|.. o .  o..      | 
|... . . o=       | 
|..o. oE+B        | 
|.o .. .*S*       | 
|o ..  +oB.. .= . | 
|o.o   .* ..++ +  | 
|oo    . .  oo.   | 
|.          .++o  | 
+----[SHA256]-----+ 
 
# This creates a ".ssh" directory in the user's home directory
[yun@mini01 ~]$ ll -d .ssh/
drwx------ 2 yun yun 38 Jun  9 19:17 .ssh/
[yun@mini01 ~]$ ll .ssh/
total 8 
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa 
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub 

  

3.2. Distribute the public key

# Either IPs or hostnames would work; since the cluster communicates by hostname, hostnames are used
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.1.11   # by IP (not used here)
# Distribute
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03   # by hostname (repeat for every host, mini01 through mini05)
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/app/.ssh/id_rsa.pub" 
The authenticity of host '[mini03]:22 ([10.0.0.13]:22)' can't be established. 
ECDSA key fingerprint is SHA256:pN2NUkgCTt+b9P5TfQZcTh4PF4h7iUxAs6+V7Slp1YI. 
ECDSA key fingerprint is MD5:8c:f0:c7:d6:7c:b1:a8:59:1c:c1:5e:d7:52:cb:5f:51. 
Are you sure you want to continue connecting (yes/no)? yes 
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed 
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys 
yun@mini03's password: 
 
Number of key(s) added: 1 
 
Now try logging into the machine, with:   "ssh -p '22' 'mini03'" 
and check to make sure that only the key(s) you wanted were added. 

  

Distribute keys from mini01

[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05

  

Distribute keys from mini02

[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05
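The ten invocations above can be collapsed into a loop run once on mini01 and once on mini02. A sketch (the commands are printed for review; remove the `echo` to execute them):

```shell
#!/bin/sh
# Targets from the plan; ssh-copy-id appends the local public key to each
# target's ~/.ssh/authorized_keys.
TARGETS="mini01 mini02 mini03 mini04 mini05"

copy_id_cmds() {
    for h in $TARGETS; do
        echo "ssh-copy-id -i ~/.ssh/id_rsa.pub $h"
    done
}

copy_id_cmds
```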

  

Test remote login (ideally test every pair)

[yun@mini01 ~]$ ssh mini05
Last login: Sat Jun  9 19:47:43 2018 from 10.0.0.11

Welcome You Login

[yun@mini05 ~]$             # remote login succeeded

  

3.3. How passwordless login works

[Figure: passwordless SSH login flow]

 

3.4. Files in the .ssh directory

[yun@mini01 .ssh]$ pwd
/app/.ssh
[yun@mini01 .ssh]$ ll
total 16
-rw------- 1 yun yun  784 Jun  9 19:43 authorized_keys
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub
-rw-r--r-- 1 yun yun 1332 Jun  9 19:41 known_hosts
########################################################################################
  authorized_keys : public keys trusted for passwordless login; it records the keys of multiple machines
  id_rsa          : the generated private key
  id_rsa.pub      : the generated public key
  known_hosts     : list of known host public keys

  

4. JDK (Java 8)

4.1. Installation

[root@mini01 software]# pwd
/app/software
[root@mini01 software]# tar xf jdk1.8.0_112.tar.gz
[root@mini01 software]# ll
total 201392
drwxr-xr-x 8   10  143      4096 Dec 20 13:27 jdk1.8.0_112
-rw-r--r-- 1 root root 189815615 Mar 12 16:47 jdk1.8.0_112.tar.gz
[root@mini01 software]# mv jdk1.8.0_112/ /app/
[root@mini01 software]# cd /app/
[root@mini01 app]# ll
total 8
drwxr-xr-x  8   10   143 4096 Dec 20 13:27 jdk1.8.0_112
[root@mini01 app]# ln -s jdk1.8.0_112/ jdk
[root@mini01 app]# ll
total 8
lrwxrwxrwx  1 root root    13 May 16 23:19 jdk -> jdk1.8.0_112/
drwxr-xr-x  8   10   143 4096 Dec 20 13:27 jdk1.8.0_112

  

4.2. Environment variables

[yun@mini01 ~]$ pwd
/app
[yun@mini01 ~]$ ll -d jdk*  # pick a JDK version as appropriate; JDK 1.8 is backward compatible with 1.7
lrwxrwxrwx 1 yun yun   11 Mar 15 14:58 jdk -> jdk1.8.0_112
drwxr-xr-x 8 yun yun 4096 Dec 20 13:27 jdk1.8.0_112
[yun@mini01 profile.d]$ pwd
/etc/profile.d
[yun@mini01 profile.d]$ cat jdk.sh # Java environment variables
export JAVA_HOME=/app/jdk
export JRE_HOME=/app/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

[root@mini01 profile.d]# source /etc/profile
[yun@mini01 profile.d]$ java -version
java version "1.8.0_112" 
Java(TM) SE Runtime Environment (build 1.8.0_112-b15) 
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode) 

  

5. Hadoop configuration and startup (identical configuration files on every machine)

[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total 194152
-rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total 4
lrwxrwxrwx  1 yun yun   13 Jun  9 16:21 hadoop -> hadoop-2.7.6/
drwxr-xr-x  9 yun yun  149 Jun  8 16:36 hadoop-2.7.6
lrwxrwxrwx  1 yun yun   12 May 26 11:18 jdk -> jdk1.8.0_112
drwxr-xr-x  8 yun yun  255 Sep 23  2016 jdk1.8.0_112

  

5.1. Environment variables

[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

[root@mini01 profile.d]# source /etc/profile  # apply
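Since every machine needs the same JDK, Hadoop tree, and profile scripts, the installs done on mini01 can be pushed to the rest. A sketch (the commands are only printed, not executed; paths follow the layout above, and the symlinks are recreated on each node):

```shell
#!/bin/sh
# Remaining nodes; mini01 already has everything installed.
NODES="mini02 mini03 mini04 mini05"

# Print the sync commands for review; drop the 'echo' to execute them.
sync_cmds() {
    for h in $NODES; do
        echo "rsync -a /app/jdk1.8.0_112 /app/hadoop-2.7.6 yun@$h:/app/"
        echo "scp /etc/profile.d/jdk.sh /etc/profile.d/hadoop.sh root@$h:/etc/profile.d/"
        echo "ssh yun@$h 'cd /app && ln -s jdk1.8.0_112 jdk && ln -s hadoop-2.7.6 hadoop'"
    done
}

sync_cmds
```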

  

5.2. core-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
…………………… 
<!-- Put site-specific property overrides in this file. --> 
 
<configuration> 
  <!-- The filesystem URI Hadoop uses by default; the address of the HDFS master (NameNode) -->
  <property> 
    <name>fs.defaultFS</name> 
    <value>hdfs://mini01:9000</value>  <!-- mini01 is the hostname -->
  </property> 
  <property> 
    <name>hadoop.tmp.dir</name> 
    <value>/app/hadoop/tmp</value> 
  </property> 
</configuration> 
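A quick way to sanity-check a value in a `*-site.xml` without starting any daemon is to pull the `<value>` for a given `<name>` out of the file. This is a rough sketch using standard text tools; on a node with Hadoop on the PATH, `hdfs getconf -confKey fs.defaultFS` does this properly:

```shell
#!/bin/sh
# Extract the <value> of a named <property> from a Hadoop site file.
# Usage: get_prop <file> <property-name>
get_prop() {
    # Flatten the XML, isolate the matching <property> block, pull its <value>.
    tr -d '\n' < "$1" |
        grep -o "<property>[^<]*<name>$2</name>[^<]*<value>[^<]*</value>" |
        sed 's/.*<value>\(.*\)<\/value>/\1/'
}

# Example (path per the layout above):
# get_prop /app/hadoop/etc/hadoop/core-site.xml fs.defaultFS
```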

  

5.3. hdfs-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
……………… 
<!-- Put site-specific property overrides in this file. --> 
 
<configuration> 
  <!-- Number of HDFS replicas -->
  <property> 
    <name>dfs.replication</name> 
    <value>3</value> 
  </property> 
 
  <property> 
    <!-- Either property name works. The SecondaryNameNode periodically merges the fsimage and edit log, keeping the edit log bounded. It is best run on a different machine from the NameNode, since it needs as much memory as the NameNode. -->
    <!-- <name>dfs.secondary.http.address</name> --> 
    <name>dfs.namenode.secondary.http-address</name> 
    <value>mini01:50090</value> 
  </property> 
 
  <!-- NameNode directories; several can be listed, each mounted on a different disk; they all hold the same files and act as mirrors of each other -->
  <!-- Uncomment if needed
  <property> 
    <name>dfs.namenode.name.dir</name> 
    <value> file://${hadoop.tmp.dir}/dfs/name,file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value> 
  </property> 
  --> 
 
  <!-- dfs.datanode.data.dir can likewise be set to multiple directories, which adds capacity -->
 
</configuration> 

  

5.4. mapred-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ mv mapred-site.xml.template mapred-site.xml
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
……………… 
<!-- Put site-specific property overrides in this file. --> 
 
<configuration> 
  <property> 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value> 
  </property> 
 
</configuration> 

  

5.5. yarn-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?> 
…………………… 
<configuration> 
 
<!-- Site specific YARN configuration properties --> 
  <!-- The address of the YARN master (ResourceManager) -->
  <property> 
    <name>yarn.resourcemanager.hostname</name> 
    <value>mini02</value>  <!-- per the plan, mini02 is the ResourceManager -->
  </property> 
  
  <!-- how reducers fetch intermediate data -->
  <property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce_shuffle</value> 
  </property> 
 
</configuration> 

  

5.6. slaves

# This file does not affect the Hadoop daemons themselves; it is only consumed by the batch start/stop scripts
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cat slaves
mini03 
mini04 
mini05 
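Conceptually, what the cluster scripts do with this file is just an ssh loop over its entries; roughly (a sketch, not the real start-dfs.sh logic):

```shell
#!/bin/sh
# Approximate what start-dfs.sh does with etc/hadoop/slaves:
# ssh to each listed host and start a datanode there.
# Usage: start_datanodes <slaves-file>   (commands are printed, not run)
start_datanodes() {
    while read -r h; do
        [ -n "$h" ] || continue                 # skip blank lines
        echo "ssh $h hadoop-daemon.sh start datanode"
    done < "$1"
}
```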

  

5.7. Format the NameNode

(This step initializes the NameNode.)

[yun@mini01 hadoop]$ hdfs namenode -format
18/06/09 17:44:56 INFO namenode.NameNode: STARTUP_MSG:  
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG:   host = mini01/10.0.0.11 
STARTUP_MSG:   args = [-format] 
STARTUP_MSG:   version = 2.7.6 
……………… 
STARTUP_MSG:   java = 1.8.0_112 
************************************************************/ 
18/06/09 17:44:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
18/06/09 17:44:56 INFO namenode.NameNode: createNameNode [-format] 
Formatting using clusterid: CID-72e356f5-7723-4960-885a-72e522e19be1 
18/06/09 17:44:57 INFO namenode.FSNamesystem: No KeyProvider found. 
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsLock is fair: true 
18/06/09 17:44:57 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 09 17:44:57 
18/06/09 17:44:57 INFO util.GSet: Computing capacity for map BlocksMap 
18/06/09 17:44:57 INFO util.GSet: VM type       = 64-bit 
18/06/09 17:44:57 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB 
18/06/09 17:44:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: defaultReplication         = 3 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplication             = 512 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: minReplication             = 1 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: encryptDataTransfer        = false 
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000 
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsOwner             = yun (auth:SIMPLE) 
18/06/09 17:44:57 INFO namenode.FSNamesystem: supergroup          = supergroup 
18/06/09 17:44:57 INFO namenode.FSNamesystem: isPermissionEnabled = true 
18/06/09 17:44:57 INFO namenode.FSNamesystem: HA Enabled: false 
18/06/09 17:44:57 INFO namenode.FSNamesystem: Append Enabled: true 
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map INodeMap 
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit 
18/06/09 17:44:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB 
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries 
18/06/09 17:44:58 INFO namenode.FSDirectory: ACLs enabled? false 
18/06/09 17:44:58 INFO namenode.FSDirectory: XAttrs enabled? true 
18/06/09 17:44:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384 
18/06/09 17:44:58 INFO namenode.NameNode: Caching file names occuring more than 10 times 
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map cachedBlocks 
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit 
18/06/09 17:44:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB 
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^18 = 262144 entries 
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000 
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache 
18/06/09 17:44:58 INFO util.GSet: VM type       = 64-bit 
18/06/09 17:44:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB 
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^15 = 32768 entries 
18/06/09 17:44:58 INFO namenode.FSImage: Allocated new BlockPoolId: BP-925531343-10.0.0.11-1528537498201 
18/06/09 17:44:58 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted. 
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression 
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 319 bytes saved in 0 seconds. 
18/06/09 17:44:58 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 
18/06/09 17:44:58 INFO util.ExitUtil: Exiting with status 0 
18/06/09 17:44:58 INFO namenode.NameNode: SHUTDOWN_MSG:  
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.11 
************************************************************/ 
[yun@mini01 hadoop]$ pwd
/app/hadoop
[yun@mini01 hadoop]$ ll
total 112 
drwxr-xr-x 2 yun yun   194 Jun  8 16:36 bin 
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 etc 
drwxr-xr-x 2 yun yun   106 Jun  8 16:36 include 
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 lib 
drwxr-xr-x 2 yun yun   239 Jun  8 16:36 libexec 
-rw-r--r-- 1 yun yun 86424 Jun  8 16:36 LICENSE.txt 
-rw-r--r-- 1 yun yun 14978 Jun  8 16:36 NOTICE.txt 
-rw-r--r-- 1 yun yun  1366 Jun  8 16:36 README.txt 
drwxr-xr-x 2 yun yun  4096 Jun  8 16:36 sbin 
drwxr-xr-x 4 yun yun    31 Jun  8 16:36 share 
drwxrwxr-x 3 yun yun    17 Jun  9 17:44 tmp   # this directory did not exist before the format
[yun@mini01 hadoop]$ ll tmp/
total 0
drwxrwxr-x 3 yun yun 18 Jun  9 17:44 dfs
[yun@mini01 hadoop]$ ll tmp/dfs/
total 0
drwxrwxr-x 3 yun yun 21 Jun  9 17:44 name
[yun@mini01 hadoop]$ ll tmp/dfs/name/
total 0
drwxrwxr-x 2 yun yun 112 Jun  9 17:44 current
[yun@mini01 hadoop]$ ll tmp/dfs/name/current/
total 16 
-rw-rw-r-- 1 yun yun 319 Jun  9 17:44 fsimage_0000000000000000000 
-rw-rw-r-- 1 yun yun  62 Jun  9 17:44 fsimage_0000000000000000000.md5 
-rw-rw-r-- 1 yun yun   2 Jun  9 17:44 seen_txid 
-rw-rw-r-- 1 yun yun 199 Jun  9 17:44 VERSION 

  

5.8. Start the NameNode

# Start on mini01
[yun@mini01 sbin]$ pwd
/app/hadoop/sbin
[yun@mini01 sbin]$ ./hadoop-daemon.sh start namenode  # to stop: hadoop-daemon.sh stop namenode
starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out 
[yun@mini01 sbin]$ jps
6066 Jps 
5983 NameNode 
[yun@mini01 sbin]$ ps -ef | grep 'hadoop'
yun        5983      1  6 17:55 pts/0    00:00:07 /app/jdk/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,console -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop-yun-namenode-mini01.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode 
yun        6160   2337  0 17:57 pts/0    00:00:00 grep --color=auto hadoop 
[yun@mini01 sbin]$ netstat -lntup | grep '5983'
(Not all processes could be identified, non-owned process info 
 will not be shown, you would have to be root to see it all.) 
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      5983/java            
tcp        0      0 10.0.0.11:9000          0.0.0.0:*               LISTEN      5983/java    

  

5.8.1. Access from a browser

http://10.0.0.11:50070 


 

5.9. Start the DataNodes

# Start the datanode on mini03, mini04, and mini05
# Because the environment variables are set, this can be run from any directory
[yun@mini03 ~]$ hadoop-daemon.sh start datanode  # to stop: hadoop-daemon.sh stop datanode
starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini03.out
[yun@mini03 ~]$ jps
5349 Jps 
5263 DataNode 

  

5.9.1. Refresh the browser


 

5.10. Start HDFS with the cluster script

# Per the plan, run on mini01
[yun@mini01 hadoop]$ start-dfs.sh
Starting namenodes on [mini01] 
mini01: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out 
mini04: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini04.out 
mini03: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini03.out 
mini05: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini05.out 
Starting secondary namenodes [mini01] 
mini01: starting secondarynamenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-secondarynamenode-mini01.out 
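After the script returns, it is worth confirming that each daemon actually came up: the `.out` log only records the launch. `hdfs dfsadmin -report` lists the live datanodes; for checking `jps` output host by host, a small helper as a sketch:

```shell
#!/bin/sh
# Return success if the named daemon appears in a jps listing.
# On a live cluster: has_daemon "$(ssh mini03 jps)" DataNode
has_daemon() {   # has_daemon <jps-output> <daemon-name>
    echo "$1" | awk '{print $2}' | grep -qx "$2"
}
```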

  

URL (HDFS management UI):

http://10.0.0.11:50070	 

  

5.11. Start YARN with the cluster script

# Per the plan, run on mini02
# Start yarn
[yun@mini02 hadoop]$ start-yarn.sh
starting yarn daemons 
starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini02.out 
mini05: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini05.out 
mini04: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini04.out 
mini03: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini03.out 

  

URL (YARN/MapReduce management UI):

http://10.0.0.12:8088	 

  

5.12. Final state

##### mini01
[yun@mini01 hadoop]$ jps
16336 NameNode
16548 SecondaryNameNode
16686 Jps

##### mini02
[yun@mini02 hadoop]$ jps
10936 ResourceManager
11213 Jps

##### mini03
[yun@mini03 ~]$ jps
9212 Jps
8957 DataNode
9039 NodeManager

##### mini04
[yun@mini04 ~]$ jps
4130 NodeManager
4296 Jps
4047 DataNode

##### mini05
[yun@mini05 ~]$ jps
7011 DataNode
7091 NodeManager
7308 Jps
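For an automated check, the expected daemon set per host follows directly from the plan in section 1. A sketch encoding that mapping (the function name is illustrative):

```shell
#!/bin/sh
# Expected Hadoop daemons per host, per the plan in section 1.
expected_daemons() {   # expected_daemons <hostname>
    case "$1" in
        mini01)               echo "NameNode SecondaryNameNode" ;;
        mini02)               echo "ResourceManager" ;;
        mini03|mini04|mini05) echo "DataNode NodeManager" ;;
        *)                    return 1 ;;
    esac
}
```

Combined with `ssh <host> jps`, each host's actual daemons can be compared against this list.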

  

6. References

6.1. Virtual machine setup

1. Installing CentOS 7 on VMware

2. VMware network configuration

 

6.2. Building Hadoop 2.7.6

1. Compiling Hadoop 2.7.6 on CentOS 7

 

Original article by 奋斗. If reposting, please credit the source: https://blog.ytso.com/tech/bigdata/9103.html
