HBase Environment Setup Explained (Part 2)

These notes are kept mainly for the author's own review. If you spot any mistakes, please leave a comment; corrections are greatly appreciated. If you repost, please credit the source.

HBase Environment Setup

HBase configuration when Hadoop runs in HA mode

Deploy and start the ZooKeeper cluster

$ /usr/local/src/zookeeper-3.4.5-cdh5.3.6/bin/zkServer.sh start
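
Run the start command on every node in the quorum (master, slave1, and slave2 in the configuration below). The status subcommand confirms each node's role (leader or follower):

$ /usr/local/src/zookeeper-3.4.5-cdh5.3.6/bin/zkServer.sh status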

Deploy and start the Hadoop cluster

$ /usr/local/src/hadoop-2.5.0-cdh5.3.6/sbin/start-dfs.sh

$ /usr/local/src/hadoop-2.5.0-cdh5.3.6/sbin/start-yarn.sh
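
To confirm that the HA Hadoop cluster came up, run jps on each node; the exact daemon set depends on your HA layout, but it typically includes NameNode, DFSZKFailoverController, and JournalNode on the NameNode hosts, ResourceManager on the YARN master, and DataNode plus NodeManager on the workers.

$ jps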

Extract HBase

$ tar -zxf /opt/cdh/hbase-0.98.6-cdh5.3.6.tar.gz -C /usr/local/src/

Edit the HBase configuration files

  • Edit the hbase-env.sh file

# Point HBase at the locally installed JDK
export JAVA_HOME=/usr/local/src/jdk1.8.0_121/

# Do not let HBase manage its own ZooKeeper; use the external cluster started above
export HBASE_MANAGES_ZK=false

  • Edit the hbase-site.xml file (the properties below go inside the <configuration> element)
    <!-- HBase root directory; mycluster is the HA HDFS nameservice name -->
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
    </property>

    <!-- Run in fully distributed mode -->
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <!-- In an HA cluster only the port needs to be set; on a single node, specify the hostname as well -->
    <property>
        <name>hbase.master</name>
        <value>60000</value>
    </property>

    <!-- External ZooKeeper quorum -->
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
    </property>

    <!-- ZooKeeper data directory -->
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/src/zookeeper-3.4.5-cdh5.3.6/dataDir</value>
    </property>
  • Edit the regionservers file (one RegionServer host per line)
    master
    slave1
    slave2

Replace the jars in the lib directory under the HBase root with the cluster's Hadoop and ZooKeeper versions to avoid compatibility problems

  • Delete the bundled Hadoop jars and the ZooKeeper jar
    $ rm -rf /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/hadoop-*

    $ rm -rf /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/zookeeper-3.4.6.jar

  • Copy in the new jars
    The jars involved are roughly the following (one way to copy them is sketched after the list):
    hadoop-annotations-2.5.0.jar
    hadoop-auth-2.5.0-cdh5.3.6.jar
    hadoop-client-2.5.0-cdh5.3.6.jar
    hadoop-common-2.5.0-cdh5.3.6.jar
    hadoop-hdfs-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-app-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-common-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-core-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-hs-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6.jar
    hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6-tests.jar
    hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.6.jar
    hadoop-yarn-api-2.5.0-cdh5.3.6.jar
    hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.6.jar
    hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.6.jar
    hadoop-yarn-client-2.5.0-cdh5.3.6.jar
    hadoop-yarn-common-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-common-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-tests-2.5.0-cdh5.3.6.jar
    hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.6.jar
    zookeeper-3.4.5-cdh5.3.6.jar
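
    A minimal copy sketch, assuming the Hadoop jars live under share/hadoop/ in the Hadoop installation (the usual Hadoop 2.x layout) and the ZooKeeper jar sits in the ZooKeeper installation root; adjust the paths and patterns to your layout:

    $ find /usr/local/src/hadoop-2.5.0-cdh5.3.6/share/hadoop -name "hadoop-*2.5.0*.jar" -exec cp {} /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/ \;

    $ cp /usr/local/src/zookeeper-3.4.5-cdh5.3.6/zookeeper-3.4.5-cdh5.3.6.jar /usr/local/src/hbase-0.98.6-cdh5.3.6/lib/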

Symlink the Hadoop configuration files into HBase's conf directory so HBase can resolve the mycluster HA nameservice

$ ln -s /usr/local/src/hadoop-2.5.0-cdh5.3.6/etc/hadoop/core-site.xml /usr/local/src/hbase-0.98.6-cdh5.3.6/conf/core-site.xml

$ ln -s /usr/local/src/hadoop-2.5.0-cdh5.3.6/etc/hadoop/hdfs-site.xml /usr/local/src/hbase-0.98.6-cdh5.3.6/conf/hdfs-site.xml

scp the prepared HBase installation directory to the other nodes
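
For example, assuming slave1 and slave2 (as configured above) use the same installation path:

$ scp -r /usr/local/src/hbase-0.98.6-cdh5.3.6 slave1:/usr/local/src/

$ scp -r /usr/local/src/hbase-0.98.6-cdh5.3.6 slave2:/usr/local/src/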

Start the services

$ bin/hbase-daemon.sh start master

$ bin/hbase-daemon.sh start regionserver

Or:
$ bin/start-hbase.sh

The corresponding stop command:
$ bin/stop-hbase.sh

Open the web UI to verify that startup succeeded

http://192.168.159.30:60010
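
You can also verify with jps on each node: with the configuration above, the active master host should show an HMaster process, and every host listed in regionservers should show HRegionServer (alongside the Hadoop and ZooKeeper daemons).

$ jps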

HMaster High Availability

  • Make sure the HBase cluster has been stopped cleanly
    $ bin/stop-hbase.sh

  • Create a backup-masters file in the conf directory
    $ touch conf/backup-masters

  • List the standby HMaster node in the backup-masters file
    $ echo slave1 > conf/backup-masters

  • Copy the file to the other nodes
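    Assuming slave1 and slave2 use the same installation path and the command is run from the HBase root directory:
    $ scp conf/backup-masters slave1:/usr/local/src/hbase-0.98.6-cdh5.3.6/conf/
    $ scp conf/backup-masters slave2:/usr/local/src/hbase-0.98.6-cdh5.3.6/conf/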

  • Open the web UI to test
    Open http://master:60010

  • Try stopping the HMaster on the first machine
    $ bin/hbase-daemon.sh stop master
    Then check whether the HMaster on the second machine takes over automatically
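    One way to check, assuming passwordless ssh to slave1 is set up:
    $ ssh slave1 jps
    The output should now include an HMaster process, and http://slave1:60010 should show slave1 as the active master.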

Original article by Maggie-Hunter. If reposting, please credit the source: https://blog.ytso.com/9153.html
