How does SparkSQL connect to Phoenix when Kerberos authentication is enabled? This article walks through the analysis and a working solution, in the hope of giving anyone facing the same problem a simpler, practical approach.
SparkSQL can interact with HBase, for example over JDBC, but in practice HBase is usually accessed through Phoenix. Since our HDP cluster has Kerberos authentication enabled and there are no connection examples online, I have put together a complete, working example to share:
Component versions:
Spark 2.2.0
Phoenix 4.10.0
Adjust the Kerberos principal, keytab, and krb5.conf to match your own cluster. The code is as follows:
package com.hadoop.ljs.spark220.security;

import org.apache.spark.SparkConf;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SparkSession;
import java.util.Properties;

/**
 * @author: Created By lujisen
 * @company ChinaUnicom Software JiNan
 * @date: 2020-03-04 10:41
 * @version: v1.0
 * @description: com.hadoop.ljs.spark220.security
 */
public class SparkKerberosPhoenix {
    public static final String krb5Conf = "D://kafkaSSL//krb5.conf";
    public static final String zookeeperQuorum = "salver158.hadoop.unicom,salver31.hadoop.unicom,salver32.hadoop.unicom";
    public static final String zookeeperZnode = "/hbase-secure";
    public static final String zookeeperPort = "2181";
    public static final String userTicket = "hbase-unicomhdp98@CHINAUNICOM";
    public static final String userKeytab = "D://kafkaSSL//hbase.headless.keytab";
    public static final String hbaseMasterPrincipal = "hbase/_HOST@CHINAUNICOM";
    public static final String hbaseRegionserverPrincipal = "hbase/_HOST@CHINAUNICOM";
    /* Phoenix JDBC URL for a Kerberos-secured cluster:
       jdbc:phoenix:<quorum>:<port>:<znode>:<principal>:<keytab> */
    public static final String phoenixKerberosURL = "jdbc:phoenix:"
            + zookeeperQuorum + ":" + zookeeperPort + ":"
            + zookeeperZnode + ":" + userTicket + ":" + userKeytab;

    public static void main(String[] args) {
        // Point the JVM at the cluster's krb5.conf before anything touches Kerberos
        System.setProperty("java.security.krb5.conf", krb5Conf);
        SparkConf conf = new SparkConf().setAppName("PhoenixSparkDemo")
                .setMaster("local[*]")
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        SparkSession sparkSession = SparkSession.builder().config(conf).getOrCreate();

        // Connection properties: namespace mapping plus Kerberos security settings
        Properties props = new Properties();
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        props.setProperty("phoenix.schema.mapSystemTablesToNamespace", "true");
        props.setProperty("hbase.security.authentication", "kerberos");
        props.setProperty("hadoop.security.authentication", "kerberos");
        props.setProperty("hadoop.security.authorization", "true");
        props.setProperty("hbase.security.authorization", "true");
        props.setProperty("hbase.zookeeper.quorum", zookeeperQuorum);
        props.setProperty("zookeeper.znode.parent", zookeeperZnode);
        props.setProperty("hbase.zookeeper.property.clientPort", zookeeperPort);
        props.setProperty("hbase.master.kerberos.principal", hbaseMasterPrincipal);
        props.setProperty("hbase.regionserver.kerberos.principal", hbaseRegionserverPrincipal);

        // Read the Phoenix table over JDBC, register it as a temp view, and query it
        Dataset<Row> df = sparkSession
                .read()
                .jdbc(phoenixKerberosURL, "LUJS.TBL_ORDER", props);
        df.createOrReplaceTempView("TBL_ORDER");
        SQLContext sqlCtx = new SQLContext(sparkSession);
        df = sqlCtx.sql("select * from TBL_ORDER limit 10");
        df.show();
        sparkSession.stop();
    }
}
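The key detail above is the shape of the Phoenix JDBC URL, which packs the ZooKeeper quorum, port, znode, principal, and keytab into one colon-separated string. A minimal sketch of that assembly, with placeholder host names, principal, and keytab path (the real values come from your own cluster):

```java
// Sketch: assembling a Phoenix Kerberos JDBC URL.
// Layout: jdbc:phoenix:<quorum>:<port>:<znode>:<principal>:<keytab>
// All argument values below are hypothetical placeholders.
public class PhoenixUrlBuilder {
    static String kerberosUrl(String quorum, String port, String znode,
                              String principal, String keytab) {
        return "jdbc:phoenix:" + quorum + ":" + port + ":" + znode
                + ":" + principal + ":" + keytab;
    }

    public static void main(String[] args) {
        System.out.println(kerberosUrl(
                "zk1.example.com,zk2.example.com", "2181", "/hbase-secure",
                "hbase-user@EXAMPLE.COM",
                "/etc/security/keytabs/hbase.headless.keytab"));
    }
}
```

Getting any one of these five segments wrong typically surfaces as a GSSException or a ZooKeeper connection timeout rather than a clear error, so it is worth double-checking the URL before debugging anything else.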
Additional notes:
If you later need to submit Phoenix code via spark-submit, the following items must be configured (see also the Phoenix website):
1. On a YARN cluster with HA enabled, this setting is sometimes left unchanged; verify in yarn-site.xml that:
yarn.client.failover-proxy-provider=org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
is changed to:
yarn.client.failover-proxy-provider=org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
2. Specify the following two settings in the submit command, or add them to spark2-defaults.conf:
spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.10.0-HBase-1.2-client.jar
spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.10.0-HBase-1.2-client.jar
Or specify them manually when submitting the job:
spark.driver.extraClassPath
spark.executor.extraClassPath
/usr/hdp/2.6.3.0-235/spark2/bin/spark-submit --class com.hadoop.ljs.spark.SparkOnPhoenix --conf spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.10.0-HBase-1.2-client.jar --conf spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.10.0-HBase-1.2-client.jar --jars /usr/hdp/2.6.3.0-235/phoenix/phoenix-4.10.0-HBase-1.2-client.jar spark220Scala-1.0-SNAPSHOT.jar
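Since the same jar path appears in three flags of the submit command, it is easy to update one and miss another. A small sketch that builds the Phoenix-related arguments from a single jar path (the path shown is the HDP default from the text; treat it as an assumption for your install):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: generating the Phoenix classpath flags for spark-submit from one jar path,
// so driver, executor, and --jars always stay in sync.
public class SubmitArgs {
    static List<String> phoenixConfFlags(String phoenixJar) {
        return Arrays.asList(
                "--conf", "spark.driver.extraClassPath=" + phoenixJar,
                "--conf", "spark.executor.extraClassPath=" + phoenixJar,
                "--jars", phoenixJar);
    }

    public static void main(String[] args) {
        // Hypothetical jar location; adjust to your HDP version
        phoenixConfFlags("/usr/hdp/current/phoenix-client/phoenix-4.10.0-HBase-1.2-client.jar")
                .forEach(System.out::println);
    }
}
```

The output lines can be spliced directly into the spark-submit command shown above.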
That concludes the answer to how SparkSQL connects to Phoenix with Kerberos authentication enabled. I hope the material above is of some help; if you still have questions, the Yisu Cloud industry news channel covers more related topics.
Original article by carmelaweatherly; if reprinting, please credit the source: https://blog.ytso.com/223204.html