Running Spark Pi from IntelliJ IDEA on Windows 7 against a remote Spark cluster: my own experiment
The Windows 7 machine is at 192.168.0.2; Ubuntu runs in a virtual machine at 192.168.0.3.
Connecting to Spark remotely
The source code is:
package main.scala.sogou

/** Created by danger on 2016/9/16. */

import org.apache.spark.{SparkConf, SparkContext}

object RemoteDebug {
  def main(args: Array[String]) {
    // Point the driver at the remote standalone master and ship our jar to the executors.
    val conf = new SparkConf()
      .setAppName("Spark Pi")
      .setMaster("spark://192.168.0.3:7077")
      .setJars(List("D://scalasrc//out//artifacts//scalasrc.jar"))
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    // Monte Carlo sampling: count how many random points in the square land inside the unit circle.
    val count = spark.parallelize(1 to n, slices).map { i =>
      val x = Math.random * 2 - 1
      val y = Math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

The two things you need to change for your own setup are the master URL, spark://192.168.0.3:7077, and the jar path passed to setJars: D://scalasrc//out//artifacts//scalasrc.jar.
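A quick note on why 4.0 * count / n estimates pi (my explanation, not in the original post): each point (x, y) is uniform over the square [-1, 1] x [-1, 1], which has area 4, and the unit circle inside it has area pi, so a point lands inside the circle with probability pi/4. Hence count / n is approximately pi/4, and pi is approximately 4 * count / n.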
Also, I did not import the scala.math package the way Spark's bundled SparkPi example does, so for randomness I just call Java's Math.random directly.
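For reference, the bundled SparkPi example draws its random numbers from the scala.math package instead; from memory it looks roughly like this (treat the exact lines as an approximation):

import scala.math.random

val x = random * 2 - 1
val y = random * 2 - 1

Both forms behave the same here: scala.math.random delegates to java.lang.Math.random, and both return a uniform Double in [0, 1).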
In the Run > Edit Configurations dialog, set up the run parameters. Really, the only one that matters is the main class:
main.scala.sogou.RemoteDebug
No program arguments are needed; when args is empty, slices simply defaults to 2. (A parameterized variant is sketched below.)
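By the way, if you would rather not hardcode the master URL and jar path in the source, here is a sketch (my own variant, not from the original post) that takes them from the Program arguments field of the same dialog, e.g. spark://192.168.0.3:7077 D://scalasrc//out//artifacts//scalasrc.jar 2:

package main.scala.sogou

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical variant: read the master URL, jar path, and slice count
// from the run configuration's program arguments instead of hardcoding them.
object RemoteDebugArgs {
  def main(args: Array[String]) {
    val Array(master, jar, slicesArg) = args
    val conf = new SparkConf().setAppName("Spark Pi").setMaster(master).setJars(List(jar))
    val spark = new SparkContext(conf)
    val slices = slicesArg.toInt
    val n = 100000 * slices
    val count = spark.parallelize(1 to n, slices).map { _ =>
      val x = Math.random * 2 - 1
      val y = Math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}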
Then just hit Run.
The remote server churned furiously for a while; once it quieted down, this appeared:
Process finished with exit code 0
Heh, success!
But where is the result?
I searched online, and it turns out it is right here among the driver's log output:
16/09/16 09:40:57 INFO DAGScheduler: ResultStage 0 (reduce at RemoteDebug.scala:19) finished in 75.751 s
16/09/16 09:40:57 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/09/16 09:40:57 INFO DAGScheduler: Job 0 finished: reduce at RemoteDebug.scala:19, took 76.071948 s
Pi is roughly 3.1385
16/09/16 09:40:57 INFO SparkUI: Stopped Spark web UI at http://192.168.0.2:4040
16/09/16 09:40:57 INFO DAGScheduler: Stopping DAGScheduler
16/09/16 09:40:57 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/09/16 09:40:57 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/09/16 09:40:57 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/16 09:40:57 INFO MemoryStore: MemoryStore cleared
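A quick sanity check on that number (my arithmetic, not from the original post): with the default slices = 2 we have n = 100000 * 2 = 200,000 samples, so the printed estimate 3.1385 = 4 * count / 200000 means count = 156,925 points landed inside the circle. With only 200,000 samples, two to three correct digits of pi is about what Monte Carlo can deliver.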
Awesome, it worked. Reference: http://blog.csdn.net/javastart/article/details/43372977