Spark Installation and Deployment
Published: 2019-02-28


I. Download Scala and Spark
[root@master opt]# wget http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz
[root@master opt]# wget http://d3kbcqa49mib13.cloudfront.net/spark-2.0.0-bin-hadoop2.7.tgz
II. Install Scala
1. Extract
[root@master opt]# tar -zxvf scala-2.11.8.tgz
2. Configure environment variables
export SCALA_HOME=/opt/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
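These two exports need to be loaded by the shell before the scala command resolves on the PATH. A minimal sketch, assuming they are appended to /etc/profile (the original post does not say which file was used):
[root@master opt]# echo 'export SCALA_HOME=/opt/scala-2.11.8' >> /etc/profile
[root@master opt]# echo 'export PATH=$PATH:$SCALA_HOME/bin' >> /etc/profile
[root@master opt]# source /etc/profile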
3. Test
[root@master opt]# scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152).
Type in expressions for evaluation. Or try :help.

scala>
III. Install Spark
1. Extract
[root@master opt]# tar -zxvf spark-2.0.0-bin-hadoop2.7.tgz
2. Configure environment variables
export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
3. Configure spark-env.sh
export JAVA_HOME=/opt/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin
export SCALA_HOME=/opt/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
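A fresh download ships only spark-env.sh.template, so the file is usually created by copying the template in the conf directory before adding the exports above. The sketch below also lists a few optional standalone-mode settings; those variables and their values are illustrative assumptions, not part of the original configuration:
[root@master opt]# cd /opt/spark-2.0.0-bin-hadoop2.7/conf
[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# vi spark-env.sh
# optional standalone-mode settings (assumed values):
# export SPARK_MASTER_HOST=master      # hostname the Master binds to
# export SPARK_WORKER_MEMORY=1g        # memory available to each Worker
# export SPARK_WORKER_CORES=1          # cores available to each Worker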
IV. Start
[root@master sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-2.0.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
localhost: \S
localhost: Kernel \r on an \m
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark-2.0.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
[root@master sbin]# jps
4128 Jps
4049 Worker
3992 Master
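With the Master and Worker both showing up in jps, the standalone cluster can also be checked through the Master web UI, which listens on port 8080 by default, and shut down with the companion stop-all.sh script. A sketch assuming the default port and the install path used above:
[root@master sbin]# curl http://master:8080    # Master web UI; the page should list one registered Worker
[root@master sbin]# ./stop-all.sh              # stop the Master and Worker when finished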
V. Test
[root@master ~]# cat test.log
hello go
java
c mysql
[root@master sbin]# spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
18/02/03 22:25:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/03 22:25:08 WARN SparkContext: Use an existing SparkContext, some configuration may not take effect.
Spark context Web UI available at http://192.168.0.110:4040
Spark context available as 'sc' (master = local[*], app id = local-1517667907847).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala> var file = sc.textFile("/root/test.log");
file: org.apache.spark.rdd.RDD[String] = /root/test.log MapPartitionsRDD[1] at textFile at <console>:24

scala> file.collect
res1: Array[String] = Array(hello go, java, c mysql, "", "")

scala> var file = sc.textFile("hdfs://master/test.log");
file: org.apache.spark.rdd.RDD[String] = hdfs://master/test.log MapPartitionsRDD[3] at textFile at <console>:24

scala> file.collect
res2: Array[String] = Array(hello go, java, c mysql, "", "")
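Beyond reading the file back with collect, a small word count exercises the usual flatMap / map / reduceByKey pipeline. The continuation below is only a sketch, not part of the original session; it assumes the same sc and /root/test.log used above:
scala> val words = sc.textFile("/root/test.log").flatMap(_.split(" ")).filter(_.nonEmpty)

scala> val counts = words.map(word => (word, 1)).reduceByKey(_ + _)

scala> counts.collect
Given the file contents shown above, each of hello, go, java, c and mysql should come back paired with a count of 1 (the ordering of the returned array is not deterministic).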