Basic Concepts
What is a DataFrame
A DataFrame is equivalent to a relational table in Spark SQL [1].
DataFrame's predecessor was SchemaRDD; as of Spark 1.3.0, SchemaRDD was renamed to DataFrame [2]. In practice, the main difference from an RDD is that a DataFrame carries a schema, which lets you retrieve values by row and column.
Why DataFrame: Motivation
DataFrames support more operations than RDDs, and their execution plans are also more heavily optimized. This makes it convenient to work with large-scale structured data.
How to use DataFrame
Creating a DataFrame
Creating an empty DataFrame
Here schema is a value of type StructType:
sqlContext.createDataFrame(sc.emptyRDD[Row], schema)
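For context, a minimal self-contained sketch; the name/age schema here is made up for illustration:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

// A hypothetical two-column schema
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", IntegerType, nullable = true)
))

// Zero rows, but the schema is fully defined
val emptyDF = sqlContext.createDataFrame(sc.emptyRDD[Row], schema)
emptyDF.printSchema()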
Creating from a List
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.StructType

def listToDataFrame(list: ListBuffer[List[Any]], schema: StructType): DataFrame = {
  // Turn each inner List into a Row, parallelize, then apply the schema
  val rows = list.map { x => Row(x: _*) }
  val rdd = sqlContext.sparkContext.parallelize(rows)
  sqlContext.createDataFrame(rdd, schema)
}
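A hedged usage sketch, reusing the hypothetical name/age schema from the empty-DataFrame example above:

val data = ListBuffer[List[Any]](
  List("Michael", 29),
  List("Andy", 30)
)
listToDataFrame(data, schema).show()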
Creating directly from an RDD
// toDF on an RDD of tuples requires the SQLContext implicits in scope
import sqlContext.implicits._

val departments = sc.parallelize(Array(
  (31, "Sales"),
  (33, "Engineering"),
  (34, "Clerical"),
  (35, "Marketing")
)).toDF("DepartmentID", "DepartmentName")

val employees = sc.parallelize(Array[(String, Option[Int])](
  ("Rafferty", Some(31)),
  ("Jones", Some(33)),
  ("Heisenberg", Some(33)),
  ("Robinson", Some(34)),
  ("Smith", Some(34)),
  ("Williams", null)  // a null Option becomes a SQL NULL; None is the more idiomatic spelling
)).toDF("LastName", "DepartmentID")
Creating by reading a JSON file [5]
The JSON file:

{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}

Creating the DataFrame:
val df = sqlContext.jsonFile("/path/to/your/jsonfile")
df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
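Since Spark 1.4, jsonFile is deprecated in favor of the DataFrameReader API; the equivalent call is:

val df = sqlContext.read.json("/path/to/your/jsonfile")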
Creating by reading a Parquet file
val df: DataFrame = sqlContext.read.parquet("/Users/robin/workspace/cooked_data/bt")
Creating by reading a MySQL table [5]
// sqlContext.load is deprecated since Spark 1.4; see the reader-API variant below
val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:mysql://localhost:3306/db?user=aaa&password=111",
  "dbtable" -> "your_table"
))
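With the Spark 1.4+ DataFrameReader API, the same load looks as follows (the URL and table name are the placeholders from above):

val jdbcDF = sqlContext.read.format("jdbc").options(Map(
  "url" -> "jdbc:mysql://localhost:3306/db?user=aaa&password=111",
  "dbtable" -> "your_table"
)).load()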
Creating from Hive [5]
Spark provides a HiveContext, which is actually a subclass of SQLContext; used as a sqlContext, it supports Hive data sources as well. As long as Spark is deployed with Hive support enabled and the existing hive-site.xml file is moved under $SPARK_HOME/conf, you can query Hive tables with their existing metadata directly from Spark:
sqlContext.sql("select count(*) from hive_people")
Creating from a CSV file
There is a spark-csv library from Databricks. It can be pulled in via Maven, or loaded directly when starting spark-shell:
$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.5.0
val df = sqlContext.read.format("com.databricks.spark.csv").
  option("header", "true").
  option("inferSchema", "true").
  load("/Users/username/tmp/person.csv")
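Since Spark 2.0, CSV support is built into Spark itself, so the external package is no longer needed; a sketch assuming a SparkSession named spark:

val df = spark.read.
  option("header", "true").
  option("inferSchema", "true").
  csv("/Users/username/tmp/person.csv")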
Basic DataFrame Operations
Official example
// To create DataFrame using SQLContext
// avg and max come from org.apache.spark.sql.functions._
val people = sqlContext.read.parquet("...")
val department = sqlContext.read.parquet("...")

people.filter("age > 30")
  .join(department, people("deptId") === department("id"))
  .groupBy(department("name"), "gender")
  .agg(avg(people("salary")), max(people("age")))
Filter
Note that the snippet below does not actually drop any rows; it overwrites the id column, writing 0 where id is null and 1 otherwise:
df.withColumn("id", when(expr("id is null"), 0).otherwise(1)).show
Join
inner join [4]
val employees = sc.parallelize(Array[(String, Option[Int])](
  ("Rafferty", Some(31)),
  ("Jones", Some(33)),
  ("Heisenberg", Some(33)),
  ("Robinson", Some(34)),
  ("Smith", Some(34)),
  ("Williams", null)
)).toDF("LastName", "DepartmentID")

val departments = sc.parallelize(Array(
  (31, "Sales"),
  (33, "Engineering"),
  (34, "Clerical"),
  (35, "Marketing")
)).toDF("DepartmentID", "DepartmentName")

departments.show()
+------------+--------------+
|DepartmentID|DepartmentName|
+------------+--------------+
|          31|         Sales|
|          33|   Engineering|
|          34|      Clerical|
|          35|     Marketing|
+------------+--------------+

// An inner join drops Williams: a null DepartmentID never matches
employees.join(departments, "DepartmentID").show()
+------------+----------+--------------+
|DepartmentID|  LastName|DepartmentName|
+------------+----------+--------------+
|          31|  Rafferty|         Sales|
|          33|     Jones|   Engineering|
|          33|Heisenberg|   Engineering|
|          34|  Robinson|      Clerical|
|          34|     Smith|      Clerical|
+------------+----------+--------------+
left outer join [4]
employees.join(departments, Seq("DepartmentID"), "left_outer").show()
+------------+----------+--------------+
|DepartmentID|  LastName|DepartmentName|
+------------+----------+--------------+
|          31|  Rafferty|         Sales|
|          33|     Jones|   Engineering|
|          33|Heisenberg|   Engineering|
|          34|  Robinson|      Clerical|
|          34|     Smith|      Clerical|
|        null|  Williams|          null|
+------------+----------+--------------+
An aside on aggregation: groupBy plus agg takes the highest price per (startDate, endDate) pair; df is assumed to have startDate, endDate, and price columns, like the products table below. Since show returns Unit, there is no point binding the result to a val:
df.groupBy("startDate", "endDate").agg(max("price") as "price").show
Joining with an expression [3]
val products = sc.parallelize(Array(
  ("steak", "1990-01-01", "2000-01-01", 150),
  ("steak", "2000-01-02", "2020-01-01", 180),
  ("fish", "1990-01-01", "2020-01-01", 100)
)).toDF("name", "startDate", "endDate", "price")

products.show()
+-----+----------+----------+-----+
| name| startDate|   endDate|price|
+-----+----------+----------+-----+
|steak|1990-01-01|2000-01-01|  150|
|steak|2000-01-02|2020-01-01|  180|
| fish|1990-01-01|2020-01-01|  100|
+-----+----------+----------+-----+

val orders = sc.parallelize(Array(
  ("1995-01-01", "steak"),
  ("2000-01-01", "fish"),
  ("2005-01-01", "steak")
)).toDF("date", "product")

orders.show()
+----------+-------+
|      date|product|
+----------+-------+
|1995-01-01|  steak|
|2000-01-01|   fish|
|2005-01-01|  steak|
+----------+-------+

orders.join(products, $"product" === $"name" && $"date" >= $"startDate" && $"date" <= $"endDate")
  .show()
+----------+-------+-----+----------+----------+-----+
|      date|product| name| startDate|   endDate|price|
+----------+-------+-----+----------+----------+-----+
|2000-01-01|   fish| fish|1990-01-01|2020-01-01|  100|
|1995-01-01|  steak|steak|1990-01-01|2000-01-01|  150|
|2005-01-01|  steak|steak|2000-01-02|2020-01-01|  180|
+----------+-------+-----+----------+----------+-----+
Join types:
inner, outer, left_outer, right_outer, leftsemi
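The type string is passed as the third argument of join. For example, a left semi join keeps only the employees whose DepartmentID has a match in departments and returns only the left-hand columns (a sketch against the DataFrames defined above):

employees.join(departments, Seq("DepartmentID"), "leftsemi").show()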
Join with DataFrame alias
val joinedDF = testDF.as('a).join(genmodDF.as('b), $"a.PassengerId" === $"b.PassengerId")
joinedDF.select($"a.PassengerId", $"b.PassengerId").take(10)

// The same join written against the parent DataFrames instead of aliases:
val joinedDF2 = testDF.join(genmodDF, testDF("PassengerId") === genmodDF("PassengerId"), "inner")
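Aliases matter mainly because both DataFrames share the column name PassengerId: with as('a) and as('b), $"a.PassengerId" and $"b.PassengerId" stay unambiguous after the join, whereas a bare $"PassengerId" would be rejected as an ambiguous reference.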
Author: 虎耳
Link: https://www.jianshu.com/p/8ac9778eb4bd