MMTTMM
One way:

from pyspark.sql import Row

rdd = sc.parallelize([(20190701, [11, 21, 31], [('A', 10), ('B', 20)])])

# customize a Row class based on the target schema
MRow = Row("date", "0", "1", "2", "A", "B")
rdd.map(lambda x: MRow(x[0], *x[1], *map(lambda e: e[1], x[2]))).toDF().show()

+--------+---+---+---+---+---+
|    date|  0|  1|  2|  A|  B|
+--------+---+---+---+---+---+
|20190701| 11| 21| 31| 10| 20|
+--------+---+---+---+---+---+

Or, another way (with keyword arguments the fields come out sorted by name):

rdd.map(lambda x: Row(date=x[0], **dict((str(i), e) for i, e in list(enumerate(x[1])) + x[2]))).toDF().show()

+---+---+---+---+---+--------+
|  0|  1|  2|  A|  B|    date|
+---+---+---+---+---+--------+
| 11| 21| 31| 10| 20|20190701|
+---+---+---+---+---+--------+
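If you'd rather keep the schema explicit instead of baking it into a Row class, the same flatten-then-convert idea also works with createDataFrame and a StructType. A minimal sketch, assuming the same rdd as above and all-integer values:

from pyspark.sql.types import StructType, StructField, IntegerType

# same flattening as above, but as plain tuples paired with an explicit schema
schema = StructType([StructField(name, IntegerType())
                     for name in ["date", "0", "1", "2", "A", "B"]])
flat = rdd.map(lambda x: (x[0], *x[1], *[e[1] for e in x[2]]))
spark.createDataFrame(flat, schema).show()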
一只甜甜圈
rdd = sc.parallelize((20190701, [11, 21, 31], [('A', 10), ('B', 20)]))

# parallelizing a tuple makes each of its three items an RDD element,
# so take(3) pulls the date, the list, and the key/value pairs back to the driver
elements = rdd.take(3)
a = [elements[0]] + elements[1] + [elements[2][0][1], elements[2][1][1]]

import pandas as pd
# a is [20190701, 11, 21, 31, 10, 20]; transpose the single-column frame into one row
sdf = spark.createDataFrame(pd.DataFrame(a).T)
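With this route the columns come out as 0..5. If you want the same headers as in the first answer, they can be derived from the collected elements rather than hardcoded. A minimal sketch, assuming the elements and a built above:

# build the header from the data itself: "date", one name per list position, one per key
cols = ["date"] + [str(i) for i in range(len(elements[1]))] + [k for k, _ in elements[2]]
sdf = spark.createDataFrame(pd.DataFrame([a], columns=cols))
sdf.show()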