If you are running Spark on HDFS, I have been solving this problem by writing the CSV files out normally and then using HDFS to do the merging. I do this directly in Spark (1.6):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs._

def merge(srcPath: String, dstPath: String): Unit = {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  FileUtil.copyMerge(hdfs, new Path(srcPath), hdfs, new Path(dstPath), true, hadoopConfig, null)
  // the "true" setting deletes the source files once they are merged into the new output
}

val newData = << create your dataframe >>

val outputfile = "/user/feeds/project/outputs/subject"
var filename = "myinsights"
var outputFileName = outputfile + "/temp_" + filename
var mergedFileName = outputfile + "/merged_" + filename
var mergeFindGlob = outputFileName
newData.write
  .format("com.databricks.spark.csv")
  .option("header", "false")
  .mode("overwrite")
  .save(outputFileName)
merge(mergeFindGlob, mergedFileName)
newData.unpersist()

I don't remember where I picked up this trick, but it might work for you.
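One caveat worth knowing: FileUtil.copyMerge was removed in Hadoop 3.0, so on newer clusters the merge step has to be written by hand. The sketch below is a rough, hand-rolled equivalent (the name copyMergeManually and the sort-by-name choice are mine, not part of any Hadoop API); it simply concatenates the part files under the source directory into a single destination file:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.io.IOUtils

// Hand-rolled stand-in for FileUtil.copyMerge (removed in Hadoop 3.0).
// Concatenates every file under srcDir into a single dstFile on HDFS.
def copyMergeManually(srcDir: String, dstFile: String, deleteSource: Boolean): Unit = {
  val conf = new Configuration()
  val fs = FileSystem.get(conf)
  val out = fs.create(new Path(dstFile))
  try {
    fs.listStatus(new Path(srcDir))
      .filter(_.isFile)
      .sortBy(_.getPath.getName) // keep part-00000, part-00001, ... in order
      .foreach { status =>
        val in = fs.open(status.getPath)
        try IOUtils.copyBytes(in, out, conf, false)
        finally in.close()
      }
  } finally {
    out.close()
  }
  if (deleteSource) fs.delete(new Path(srcDir), true)
}

Dropped in as a replacement for the merge helper above, the call would look the same, e.g. copyMergeManually(mergeFindGlob, mergedFileName, true). Either way the idea is unchanged: let Spark write its many part files, then stitch them together on HDFS afterwards.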