resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.
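As a minimal sketch of both creation paths (assuming a local SparkContext; the object name `RDDCreation` and the `data.txt` path are placeholders for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RDDCreation {
  def main(args: Array[String]): Unit = {
    // Local SparkContext, for illustration only
    val sc = new SparkContext(new SparkConf().setAppName("rdd-creation").setMaster("local[*]"))

    // 1. Parallelize an existing collection in the driver program
    val fromCollection = sc.parallelize(Seq(1, 2, 3, 4, 5))

    // 2. Reference a dataset in an external storage system
    //    (a text file here; any Hadoop InputFormat source works similarly)
    val fromFile = sc.textFile("data.txt")

    // Both values are RDDs and can be operated on in parallel
    println(fromCollection.count())
    sc.stop()
  }
}
```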
Internally, each RDD is characterized by five main properties (a minimal subclass sketch follows the list):

- A list of partitions
- A function for computing each split
- A list of dependencies on other RDDs
- Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
- Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)
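A rough sketch of how these properties surface when subclassing RDD is shown below. The `RangeRDD` and `RangePartition` names are hypothetical; the overridden members are the hooks the `RDD` abstract class exposes for the five properties:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical partition type: each split covers a contiguous range of integers.
class RangePartition(override val index: Int, val start: Int, val end: Int) extends Partition

// Hypothetical RDD that produces the integers 0 until n, split into numSlices partitions.
class RangeRDD(sc: SparkContext, n: Int, numSlices: Int)
  extends RDD[Int](sc, Nil) {   // Nil: the list of dependencies on other RDDs (none here)

  // A list of partitions
  override protected def getPartitions: Array[Partition] = {
    val step = math.max(1, n / numSlices)
    (0 until numSlices).map { i =>
      new RangePartition(i, i * step, if (i == numSlices - 1) n else (i + 1) * step)
    }.toArray[Partition]
  }

  // A function for computing each split
  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[RangePartition]
    (p.start until p.end).iterator
  }

  // Optionally, a Partitioner for key-value RDDs: not applicable to this RDD of plain
  // Ints, so the inherited default (None) is kept.

  // Optionally, a list of preferred locations to compute each split on; empty means
  // "no preference" (an HDFS-backed RDD would return the block hosts here).
  override protected def getPreferredLocations(split: Partition): Seq[String] = Seq.empty
}
```

Built-in RDDs such as those returned by `parallelize` and `textFile` implement these same hooks; a subclass like the one above is used like any other RDD, e.g. `new RangeRDD(sc, 100, 4).collect()`.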