pyspark.RDD.mapPartitionsWithSplit
RDD.mapPartitionsWithSplit(f, preservesPartitioning=False)
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

New in version 0.7.0.

Deprecated since version 0.9.0: use RDD.mapPartitionsWithIndex() instead.

Parameters
- f : function
  a function to run on each partition of the RDD
- preservesPartitioning : bool, optional, default False
  indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function doesn’t modify the keys (see the sketch after this list)
 
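As a rough sketch of when preservesPartitioning applies (the names pairs and scale are made up for this illustration, and sc is the usual docs SparkContext): the function below rewrites values but leaves keys untouched, so the parent’s partitioner remains valid and may be kept.

>>> pairs = sc.parallelize([("a", 1), ("b", 2), ("c", 3)]).partitionBy(2)
>>> def scale(splitIndex, iterator):
...     # keys are unchanged, so the existing partitioner still holds
...     return ((k, v * 10) for k, v in iterator)
...
>>> scaled = pairs.mapPartitionsWithSplit(scale, preservesPartitioning=True)
>>> scaled.partitioner == pairs.partitioner
True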
Returns
- RDD
  a new RDD by applying a function to each partition
Examples

>>> rdd = sc.parallelize([1, 2, 3, 4], 4)
>>> def f(splitIndex, iterator): yield splitIndex
...
>>> rdd.mapPartitionsWithSplit(f).sum()
6
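Because this method is deprecated, the same computation is better written with RDD.mapPartitionsWithIndex, which takes the same arguments; a direct migration of the example above (assuming the same sc) looks like:

>>> rdd = sc.parallelize([1, 2, 3, 4], 4)
>>> def f(splitIndex, iterator): yield splitIndex
...
>>> rdd.mapPartitionsWithIndex(f).sum()
6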