scala - Spark count vs take and length

I'm using com.datastax.spark:spark-cassandra-connector_2.11:2.4.0 to run Zeppelin notebooks and I don't understand the difference between two operations in Spark. One operation takes a lot of time to compute, the other executes immediately. Could someone explain the difference between these two operations:

import com.datastax.spark.connector._
import org.apache.spark.sql.cassandra._

import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._

case class SomeClass(someField: String)

val timelineItems = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map(
    "spark.cassandra.connection.host" -> "127.0.0.1",
    "table" -> "timeline_items",
    "keyspace" -> "timeline"))
  .load()

// some simplified code:
val timelineRow = timelineItems
  .map(x => SomeClass("test"))
  .filter(x => x != null)
  .toDF()
  .limit(4)

// first operation (takes a lot of time; it seems Spark iterates through all items
// in Cassandra and doesn't exploit the laziness of limit(4))
println(timelineRow.count()) //return: 4

// second operation (executes immediately); 300 is just an arbitrary number and doesn't affect the result
println(timelineRow.take(300).length) //return: 4

1 Reply

What you see is a difference between the implementation of Limit (a transformation-like operation) and CollectLimit (an action-like operation). However, the difference in timings is highly misleading, and not something you can expect in the general case.

First, let's create an MCVE:

// Force many small input partitions so the effect is easy to observe.
spark.conf.set("spark.sql.files.maxPartitionBytes", 500)

val ds = spark.read
  .text("README.md")
  .as[String]
  .map { x =>
    Thread.sleep(1000)  // make each record artificially slow to process
    x
  }

val dsLimit4 = ds.limit(4)

Make sure we start with a clean slate:

spark.sparkContext.statusTracker.getJobIdsForGroup(null).isEmpty
Boolean = true

invoke count:

dsLimit4.count()

and take a look at the execution plan (from Spark UI):

== Parsed Logical Plan ==
Aggregate [count(1) AS count#12L]
+- GlobalLimit 4
   +- LocalLimit 4
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
         +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
            +- DeserializeToObject cast(value#0 as string).toString, obj#5: java.lang.String
               +- Relation[value#0] text

== Analyzed Logical Plan ==
count: bigint
Aggregate [count(1) AS count#12L]
+- GlobalLimit 4
   +- LocalLimit 4
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
         +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
            +- DeserializeToObject cast(value#0 as string).toString, obj#5: java.lang.String
               +- Relation[value#0] text

== Optimized Logical Plan ==
Aggregate [count(1) AS count#12L]
+- GlobalLimit 4
   +- LocalLimit 4
      +- Project
         +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
            +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
               +- DeserializeToObject value#0.toString, obj#5: java.lang.String
                  +- Relation[value#0] text

== Physical Plan ==
*(2) HashAggregate(keys=[], functions=[count(1)], output=[count#12L])
+- *(2) HashAggregate(keys=[], functions=[partial_count(1)], output=[count#15L])
   +- *(2) GlobalLimit 4
      +- Exchange SinglePartition
         +- *(1) LocalLimit 4
            +- *(1) Project
               +- *(1) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
                  +- *(1) MapElements <function1>, obj#6: java.lang.String
                     +- *(1) DeserializeToObject value#0.toString, obj#5: java.lang.String
                        +- *(1) FileScan text [value#0] Batched: false, Format: Text, Location: InMemoryFileIndex[file:/path/to/README.md], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>

The core component is

+- *(2) GlobalLimit 4
   +- Exchange SinglePartition
      +- *(1) LocalLimit 4

which indicates that we can expect a wide operation with multiple stages. We can see a single job

spark.sparkContext.statusTracker.getJobIdsForGroup(null)
Array[Int] = Array(0)

with two stages

spark.sparkContext.statusTracker.getJobInfo(0).get.stageIds
Array[Int] = Array(0, 1)

with eight

spark.sparkContext.statusTracker.getStageInfo(0).get.numTasks
Int = 8

and one

spark.sparkContext.statusTracker.getStageInfo(1).get.numTasks
Int = 1

task respectively.
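
Incidentally, the eight tasks in the first stage simply mirror the number of input partitions produced by the tiny spark.sql.files.maxPartitionBytes value set above. A quick sanity check (just a sketch; the exact number depends on the size of your README.md):

// One task per input partition of the scan; getNumPartitions only inspects
// metadata, so no extra job is triggered.
println(ds.rdd.getNumPartitions)  // 8 in this run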

Now let's compare it to

dsLimit4.take(300).size

which generates the following:

== Parsed Logical Plan ==
GlobalLimit 300
+- LocalLimit 300
   +- GlobalLimit 4
      +- LocalLimit 4
         +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
            +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
               +- DeserializeToObject cast(value#0 as string).toString, obj#5: java.lang.String
                  +- Relation[value#0] text

== Analyzed Logical Plan ==
value: string
GlobalLimit 300
+- LocalLimit 300
   +- GlobalLimit 4
      +- LocalLimit 4
         +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
            +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
               +- DeserializeToObject cast(value#0 as string).toString, obj#5: java.lang.String
                  +- Relation[value#0] text

== Optimized Logical Plan ==
GlobalLimit 4
+- LocalLimit 4
   +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
      +- MapElements <function1>, class java.lang.String, [StructField(value,StringType,true)], obj#6: java.lang.String
         +- DeserializeToObject value#0.toString, obj#5: java.lang.String
            +- Relation[value#0] text

== Physical Plan ==
CollectLimit 4
+- *(1) SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, java.lang.String, true], true, false) AS value#7]
   +- *(1) MapElements <function1>, obj#6: java.lang.String
      +- *(1) DeserializeToObject value#0.toString, obj#5: java.lang.String
         +- *(1) FileScan text [value#0] Batched: false, Format: Text, Location: InMemoryFileIndex[file:/path/to/README.md], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>

While both global and local limits still occur, there is no exchange in the middle. Therefore we can expect a single-stage operation. Please note that the planner narrowed the limit down to the more restrictive value (4).
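
If you don't want to dig through the Spark UI, both plan shapes can be reproduced directly from the shell. Treat this as a sketch assuming Spark 2.4-style planning; operator names may differ between versions:

// A top-level limit collected back to the driver is planned as CollectLimit
// (the take-style, narrow path).
dsLimit4.explain()

// The same limit under an aggregate, which is roughly what count() executes,
// should show LocalLimit -> Exchange SinglePartition -> GlobalLimit instead.
dsLimit4.groupBy().count().explain()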

As expected we see a single new job:

spark.sparkContext.statusTracker.getJobIdsForGroup(null)
Array[Int] = Array(1, 0)

which generated only one stage:

spark.sparkContext.statusTracker.getJobInfo(1).get.stageIds
Array[Int] = Array(2)

with only one task

spark.sparkContext.statusTracker.getStageInfo(2).get.numTasks
Int = 1

What does this mean for us?

  • In the count case Spark uses a wide transformation: it applies LocalLimit on each partition and shuffles the partial results to perform GlobalLimit.
  • In the take case Spark uses a narrow transformation and evaluates LocalLimit only on the first partition.

Obviously the latter approach won't work when the number of values in the first partition is lower than the requested limit.

val dsLimit105 = ds.limit(105) // There are 105 lines

In such a case count will use exactly the same logic as before (I encourage you to confirm that empirically), but take will follow a rather different path. So far we have triggered only two jobs:

spark.sparkContext.statusTracker.getJobIdsForGroup(null)
Array[Int] = Array(1, 0)

Now if we execute

dsLimit105.take(300).size

you'll see that it required 3 more jobs:

spark.sparkContext.statusTracker.getJobIdsForGroup(null)
Array[Int] = Array(4, 3, 2, 1, 0)

So what's going on here? As noted before, evaluating a single partition is not enough to satisfy the limit in the general case. In such a case Spark iteratively evaluates LocalLimit on partitions until GlobalLimit is satisfied, increasing the number of partitions taken in each iteration.
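
How aggressively the number of scanned partitions grows between those iterations is, as far as I know, controlled by the internal option spark.sql.limit.scaleUpFactor (default 4). Internal options can change between versions, so treat the following as a sketch rather than a guaranteed API:

// Hypothetical tuning: scan more partitions per follow-up job, so fewer jobs are
// needed to satisfy the limit, at the cost of more work per job.
spark.conf.set("spark.sql.limit.scaleUpFactor", 8)
dsLimit105.take(300).size  // should now be satisfied in fewer incremental jobs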

Such a strategy can have significant performance implications. Starting Spark jobs alone is not cheap, and in cases where the upstream object is the result of a wide transformation, things can get quite ugly (in the best-case scenario you can read shuffle files, but if these are lost for some reason, Spark might be forced to re-execute all the dependencies).
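
If the upstream really is expensive, one way to soften the iterative evaluation is to persist it before calling take, so the follow-up jobs re-read cached partitions instead of recomputing the whole lineage. A sketch only (cachedDs is just an illustrative name, reusing the MCVE's ds); whether it pays off depends on data size and available memory:

import org.apache.spark.storage.StorageLevel

// Cache the expensive upstream once; subsequent take() iterations read the cached
// partitions instead of re-running the full dependency chain.
val cachedDs = ds.persist(StorageLevel.MEMORY_AND_DISK)
cachedDs.take(300)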

To summarize:

  • take is an action, and can short-circuit in specific cases where the upstream process is narrow and LocalLimit can satisfy GlobalLimit using the first few partitions.
  • limit is a transformation, and always evaluates all LocalLimits, as there is no iterative escape hatch.

While one can behave better than the other in specific cases, they are not exchangeable and neither guarantees better performance in general.
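
Back to the original question: if the limited Dataset is small anyway, a pragmatic workaround is to collect it once with take and reuse the local array instead of forcing the shuffle-based count. A sketch only, using the asker's timelineRow:

// Narrow, short-circuiting path: stops as soon as 4 rows have been found.
val rows = timelineRow.take(4)
println(rows.length)  // 4, without the extra shuffle stage that count() schedules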

