
java - Why does a worker node not see updates to an accumulator on other worker nodes?

I'm using a LongAccumulator as a shared counter in map operations. But it seems that I'm not using it correctly because the state of the counter on the worker nodes is not updated. Here's what my counter class looks like:

import java.io.Serializable;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Counter implements Serializable {

   // static, so the logger is not pulled into the serialized closure
   private static final Logger log = LoggerFactory.getLogger(Counter.class);

   private final LongAccumulator counter;

   public Counter(JavaSparkContext javaSparkContext) {
      counter = javaSparkContext.sc().longAccumulator();
   }

   public Long increment() {
      log.info("Incrementing counter with id: " + counter.id() + " on thread: " + Thread.currentThread().getName());
      counter.add(1);
      Long value = counter.value();
      log.info("Counter's value with id: " + counter.id() + " is: " + value + " on thread: " + Thread.currentThread().getName());
      return value;
   }
}
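For context, the counter is invoked from a map operation roughly like this (the input data, app name, and the collect() action in this sketch are my assumptions, not part of the original setup):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class CounterUsage {
   public static void main(String[] args) {
      // master is expected to be supplied via spark-submit
      JavaSparkContext jsc = new JavaSparkContext(new SparkConf().setAppName("counter-usage"));
      Counter counter = new Counter(jsc);

      // increment() executes inside the map closure, i.e. on the executors,
      // where each task only sees its own local copy of the accumulator.
      jsc.parallelize(Arrays.asList("a", "b", "c"))
         .map(s -> counter.increment())
         .collect();

      jsc.stop();
   }
}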

As far as I understand the documentation, this should work fine when the application is run on multiple worker nodes:

Accumulators are variables that are only “added” to through an associative and commutative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers can add support for new types.

But here is the result when the counter is incremented on two different workers; it looks as if the state is not shared between the nodes:

INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-6
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-6
INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-0
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-0

Do I misunderstand the accumulator concept, or is there some setting that I must start the task with?


1 Reply


It shouldn't work:

Tasks running on a cluster can then add to it using the add method. However, they cannot read its value. Only the driver program can read the accumulator’s value, using its value method.

Each task has its own copy of the accumulator, which is updated locally and merged with the "shared" copy on the driver once the task has finished and its result has been reported.
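A minimal sketch of the supported pattern, assuming a standalone driver program (the app name and sample data are illustrative): tasks only call add(), and value() is read on the driver after the action has completed:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

public class DriverSideRead {
   public static void main(String[] args) {
      JavaSparkContext jsc = new JavaSparkContext(new SparkConf().setAppName("accumulator-demo"));

      // Registered on the driver; tasks receive their own local copies.
      LongAccumulator counter = jsc.sc().longAccumulator("counter");

      // Tasks may only add(); per-task copies are merged on the driver
      // after each task finishes and reports its result.
      jsc.parallelize(Arrays.asList(1, 2, 3, 4))
         .foreach(x -> counter.add(1));

      // Only here, on the driver and after the action, is value() reliable.
      System.out.println("count = " + counter.value()); // prints: count = 4

      jsc.stop();
   }
}

If a count must be observable inside the job itself, the usual alternative is to compute it with a transformation or action (e.g. count() or reduce()) rather than an accumulator.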

The old Accumulator API (now a wrapper around AccumulatorV2) actually threw an exception when value was called from within a task, but for some reason that check was omitted in AccumulatorV2.

What you experience is actually similar to the old behavior described in How to print accumulator variable from within task (seem to "work" without calling value method)?

