
windows - Why does starting a streaming query lead to "ExitCodeException exitCode=-1073741515"?

I've been trying to get used to the new Structured Streaming API, but it keeps giving me the error below as soon as I start a .writeStream query.

Any idea what could be causing this? The closest I could find was an ongoing Spark bug about splitting the checkpoint and metadata folders between the local file system and HDFS, but that doesn't seem to apply here. I'm running on Windows 10 with Spark 2.2 and IntelliJ.
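For reference, this is roughly the shape of the query that fails (a sketch, not my actual FileStream.scala; the schema and paths are placeholders):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types._

    object FileStreamSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("FileStreamSketch")
          .master("local[*]")
          .getOrCreate()

        // File sources need an explicit schema; this one is a placeholder.
        val schema = new StructType().add("id", LongType).add("value", StringType)

        val lines = spark.readStream
          .schema(schema)
          .csv("C:/data/input") // placeholder input directory

        // start() is where it blows up: Spark writes the stream's metadata file
        // through Hadoop's RawLocalFileSystem, which shells out to winutils.exe
        // on Windows to set file permissions.
        val query = lines.writeStream
          .format("console")
          .start()

        query.awaitTermination()
      }
    }

The full output: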

17/08/29 21:47:39 ERROR StreamMetadata: Error writing stream metadata StreamMetadata(41dc9417-621c-40e1-a3cb-976737b83fb7) to C:/Users/jason/AppData/Local/Temp/temporary-b549ee73-6476-46c3-aaf8-23295bd6fa8c/metadata
ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:778)
    at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:116)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:114)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:114)
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:240)
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:278)
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:282)
    at FileStream$.main(FileStream.scala:157)
    at FileStream.main(FileStream.scala)
Exception in thread "main" ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:789)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:778)
    at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:76)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:116)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$6.apply(StreamExecution.scala:114)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:114)
    at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:240)
    at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:278)
    at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:282)
    at FileStream$.main(FileStream.scala:157)
    at FileStream.main(FileStream.scala)
17/08/29 21:47:39 INFO SparkContext: Invoking stop() from shutdown hook
17/08/29 21:47:39 INFO SparkUI: Stopped Spark web UI at http://192.168.178.21:4040
17/08/29 21:47:39 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/08/29 21:47:39 INFO MemoryStore: MemoryStore cleared
17/08/29 21:47:39 INFO BlockManager: BlockManager stopped
17/08/29 21:47:39 INFO BlockManagerMaster: BlockManagerMaster stopped
17/08/29 21:47:39 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/08/29 21:47:39 INFO SparkContext: Successfully stopped SparkContext
17/08/29 21:47:39 INFO ShutdownHookManager: Shutdown hook called
17/08/29 21:47:39 INFO ShutdownHookManager: Deleting directory C:\Users\jason\AppData\Local\Temp\temporary-b549ee73-6476-46c3-aaf8-23295bd6fa8c
17/08/29 21:47:39 INFO ShutdownHookManager: Deleting directory C:\Users\jason\AppData\Local\Temp\spark-117ed625-a588-4dcb-988b-2055ec5fa7ec

Process finished with exit code 1

1 Reply

Actually, I had the same problem while running Spark unit tests on my local machine. It was caused by winutils.exe in the %HADOOP_HOME%\bin folder failing:

Input: %HADOOP_HOME%\bin\winutils.exe chmod 777 %SOME_TEMP_DIRECTORY%

Output:

winutils.exe - System Error
The code execution cannot proceed because MSVCR100.dll was not found.
Reinstalling the program may fix this problem.

After some searching, I found an issue on Steve Loughran's winutils project: Windows 10: winutils.exe doesn't work. The exit code is consistent with this: -1073741515 is 0xC0000135, the Windows STATUS_DLL_NOT_FOUND status, i.e. a required DLL could not be loaded.
In particular, it says that installing the VC++ redistributable package should fix the problem (and it did in my case): How do I fix this error "msvcp100.dll is missing"
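For anyone hitting this from unit tests: once the VC++ runtime is installed and winutils.exe itself runs, Spark still has to be able to find it. A minimal setup sketch (hadoop.home.dir is the system property Hadoop's Shell class reads; the path is a placeholder for wherever your winutils.exe lives):

    import org.apache.spark.sql.SparkSession

    object LocalSparkSmokeTest {
      def main(args: Array[String]): Unit = {
        // Must be set before any Spark/Hadoop code touches the file system.
        // Hadoop looks for winutils.exe under <hadoop.home.dir>\bin.
        System.setProperty("hadoop.home.dir", "C:\\hadoop") // placeholder path

        val spark = SparkSession.builder()
          .appName("LocalSparkSmokeTest")
          .master("local[*]")
          .getOrCreate()

        // If winutils.exe and its MSVCR100.dll dependency now load correctly,
        // this local write no longer fails with exitCode=-1073741515.
        spark.range(10).write.mode("overwrite").csv("C:/tmp/smoke-test")

        spark.stop()
      }
    }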

