amazon s3 - Replacing HDFS on local disk with S3 gives an error (org.apache.hadoop.service.AbstractService)

We are trying to set up Cloudera 5.5 so that S3 is the default filesystem instead of HDFS on local disk. For that we have already configured the necessary properties in core-site.xml:

<property>
    <name>fs.s3a.access.key</name>
    <value>################</value>
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>###############</value>
</property>
<property>
    <name>fs.default.name</name>
    <value>s3a://bucket_Name</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>s3a://bucket_Name</value>
</property>

After setting this up, we were able to browse the files in the S3 bucket with the command

hadoop fs -ls /

and it shows only the files available on S3.

But when we start the YARN services, the JobHistory Server fails to start with the error below, and we get the same error when launching Pig jobs:

PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR   org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils   
Unable to create default file context [s3a://kyvosps]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
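
For reference, hadoop fs -ls goes through the org.apache.hadoop.fs.FileSystem API (bound by fs.s3a.impl), while JobHistoryUtils in the trace above creates a FileContext, which resolves the scheme through the separate AbstractFileSystem binding. The same exception can be reproduced outside the JobHistory Server with a small test like the sketch below (class name and bucket are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;

public class S3aBindingCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // picks up core-site.xml from the classpath
        URI bucket = new URI("s3a://bucket_name/");

        // Path taken by "hadoop fs -ls": resolved via fs.s3a.impl -> works.
        FileSystem fs = FileSystem.get(bucket, conf);
        System.out.println("FileSystem impl: " + fs.getClass().getName());

        // Path taken by JobHistoryUtils: resolved via fs.AbstractFileSystem.s3a.impl.
        // Without that binding this throws UnsupportedFileSystemException:
        // "No AbstractFileSystem for scheme: s3a".
        FileContext fc = FileContext.getFileContext(bucket, conf);
        System.out.println("FileContext FS: " + fc.getDefaultFileSystem().getUri());
    }
}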

While searching on the Internet, we found that we also need to set the following properties in core-site.xml:

<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  <description>The implementation class of the S3A Filesystem</description>
</property>
<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    <description>The FileSystem for  S3A Filesystem</description>
</property>

After setting the above properties, we get the following error:

org.apache.hadoop.service.AbstractService   
Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
    at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:131)
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)
    at org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)

The jars needed for this are in place, but we are still getting the error. Any help would be great. Thanks in advance.

Update

I tried removing the property fs.AbstractFileSystem.s3a.impl, but it gives me the same exception I was getting previously, which is:

org.apache.hadoop.security.UserGroupInformation 
PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
ERROR   org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils   
Unable to create default file context [s3a://bucket_name]
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: s3a
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:154)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:337)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:334)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:451)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:473)

1 Reply


The problem is not with the location of the jars.

The problem is with the setting:

<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    <description>The FileSystem for  S3A Filesystem</description>
</property>

This setting is not needed. Because of it, Hadoop searches for the following constructor in the S3AFileSystem class, and no such constructor exists:

S3AFileSystem(URI theUri, Configuration conf);

The following exception clearly shows that it is unable to find a constructor for S3AFileSystem with URI and Configuration parameters:

java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(java.net.URI, org.apache.hadoop.conf.Configuration)
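
Internally, AbstractFileSystem.createFileSystem instantiates the configured class through reflection using a (URI, Configuration) constructor. S3AFileSystem, like other FileSystem implementations, has a no-argument constructor and is configured via initialize(URI, Configuration), so the reflective lookup fails. Roughly (a simplified sketch, not the actual Hadoop source):

import java.lang.reflect.Constructor;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class ConstructorLookupSketch {
    public static void main(String[] args) throws Exception {
        // In essence, AbstractFileSystem.newInstance looks up a
        // (URI, Configuration) constructor on the configured class by reflection.
        Class<?> clazz = Class.forName("org.apache.hadoop.fs.s3a.S3AFileSystem");
        Constructor<?> ctor = clazz.getDeclaredConstructor(URI.class, Configuration.class);
        // The line above throws java.lang.NoSuchMethodException for S3AFileSystem,
        // which is the exception wrapped in the RuntimeException shown in the log.
        System.out.println(ctor);
    }
}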

To resolve this problem, remove the fs.AbstractFileSystem.s3a.impl setting from core-site.xml. Having only the fs.s3a.impl setting in core-site.xml should solve your problem.

EDIT: org.apache.hadoop.fs.s3a.S3AFileSystem extends FileSystem, not AbstractFileSystem.

Hence, you cannot set the value of fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3AFileSystem, since org.apache.hadoop.fs.s3a.S3AFileSystem is not an AbstractFileSystem implementation.

I am using Hadoop 2.7.0, and in this version S3A is not exposed as an AbstractFileSystem.

There is a JIRA ticket, https://issues.apache.org/jira/browse/HADOOP-11262, to implement this, and the fix is available in Hadoop 2.8.0.
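
For reference, the binding added by that ticket is a thin DelegateToFileSystem adapter around S3AFileSystem. Paraphrased as a sketch (the actual class shipped in Hadoop 2.8.0 may differ in detail):

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

// Sketch of org.apache.hadoop.fs.s3a.S3A: it adapts the FileSystem-based
// S3AFileSystem to the AbstractFileSystem API expected by FileContext.
public class S3A extends DelegateToFileSystem {
    public S3A(URI theUri, Configuration conf) throws IOException, URISyntaxException {
        super(theUri, new S3AFileSystem(), conf, "s3a", false);
    }
}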

Assuming your jar exposes S3A as an AbstractFileSystem, you need to set the following value for fs.AbstractFileSystem.s3a.impl:

<property>
    <name>fs.AbstractFileSystem.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3A</value>
</property>

That will solve your problem.
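
As a quick sanity check after updating core-site.xml (assuming the configuration is on the classpath), something like the following should now resolve the default file context without the UnsupportedFileSystemException:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class VerifyS3aFileContext {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // reads core-site.xml, so fs.defaultFS = s3a://bucket_Name
        FileContext fc = FileContext.getFileContext(conf);
        System.out.println("Default FS: " + fc.getDefaultFileSystem().getUri());
        System.out.println("Root exists: " + fc.util().exists(new Path("/")));
    }
}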

