
java - Exception in createBlockOutputStream when copying data into HDFS

I am getting the warning messages below while copying data into HDFS. I have a 6-node cluster running. Every time during a copy, the same two nodes are excluded and the following warnings are displayed.

    INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.226.136:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1116)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
13/11/04 05:02:15 INFO hdfs.DFSClient: Abandoning BP-603619794-127.0.0.1-1376359904614:blk_-7294477166306619719_1917
13/11/04 05:02:15 INFO hdfs.DFSClient: Excluding datanode 192.168.226.136:50010
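The "Bad connect ack with firstBadLink" line means the client could not complete the write pipeline through the datanode at 192.168.226.136:50010, so that node gets excluded. Before digging into HDFS itself, it is worth confirming the data transfer port is reachable from the machine running the copy. A minimal sketch, assuming nc (netcat) is installed; the address and port are taken straight from the log above:

    # From the client machine: probe the excluded datanode's data transfer port.
    # "Connection refused" or a timeout here points to a firewall or a dead
    # datanode process rather than an HDFS-level problem.
    nc -zv 192.168.226.136 50010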

Datanode logs

2014-02-07 04:22:01,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:                     IOException in offerService
          java.io.IOException: Failed on local exception: java.io.IOException: Connection  reset by peer; Host Details : local host is: "datanode4/192.168.226.136"; destination host is: "namenode":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
        at org.apache.hadoop.ipc.Client.call(Client.java:1235)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:170)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:441)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:521)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
        at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
        at sun.nio.ch.IOUtil.read(IOUtil.java:224)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
        at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:56)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:143)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:411)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:276)
        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
        at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
        at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
        at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:941)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:839)
2014-02-07 04:22:04,780 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:05,783 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:06,785 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:07,788 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:08,791 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:09,794 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:10,796 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:11,798 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:12,802 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:13,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:13,818 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.net.ConnectException: Call From datanode4/192.168.226.136 to namenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

I tried SSH from the datanode to the namenode and it works. Can anyone please help me with this one?
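A working SSH session only proves that port 22 is open, while the datanode log above shows the connection to namenode:8020 being refused, so the NameNode RPC port itself is worth probing from datanode4. A quick sketch, assuming nc is available on that node; the host name and IP come from the retry messages above:

    # Run on datanode4: check the NameNode RPC port directly.
    nc -zv namenode 8020
    # or by the IP shown in the retry log
    nc -zv 192.168.226.129 8020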

Please let me know if you need any other details.

1 Reply


Make sure to check the firewall. iptables on the cluster nodes commonly blocks the Hadoop ports (50010 for datanode data transfer, 8020 for the NameNode RPC); the commands below save the current rules, stop the service, and disable it at boot:

    service iptables save
    service iptables stop
    chkconfig iptables off

More details on this blog: http://ahikmat.blogspot.kr/2014/05/three-essential-things-to-do-while.html
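If the nodes run a distribution that uses firewalld instead of iptables (CentOS/RHEL 7 and later, for example), the equivalent quick test would be the commands below. This is only a sketch for isolating the problem; opening the specific Hadoop ports is the better long-term fix:

    # Assumed firewalld-based system (systemd distro); quick test only.
    systemctl stop firewalld
    systemctl disable firewalld

    # After the firewall change, re-run the copy and confirm the previously
    # excluded datanodes report as live.
    hdfs dfsadmin -report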

