This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

Fix append action failure #1801

Open

littlezhou opened this issue Jun 6, 2018 · 1 comment

@littlezhou (Contributor)

```
append -length 100 -file /src/t2
```

Log:

```
Action starts at Wed Jun 06 10:40:54 CST 2018 : Read /src/t2
Append to /src/t2
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.239.12.140:50010,DS-d2efbea0-5fb7-46fa-8b16-6c54364fe67c,DISK]], original=[DatanodeInfoWithStorage[10.239.12.140:50010,DS-d2efbea0-5fb7-46fa-8b16-6c54364fe67c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
```
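For reference, the exception message itself names the client-side knob involved: `dfs.client.block.write.replace-datanode-on-failure.policy`. A minimal sketch of relaxing it from a plain HDFS Java client (whether `NEVER` is acceptable depends on the cluster's durability requirements; this is a workaround for small clusters, not necessarily the fix for the action itself):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendWithRelaxedPolicy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // On small clusters the DEFAULT policy can fail an append when no
        // spare datanode exists to replace a bad one in the write pipeline.
        // NEVER tells the client to keep writing on the remaining datanodes.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.append(new Path("/src/t2"))) {
            out.write(new byte[100]); // mirrors `append -length 100 -file /src/t2`
        }
    }
}
```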

@qiyuangong (Contributor)

I'll check the details in a local environment.
My guess is that this issue is caused by the replication factor being greater than the number of live datanodes, e.g. replication=3 while only 2 datanodes are alive, so pipeline recovery cannot find a replacement datanode.
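A quick way to test that guess from a Java client, as a sketch (it assumes `fs.defaultFS` is an `hdfs://` URI so the cast succeeds; `hdfs dfsadmin -report` gives the same numbers from the shell):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class CheckReplicationVsDatanodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(conf)) {
            // Replication factor of the file that failed to append.
            short replication = dfs.getFileStatus(new Path("/src/t2")).getReplication();
            // Number of datanodes the NameNode currently reports as live.
            DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
            System.out.printf("replication=%d, live datanodes=%d%n",
                    replication, live.length);
            if (replication > live.length) {
                System.out.println("Pipeline recovery has no replacement datanode to pick.");
            }
        }
    }
}
```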

@qiyuangong added the bug label on Jan 29, 2019