Hadoop Pitfall Notes: an HDFS File Append Failure Caused by the Hadoop Version

Author: 胸怀大海的小鱼缸 | Published 2017-07-14 11:22 | Source: https://www.haomeiwen.com/subject/mnynhxtx.html

Today, while practicing reading, writing, and appending to HDFS files, reading and writing both worked fine, but the append failed. The error was as follows:


2017-07-14 10:50:00,046 WARN [org.apache.hadoop.hdfs.DFSClient] - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
2017-07-14 10:50:00,052 ERROR [org.apache.hadoop.hdfs.DFSClient] - Failed to close inode 16762
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)

After adding the following setting:

[Screenshot lost: the configuration that was added]
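The screenshot itself is gone, but the exception text above names the relevant property, so the addition was presumably along these lines. A minimal sketch of the client-side configuration (the property names come from the error message and the Hadoop docs; that these exact values were used here is an assumption):

    // Keep writing to the surviving datanodes instead of demanding a replacement.
    // Appropriate for small clusters with about as many datanodes as the replication factor.
    Configuration conf = new Configuration();
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

With the DEFAULT policy, an appending client tries to replace any datanode that drops out of the write pipeline; on a two-datanode cluster there is no spare node to recruit, so the append fails.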

After that, the error changed to:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to APPEND_FILE /weather/output/abc.txt for DFSClient_NONMAPREDUCE_1964095166_1 on 192.168.153.1 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-590151867_1 on 192.168.153.1

The lease on the file was evidently still held by the DFSClient from the earlier failed append. Then, after further adding:

[Screenshot lost: the second addition]
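This second screenshot is also lost, so exactly what was added here is unknown. For reference, one common way to clear this lease error, sketched below rather than taken from the article, is to ask the NameNode to recover the stale lease before reopening the append stream:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecovery {
        // Poll the NameNode until the stale lease on 'file' has been released;
        // recoverLease() returns true once the file is closed and the lease is free.
        public static void waitForLease(DistributedFileSystem dfs, Path file)
                throws IOException, InterruptedException {
            while (!dfs.recoverLease(file)) {
                Thread.sleep(1000); // lease recovery is asynchronous, so poll
            }
        }
    }

Simply making sure the first output stream is closed (or creating only one FileSystem instance per process) avoids the lease conflict in the first place.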

After that, the append worked with no further problems.
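For completeness, here is what a minimal append client with the first fix applied might look like. This is a sketch, not the author's code: the NameNode URI is an assumption, while the file path is the one from the error log.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsAppendDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Tolerate a shrunken pipeline instead of failing the append (see above).
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
            conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

            // hdfs://192.168.153.137:9000 is an assumed NameNode address.
            FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.153.137:9000"), conf);
            FSDataOutputStream out = fs.append(new Path("/weather/output/abc.txt"));
            out.write("appended line\n".getBytes("UTF-8"));
            out.close(); // always close, or the lease stays held and the next append fails
            fs.close();
        }
    }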

