Java – how to append to an HDFS file on a very small cluster (3 nodes or less)

I am trying to append to a file on HDFS on a single-node cluster. I also tried a 2-node cluster and got the same exception.

In hdfs-site.xml, dfs.replication is set to 1. If dfs.client.block.write.replace-datanode-on-failure.policy is set to DEFAULT, I get the following exception:

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.37.16:50010], original=[10.10.37.16:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

If I follow the recommendation in the hdfs-default.xml comments for very small clusters (3 nodes or less) and set dfs.client.block.write.replace-datanode-on-failure.policy accordingly, I get the following exception instead:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot append to file/user/hadoop/test. Name node is in safe mode.
The reported blocks 1277 has reached the threshold 1.0000 of total blocks 1277. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 3 seconds.
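
For reference, the replace-datanode-on-failure settings are client-side and can also be set on the Configuration object directly instead of in hdfs-site.xml. A minimal sketch, assuming the standard keys from hdfs-default.xml (NEVER / false are the values typically chosen for 1-2 node setups; pick whatever matches your cluster):

// Sketch: client-side pipeline-recovery settings sometimes used on very small clusters.
// The keys come from hdfs-default.xml; the values below are only an example.
Configuration conf = new Configuration();
// Never try to add a replacement datanode to the write pipeline...
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
// ...or turn the replace-datanode-on-failure feature off entirely.
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "false");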

This is how I append:

import java.io.OutputStream;
import java.io.PrintWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://MY-MACHINE:8020/user/hadoop");
conf.set("hadoop.job.ugi", "hadoop");

// Open the existing file for append and write to it.
FileSystem fs = FileSystem.get(conf);
OutputStream out = fs.append(new Path("/user/hadoop/test"));

PrintWriter writer = new PrintWriter(out);
writer.print("hello world");
writer.close();

Is there anything I did wrong in the code? Maybe something is missing in the configuration? Any help will be appreciated!

Edit

Even though dfs.replication is set to 1, when I check the status of the file with

FileStatus[] status = fs.listStatus(new Path("/user/hadoop"));

I find that the block replication of status[i] is set to 3. I don't think this is the problem, because when I changed the value of dfs.replication to 0 I got a related exception, so it obviously does obey the value of dfs.replication. But just to be on the safe side, is there a way to change the block replication value of each file?
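
For reference, the replication factor can be inspected and changed per file through the FileSystem API. A minimal sketch, reusing the fs and path from above:

// Inspect each entry under /user/hadoop and request a replication factor of 1 where needed.
FileStatus[] status = fs.listStatus(new Path("/user/hadoop"));
for (FileStatus s : status) {
    System.out.println(s.getPath() + " -> replication " + s.getReplication());
    if (s.isFile() && s.getReplication() != 1) {
        fs.setReplication(s.getPath(), (short) 1);  // per-file change; returns false on failure
    }
}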

Solution

As I mentioned in the edit, even though dfs.replication is set to 1, the FileStatus block replication is set to 3.

One possible solution is to run

hadoop fs -setrep -w 1 -R /user/hadoop/

This will recursively change the replication factor of every file in the given directory. The documentation for this command can be found in the Hadoop FileSystem Shell guide.
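
The same thing can be done from Java when shelling out is not convenient. A minimal sketch of a recursive equivalent built on FileSystem.setReplication() (unlike -setrep -w, this only requests the change and does not wait for re-replication to finish):

// Recursively request a replication factor of 1 for every file under /user/hadoop/.
RemoteIterator<LocatedFileStatus> files = fs.listFiles(new Path("/user/hadoop/"), true);
while (files.hasNext()) {
    fs.setReplication(files.next().getPath(), (short) 1);
}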

What needs to be figured out now is why the value in hdfs-site.xml is being ignored, and how to make 1 the default value.

Edit

It turns out that the dfs.replication property must be set in the Configuration instance as well; otherwise, the client requests the default replication factor for the file, which is 3, regardless of the value set in hdfs-site.xml (the replication factor is a per-file attribute chosen by the client when the file is created, so the client has to see the value).

Adding the following statement to the code resolves it:

conf.set("dfs.replication","1");