Java – Amazon EMR: run custom jar with input and output from S3

I'm trying to run an EMR cluster with a custom JAR step. The program takes its input from S3 and writes its output back to S3 (or at least that's what I want it to do). In the step configuration, I have the following in the arguments field:

v3.MaxTemperatureDriver
s3n://hadoopbook/ncdc/all
s3n://hadoop-szhu/max-temp

where s3n://hadoopbook/ncdc/all is the path of the bucket containing the input data (as a side note, the example I'm running is from this book), and hadoop-szhu is my own bucket, where I want to store the output. Following this post, my MapReduce driver looks like this:

package v3;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import v1.MaxTemperatureReducer;

public class MaxTemperatureDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    if (args.length != 2) {
      System.err.printf("Usage: %s [generic options] <input> <output>\n", getClass().getSimpleName());
      ToolRunner.printGenericCommandUsage(System.err);
      return -1;
    }

    Job job = new Job(getConf(), "Max temperature");
    job.setJarByClass(getClass());

    // Input and output paths come straight from the step arguments (S3 URIs here)
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    // The reducer doubles as the combiner, since taking a max is associative
    job.setCombinerClass(MaxTemperatureReducer.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner parses generic Hadoop options before handing the rest to run()
    int exitCode = ToolRunner.run(new MaxTemperatureDriver(), args);
    System.exit(exitCode);
  }
}
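
If I understand correctly, when EMR runs a custom JAR step, the effect is roughly the same as invoking hadoop jar on the master node, with the main class passed as the first argument (since my JAR's manifest doesn't specify one). A rough equivalent, where max-temp.jar is a hypothetical name for my compiled JAR:

hadoop jar max-temp.jar v3.MaxTemperatureDriver s3n://hadoopbook/ncdc/all s3n://hadoop-szhu/max-temp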

However, when I try to run it, I receive the following error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3n

I also tried to copy the data from S3 to the cluster with distcp (run after SSHing into the master node):

hadoop distcp \
  -Dfs.s3n.awsAccessKeyId='...' \
  -Dfs.s3n.awsSecretAccessKey='...' \
  s3n://hadoopbook/ncdc/all input/ncdc/all

However, this failed as well; an excerpt of the errors is below:

2016-09-03 07:07:11,858 FATAL [IPC Server handler 6 on 43495] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1472884232220_0001_m_000000_0 - exited : java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:224)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    ... 10 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:511)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
    ... 9 more

I'm not sure what the problem is, but I'd be happy to provide more details (please comment below). Thank you!

Solution

s3n:// is an old protocol. You should use s3:// instead.
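
Applied to the step configuration above, the arguments field becomes:

v3.MaxTemperatureDriver
s3://hadoopbook/ncdc/all
s3://hadoop-szhu/max-temp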

Reference: http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html
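
The same applies to the distcp command. A minimal sketch, assuming the cluster's EC2 instance role already grants S3 access (EMRFS normally picks up credentials from the instance profile, so the explicit fs.s3n.* key properties should no longer be needed):

hadoop distcp s3://hadoopbook/ncdc/all input/ncdc/all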
