Java – MapReduce job in a headless environment fails N times due to AM container-launch exception
When running a MapReduce job in a headless environment on Mac OS X (for example, when running a job over SSH as a specific user), I get the following exception or something similar:
2013-12-04 15:08:28,513 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application application_1386194876944_0001 failed 2 times due to AM Container for appattempt_1386194876944_0001_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
By contrast, if I log in as that user at the console, no error occurs, the MR job completes, and a Java icon labeled "mrappmaster" pops up in the Dock.
I've narrowed it down to the ResourceManager starting a Java process without passing -Djava.awt.headless=true. When that happens in a headless environment, the JVM has no permission to display on the root window. This came up in a few other places as well, and I fixed each of them.
This is not a permissions issue (as has been suggested elsewhere) or a missing directory.
But I can't figure out how to get at this last remaining unauthorized access to the root window.
I have added the -Djava.awt.headless=true option to the following:
> HADOOP_OPTS in hadoop-env.sh
> HADOOP_JOB_HISTORYSERVER_OPTS in mapred-env.sh
> YARN_OPTS in yarn-env.sh
> YARN_RESOURCEMANAGER_OPTS in yarn-env.sh (though this may be redundant with YARN_OPTS)
> mapred.{map|reduce}.child.java.opts and mapred.child.java.opts in mapred-site.xml
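Concretely, the shell-side additions above looked like the lines below. This is a sketch of my edits, not canonical configuration: the variable names are the standard ones from the Hadoop 2.2.0 etc/hadoop scripts, and file locations may differ on your install.

```shell
# etc/hadoop/hadoop-env.sh -- applied to all Hadoop daemons
export HADOOP_OPTS="$HADOOP_OPTS -Djava.awt.headless=true"

# etc/hadoop/mapred-env.sh -- applied to the JobHistoryServer
export HADOOP_JOB_HISTORYSERVER_OPTS="-Djava.awt.headless=true"

# etc/hadoop/yarn-env.sh -- applied to YARN daemons;
# the ResourceManager-specific variable may duplicate YARN_OPTS
YARN_OPTS="$YARN_OPTS -Djava.awt.headless=true"
export YARN_RESOURCEMANAGER_OPTS="-Djava.awt.headless=true"
```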
What am I missing? Would I be better off adding it to the global Java options?
For reference, this is Mac OS X 10.8.5, running Java 1.6.0_65-b14 and Hadoop 2.2.0 downloaded from Apache. I don't use Homebrew or any other distribution. I'm testing a pseudo-distributed cluster with the wordcount example.
Thank you.
OK, mea culpa. I finally found all the settings to add... by searching for all the "opts" entries in the mapred-default.xml configuration descriptions.
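For anyone hunting down the same entries, a search like the following turns them up. The helper is a sketch: the jar path in the comment is an assumption (it varies by release), but in Hadoop 2.x the mapred-default.xml shipped inside the MapReduce client core jar documents these properties.

```shell
#!/bin/sh
# list_opts: print every <name>...opts...</name> property in an XML
# defaults/config file, e.g. mapred-default.xml.
list_opts() {
  grep -o '<name>[^<]*opts[^<]*</name>' "$1"
}

# Typical use (paths are illustrative and vary by install/version):
#   unzip -p "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-client-core-*.jar \
#     mapred-default.xml > /tmp/mapred-default.xml
#   list_opts /tmp/mapred-default.xml
```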
Here they are:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Djava.awt.headless=true</value>
</property>
<!-- add headless to default -Xmx1024m -->
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Djava.awt.headless=true -Xmx1024m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.admin-command-opts</name>
  <value>-Djava.awt.headless=true</value>
</property>
I had also tried to do the same thing by adding the parameter to _JAVA_OPTIONS in /etc/profile. Java picks it up everywhere except when running mrappmaster!
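That /etc/profile attempt was just the line below. It only reaches JVMs launched from shells that source the profile, which is presumably why the AM container never saw it; HotSpot JVMs that do pick it up print a "Picked up _JAVA_OPTIONS" notice on startup.

```shell
# /etc/profile (config fragment; only affects processes that inherit
# this environment, not containers spawned by YARN daemons)
export _JAVA_OPTIONS="-Djava.awt.headless=true"
```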
I hope this helps others.
Solution
The problem is that the path to the Java executable used by YARN is different from the Java path on your operating system.
It turns out the hard-coded path for Java is /bin/java, and if /bin/java is not your Java executable, the YARN job fails. On OS X, for example, I run Java 1.7 from /usr/bin/java, as follows:
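A quick way to check for this mismatch is a sketch like the one below: it tests whether the path the container launcher will exec (per this answer, /bin/java) actually points at an executable, and compares it with where java lives on your PATH.

```shell
#!/bin/sh
# check_java_path: report whether a given path is an executable file.
# If the path YARN execs is missing, the container exits with code 1,
# matching the container-launch failure in the question.
check_java_path() {
  if [ -x "$1" ]; then
    echo "ok: $1 is executable"
  else
    echo "missing: $1 not found; containers exec'ing it will exit with code 1"
  fi
}

check_java_path /bin/java                            # path the launcher uses
check_java_path "$(command -v java || echo /usr/bin/java)"  # java on your PATH
```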
$ java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
To solve this on OS X, I created a symlink from /bin/java to /usr/bin/java, as follows:
$ sudo ln -s /usr/bin/java /bin/java
Password: *****
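A less invasive alternative to symlinking into /bin is to point the Hadoop scripts at the real JVM explicitly. This is a sketch: JAVA_HOME in hadoop-env.sh is standard, and /usr/libexec/java_home is the OS X idiom for resolving the current JDK, but whether this avoids the hard-coded /bin/java depends on your Hadoop version.

```shell
# etc/hadoop/hadoop-env.sh (config fragment, OS X)
# /usr/libexec/java_home prints the home of the default installed JDK,
# e.g. the 1.7.0_45 install shown in the java -version output above.
export JAVA_HOME=$(/usr/libexec/java_home)
```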
After that, the job runs successfully.