Submitting MR Programs from Local Eclipse to YARN


1. Generally, submitting a job from Eclipse on a local Windows machine to a YARN cluster fails with the following error:

Diagnostics: Exception from container-launch.
Container id: container_1526537597068_0006_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control

Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no job control

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1

2. Cause:
When the client submits the job, YARNRunner.java calls the createApplicationSubmissionContext method, which builds the command used to launch the MRAppMaster. That command is generated according to the conventions of the client's local OS. Because the client here runs on Windows, it emits Windows-style environment-variable references such as %JAVA_HOME% instead of $JAVA_HOME; when the Linux NodeManager hands that command to /bin/bash, the leading % is parsed as a job-control specification, which is exactly why the error above reads "fg: no job control".

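For context (the original post does not show the client-side driver), here is a minimal sketch of the kind of configuration such a submission uses, assuming the NameNode/ResourceManager host 192.168.92.150 that appears in the appContext dump below; the class name RemoteSubmitConf is invented here. Note the last property: on Hadoop 2.4+, mapreduce.app-submission.cross-platform makes the stock client emit platform-neutral launch commands, which sidesteps this error without touching YARNRunner.

    import org.apache.hadoop.conf.Configuration;

    public class RemoteSubmitConf {
        // Sketch of a job Configuration for submitting from a Windows
        // client to the remote cluster; host/port come from the dump below.
        public static Configuration create() {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://192.168.92.150:8020");
            conf.set("mapreduce.framework.name", "yarn");
            // Assumption: the ResourceManager runs on the same host as the NameNode.
            conf.set("yarn.resourcemanager.hostname", "192.168.92.150");
            // Hadoop 2.4+: emit cross-platform launch commands so the AM command
            // runs under /bin/bash even when the client is on Windows.
            conf.set("mapreduce.app-submission.cross-platform", "true");
            return conf;
        }
    }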
[Screenshot: the createApplicationSubmissionContext method in YARNRunner]

3. Manually patch YARNRunner to target Linux

// Construct necessary information to start the MR AM
ApplicationSubmissionContext appContext = createApplicationSubmissionContext(conf, jobSubmitDir, ts);

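The post does not include the patch itself. Below is a minimal sketch of the commonly used workaround: copy YARNRunner.java into your own project under the same package (org.apache.hadoop.mapred) so it shadows the class in the jar, then rewrite the generated AM command from Windows-style %VAR% references to Unix-style $VAR before the ContainerLaunchContext is built. The helper name toUnixCommand is invented here:

    // Hypothetical helper added to the copied YARNRunner.java: converts
    // Windows-style environment-variable references (%JAVA_HOME%) in the
    // AM launch command into Unix-style ($JAVA_HOME) so /bin/bash accepts it.
    private static String toUnixCommand(String cmd) {
        // A Windows client may also emit ';' as the CLASSPATH separator,
        // which would likewise need mapping to ':' for Linux.
        return cmd.replaceAll("%([A-Za-z_][A-Za-z0-9_]*)%", "\\$$1");
    }

    // Applied inside createApplicationSubmissionContext, to each entry of the
    // final command list (vargsFinal in the Hadoop 2.x sources) just before
    // the ContainerLaunchContext is created:
    //     for (int i = 0; i < vargsFinal.size(); i++) {
    //         vargsFinal.set(i, toUnixCommand(vargsFinal.get(i)));
    //     }

On Hadoop 2.4 and later the same effect is available without a patched class by setting mapreduce.app-submission.cross-platform to true, as in the configuration sketch above.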
4. After the patch, run the job again under the debugger; the value of appContext is now as follows (note that the command field uses Unix-style $JAVA_HOME):

application_id { id: 7 cluster_timestamp: 1526537597068 }
application_name: "wc-fat.jar"
queue: "default"
am_container_spec {
  localResources { key: "job.jar" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.jar" } size: 73165925 timestamp: 1526810933633 type: PATTERN visibility: APPLICATION pattern: "(?:classes/|lib/).*" } }
  localResources { key: "jobSubmitDir/job.splitmetainfo" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.splitmetainfo" } size: 15 timestamp: 1526810935006 type: FILE visibility: APPLICATION } }
  localResources { key: "jobSubmitDir/job.split" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.split" } size: 355 timestamp: 1526810934874 type: FILE visibility: APPLICATION } }
  localResources { key: "job.xml" value { resource { scheme: "hdfs" host: "192.168.92.150" port: 8020 file: "/tmp/hadoop-yarn/staging/hadoop/.staging/job_1526537597068_0007/job.xml" } size: 93151 timestamp: 1526810944696 type: FILE visibility: APPLICATION } }
  tokens: "HDTS\000\000\001\025MapReduceShuffleToken\b\342\214TYX\031\303\033"
  environment { key: "SHELL" value: "/bin/bash" }
  environment { key: "CLASSPATH" value: "$PWD:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:job.jar/job.jar:job.jar/classes/:job.jar/lib/*:$PWD/*" }
  environment { key: "LD_LIBRARY_PATH" value: "$PWD" }
  command: "$JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr "
  application_ACLs { accessType: APPACCESS_VIEW_APP acl: " " }
  application_ACLs { accessType: APPACCESS_MODIFY_APP acl: " " }
}
cancel_tokens_when_complete: true
maxAppAttempts: 2
resource { memory: 1536 virtual_cores: 1 }
applicationType: "MAPREDUCE"

[Additionally]
As corroboration, the staged job files can be seen on HDFS:

[Screenshots: the job's staging files under /tmp/hadoop-yarn/staging on HDFS]



Author: sparkle123
Link: https://www.jianshu.com/p/f7912da85b61

