adarsh
2010-11-29 12:30:18 UTC
Dear all,
Thanks for your replies. I have now successfully integrated Hadoop with SGE, so that my ./qhost -F | grep hdfs command shows
all the data paths.
However, when I run a simple wordcount job, the job remains in the qw state.
The logs on the execution hosts say:
11/29/2010 16:47:34| main|ws37-user-lin|E|shepherd of job 1.1 exited with exit status = 27
11/29/2010 16:47:34| main|ws37-user-lin|E|can't open usage file "active_jobs/1.1/usage" for job 1.1: No such file or directory
11/29/2010 16:47:34| main|ws37-user-lin|E|11/29/2010 16:47:34 [0:9462]: unable to find shell "/bin/csh"
How can I get rid of this?
Also, is it sufficient to scp the accounting file to all nodes, or must /default/common be mounted over NFS?
I simply copied it to all execution hosts.
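For reference, the "unable to find shell /bin/csh" error usually means the queue's default shell is csh but csh is not installed on the execution host. A minimal sketch of two possible workarounds (assuming the default queue name all.q, which may differ on your cluster; untested here):

```
# Check which shell the queue is configured to use (SGE queues default to /bin/csh):
qconf -sq all.q | grep -E 'shell|shell_start_mode'

# Option 1: install csh/tcsh on every execution host, e.g. on a yum-based distro:
#   yum install tcsh

# Option 2: switch the queue to bash and honor the job script's #! line:
qconf -mattr queue shell /bin/bash all.q
qconf -mattr queue shell_start_mode unix_behavior all.q
```

With shell_start_mode set to unix_behavior, the shepherd starts the job script via its #! interpreter line instead of the queue's configured shell.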
Thanks in advance,
Adarsh Sharma
------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=300212
To unsubscribe from this discussion, e-mail: [users-***@gridengine.sunsource.net].