Discussion: Problems with PEs and resource quotas
mdsteeves
2010-12-13 20:27:05 UTC
We're running SGE 6.2u4 on RHEL5.4.

We've set up Olesen's license integration to help users run jobs on the cluster that require FLEXlm licenses, and we would also like to set up a resource quota so that users can't lock up all of the licenses when they launch jobs:

{
name moe_limit
description limit everyone to no more than 20 moe license
enabled TRUE
limit users {*} to moe=20
}
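For reference, the rule set is managed with qconf (option names as I recall them; "moe_limit" as above):

qconf -arqs moe_limit    # add the resource quota set (opens an editor)
qconf -srqs moe_limit    # show the stored rule set
qconf -mrqs moe_limit    # modify it later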

For some reason, though, we're running into problems: jobs that use PEs and also request certain resources with the "-l" switch get stuck in a qw state, and the scheduling message references the resource quota:

scheduling info: queue instance "***@compute-1-25.local" dropped because it is disabled
                 queue instance "***@compute-0-11.local" dropped because it is disabled
                 queue instance "***@compute-1-26.local" dropped because it is full
                 cannot run in queue "himem.q" because it is not contained in its hard queue list (-q)
                 cannot run because it exceeds limit "steevmi1/////" in rule "moe_limit/1"
                 cannot run in PE "orte" because it only offers 0 slots

For testing, I've been using the following script:

#!/bin/bash

#$ -S /bin/ksh
#$ -j y
#$ -cwd
#$ -q mpi.q
#$ -pe orte 8
#$ -N mdsTest
## The following all work:
## #$ -l h_cpu=1
## #$ -l mem_total=5G
## #$ -l arch=lx26-amd64
## #$ -l moe=1
## Any of the following do not work, and cause the job to hang in the queue:
## #$ -l q=mpi.q
## #$ -l hostname="compute-0-2"
## #$ -l hostname="compute-0-78|compute-0-106|compute-0-69|compute-0-68|compute-0-100|compute-0-63|compute-0-93|compute-0-82|compute-0-76"

hostname
sleep 300

Even switching from "-q mpi.q" to "-masterq mpi.q" doesn't help any. If
we disable the resource quota rule, then the jobs run without any
problems. Is there something that we're missing?


-Mike
--
Michael Steeves (***@gmail.com)

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305177

To unsubscribe from this discussion, e-mail: [users-***@gridengine.sunsource.net].
reuti
2010-12-14 10:18:22 UTC
Hi,
Post by mdsteeves
We're running SGE 6.2u4 on RHEL5.4.
We've set up Olesen to help users run jobs on the cluster that require
FLEXlm licenses, and would also like to be able to set up a resource
quota so that when users launch jobs they're not able to lock up all of the licenses:
{
name moe_limit
description limit everyone to no more than 20 moe license
enabled TRUE
limit users {*} to moe=20
}
For some reason, though, we're running into problems: jobs that use PEs and also request certain resources with the "-l" switch get stuck in a qw state, and the scheduling message references the resource quota:
queue instance "***@compute-1-25.local" dropped because it is disabled
queue instance "***@compute-0-11.local" dropped because it is disabled
queue instance "***@compute-1-26.local" dropped because it is full
cannot run in queue "himem.q" because it is not contained in its hard queue list (-q)
cannot run because it exceeds limit "steevmi1/////" in rule "moe_limit/1"
cannot run in PE "orte" because it only offers 0 slots
#!/bin/bash
#$ -S /bin/ksh
#$ -j y
#$ -cwd
#$ -q mpi.q
#$ -pe orte 8
#$ -N mdsTest
## #$ -l h_cpu=1
## #$ -l mem_total=5G
## #$ -l arch=lx26-amd64
## #$ -l moe=1
## Any of the following do not work, and cause the job to hang in the queue:
## #$ -l q=mpi.q
## #$ -l hostname="compute-0-2"
## #$ -l hostname="compute-0-78|compute-0-106|compute-0-69|compute-0-68|compute-0-100|compute-0-63|compute-0-93|compute-0-82|compute-0-76"
I don't see any resource reservation in the above lines: #$ -R y

For it to have an effect, it's also necessary to set "max_reservation 20" (or an appropriate value) in the scheduler configuration. Then slots should be reserved for the job, so that it won't starve.
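I.e. something like this (a sketch; the value 20 is just chosen to match your quota, and "job.sh" is a placeholder):

qsub -R y -pe orte 8 job.sh    # submit the job with a reservation request
qconf -msconf                  # edit the scheduler configuration and set:
    max_reservation 20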

Does this fix the issue?

-- Reuti
Post by mdsteeves
hostname
sleep 300
Even switching from "-q mpi.q" to "-masterq mpi.q" doesn't help any. If
we disable the resource quota rule, then the jobs run without any
problems. Is there something that we're missing?
-Mike
--
------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305386

mdsteeves
2010-12-14 20:02:30 UTC
On 12/14/10 5:18 AM, reuti wrote:

[SNIP]
Post by reuti
I don't see any resource reservation in the above lines: #$ -R y
For it to have an effect, it's also necessary to set "max_reservation 20" (or an appropriate value) in the scheduler configuration. Then slots should be reserved for the job, so that it won't starve.
Does this fix the issue?
Resource reservation for the resource quota piece? We don't use that at the moment -- the moe_limit that's currently in place limits each user to no more than 20 running jobs, which is the behavior that we want. The problem we're having is that other jobs, which don't need or use these licenses, get stuck in a "qw" state with messages referencing the moe_limit resource quota. If we go in and disable the resource quota, the job gets dispatched to a node and runs without problems.

If we don't use either "-l qname=...." or "-l hostname=...." when we
submit the job, then it launches without problem.

If we don't specify a parallel environment, but leave the -l requests in
the job submission, then it launches without a problem.

While I haven't tested each and every resource that could be requested
when a job is submitted, the jobs only seem to stick in a qw state if we
try to request either a queue or a host.


-Mike
--
Michael Steeves (***@gmail.com)

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305578

reuti
2010-12-14 20:34:14 UTC
Post by mdsteeves
[SNIP]
Post by reuti
I don't see any resource reservation in the above lines: #$ -R
And to have an effect it's necessary to set "max_reservation 20" or an appropriate value in the scheduler configuration. Then slots should be reserved for this job, so that he won't die of starvation.
Is this fixing the issue?
Resource reservation for the resource quota piece? We don't use that at
the moment -- the moe_limit that's currently in place limits each user
to only be able to have 20 jobs running, which is the behavior that we
want. The problem we're having is that other jobs, that don't need or
use these licenses, get stuck in a "qw" state, and reference the
moe_limit resource quota. If we go in and disable the resource quota,
then the job gets dispatched to a node and runs without problem.
AFAICS, all the examples you mention as not working limit the number of potential queue instances:

## #$ -l q=mpi.q
## #$ -l hostname="compute-0-2"
## #$ -l hostname...

Hence SGE has fewer options to schedule the job. Or does it also happen in an empty cluster?

Nevertheless, one bug to mention: you can't use -q in combination with -l h=. The workaround is to request the hostnames in the -q request:

-q ***@compute-0-2
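E.g. with the queue name from your test script (the hostnames here are just examples):

qsub -pe orte 8 -q mpi.q@compute-0-2 job.sh

Several hosts can be given as a comma-separated list:

qsub -pe orte 8 -q mpi.q@compute-0-2,mpi.q@compute-0-3 job.sh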

-- Reuti
Post by mdsteeves
If we don't use either "-l qname=...." or "-l hostname=...." when we
submit the job, then it launches without problem.
If we don't specify a parallel environment, but leave the -l requests in
the job submission, then it launches without a problem.
While I haven't tested each and every resource that could be requested
when a job is submitted, the jobs only seem to stick in a qw state if we
try to request either a queue or a host.
-Mike
--
------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305586

mdsteeves
2010-12-14 21:13:31 UTC
Post by reuti
Post by mdsteeves
Resource reservation for the resource quota piece? We don't use that at
the moment -- the moe_limit that's currently in place limits each user
to only be able to have 20 jobs running, which is the behavior that we
want. The problem we're having is that other jobs, that don't need or
use these licenses, get stuck in a "qw" state, and reference the
moe_limit resource quota. If we go in and disable the resource quota,
then the job gets dispatched to a node and runs without problem.
## #$ -l q=mpi.q
## #$ -l hostname="compute-0-2"
## #$ -l hostname...
Hence SGE has less options to schedule the job. Or does it also happen in an empty cluster?
We're working with the user to see what they're trying to accomplish
with the resource requests, but we're also trying to figure out why the
moe_limit is causing these jobs to sit in qw when enabled.


-Mike
--
Michael Steeves (***@gmail.com)

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305595

weiser
2010-12-15 08:25:42 UTC
Hi Mike,
Post by mdsteeves
We're running SGE 6.2u4 on RHEL5.4.
{
name moe_limit
description limit everyone to no more than 20 moe license
enabled TRUE
limit users {*} to moe=20
}
For some reason, though, we're running into problems with some users
that submit jobs that use PEs, and also request certain resources with
I have just seen this behaviour with 6.2u2_1 when requesting a $fill_up
PE together with a -masterq specification. Disabling the resource limit
caused the job to be scheduled on the next scheduler run.

It did not happen in tests with 6.2u5 and u6, so it seems to be a bug
that was fixed somewhere in between. I do not have 6.2u4 to compare.
Post by mdsteeves
cannot run because it exceeds limit
"steevmi1/////" in rule "moe_limit/1"
cannot run in PE "orte" because it only
offers 0 slots
In my case the message was:

cannot run in PE "orte" because it only offers 7 slots

so always exactly one fewer than requested.

Bye,
--
Michael Weiser science + computing ag
Senior Systems Engineer Geschaeftsstelle Duesseldorf
Martinstrasse 47-55, Haus A
phone: +49 211 302 708 32 D-40223 Duesseldorf
fax: +49 211 302 708 50 www.science-computing.de
--
Vorstand/Board of Management:
Dr. Bernd Finkbeiner, Dr. Roland Niemeier,
Dr. Arno Steitz, Dr. Ingrid Zech
Vorsitzender des Aufsichtsrats/
Chairman of the Supervisory Board:
Michel Lepert
Sitz/Registered Office: Tuebingen
Registergericht/Registration Court: Stuttgart
Registernummer/Commercial Register No.: HRB 382196

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=305722
