Message-ID: <DDE95BDD51FB774B804A9C029E735EE10379BEDA2D@sausexmbp02.amd.com>
Date: Fri, 18 Sep 2009 15:00:44 -0500
From: "Langsdorf, Mark" <mark.langsdorf@....com>
To: "'Ingo Molnar'" <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] Prevent immediate process rescheduling
> -----Original Message-----
> From: Ingo Molnar [mailto:mingo@...e.hu]
> Sent: Friday, September 18, 2009 2:55 PM
> To: Langsdorf, Mark; Peter Zijlstra; Mike Galbraith
> Cc: linux-kernel@...r.kernel.org
> Subject: Re: [PATCH] Prevent immediate process rescheduling
>
>
> (fixed the Cc: lines)
>
> * Mark Langsdorf <mark.langsdorf@....com> wrote:
>
> > Prevent the scheduler from immediately rescheduling a process
> > that just yielded if there is another process available.
> >
> > Originally suggested by Mike Galbraith (efault@....de).
> >
> > Signed-off-by: Mark Langsdorf <mark.langsdorf@....com>
> > ---
> > kernel/sched_fair.c | 16 +++++++++++++++-
> > 1 files changed, 15 insertions(+), 1 deletions(-)
> >
> > diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> > index 652e8bd..4fad08f 100644
> > --- a/kernel/sched_fair.c
> > +++ b/kernel/sched_fair.c
> > @@ -353,11 +353,25 @@ static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > static struct sched_entity *__pick_next_entity(struct cfs_rq *cfs_rq)
> > {
> > struct rb_node *left = cfs_rq->rb_leftmost;
> > + struct sched_entity *se, *curr;
> >
> > if (!left)
> > return NULL;
> >
> > - return rb_entry(left, struct sched_entity, run_node);
> > + se = rb_entry(left, struct sched_entity, run_node);
> > + curr = &current->se;
> > +
> > + /*
> > + * Don't select the entity who just tried to schedule away
> > + * if there's another entity available.
> > + */
> > + if (unlikely(se == curr && cfs_rq->nr_running > 1)) {
> > + struct rb_node *next_node = rb_next(&curr->run_node);
> > + if (next_node)
> > + se = rb_entry(next_node, struct sched_entity, run_node);
> > + }
> > +
> > + return se;
> > }
>
> I suspect some real workload is the motivation of this - what is that
> workload?
Thanks for fixing the cc lines.
The next patch I submitted enables the Pause Filter feature
of the most recent AMD Opteron processors. Pause Filter
lets us detect when a KVM guest VCPU is spinning uselessly on
a contended lock because the lockholder VCPU isn't scheduled.
Ideally, we'd like to switch to that lockholder VCPU's thread,
but at a minimum, we'd like the contending VCPU to yield
meaningfully. This patch prevents schedule() from immediately
rescheduling the thread that is trying to yield, and showed no
performance regressions in my tests.
-Mark Langsdorf
Operating System Research Center
AMD