Message-ID: <20101213123944.GA14178@balbir.in.ibm.com>
Date: Mon, 13 Dec 2010 18:09:44 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
Cc: Rik van Riel <riel@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Anthony Liguori <aliguori@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 0/3] directed yield for Pause Loop Exiting
* Avi Kivity <avi@...hat.com> [2010-12-13 13:57:37]:
> On 12/11/2010 03:57 PM, Balbir Singh wrote:
> >* Avi Kivity<avi@...hat.com> [2010-12-11 09:31:24]:
> >
> >> On 12/10/2010 07:03 AM, Balbir Singh wrote:
> >> >>
> >> >> Scheduler people, please flame me with anything I may have done
> >> >> wrong, so I can do it right for a next version :)
> >> >>
> >> >
> >> >This is a good problem statement, there are other things to consider
> >> >as well
> >> >
> >> >1. If a hard limit feature is enabled underneath, donating the
> >> >timeslice would probably not make too much sense in that case
> >>
> >> What's the alternative?
> >>
> >> Consider a two vcpu guest with a 50% hard cap. Suppose the workload
> >> involves ping-ponging within the guest. If the scheduler decides to
> >> schedule the vcpus without any overlap, then the throughput will be
> >> dictated by the time slice. If we allow donation, throughput is
> >> limited by context switch latency.
> >>
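A back-of-envelope sketch of the two regimes in the quoted example above,
using assumed, illustrative numbers (a 3 ms timeslice and a 5 us vcpu
switch cost, neither taken from the patches):

/*
 * Without donation each lock handoff waits out the spinner's timeslice;
 * with directed yield it costs roughly one context switch.  The numbers
 * below are assumptions for illustration only.
 */
#include <stdio.h>

int main(void)
{
	double timeslice_s  = 3e-3;  /* assumed scheduler timeslice */
	double ctx_switch_s = 5e-6;  /* assumed vcpu switch latency */

	printf("timeslice-bound handoffs/sec: %.0f\n", 1.0 / timeslice_s);
	printf("switch-bound handoffs/sec:    %.0f\n", 1.0 / ctx_switch_s);
	return 0;
}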
> >
> >If the vcpu holding the lock runs more and is capped, the timeslice
> >transfer is a heuristic that will not help.
>
> Why not? as long as we shift the cap as well.
>
Shifting the cap would break the hard limit, no? Anyway, that is something
for us to keep track of as we add additional heuristics, not a showstopper.
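
To make the point concrete, here is a minimal userspace sketch of "shift
the cap along with the timeslice": when the spinning vcpu donates its
remaining slice to the lock holder, an equal amount of its hard-cap budget
moves with it, so the holder is not throttled mid critical section.  The
struct and the yield_to_with_cap() helper are made up for illustration;
the real patches operate on kernel sched entities, not on this toy.

/*
 * Toy model of timeslice donation plus cap shifting.  All names are
 * hypothetical; values are in microseconds.
 */
#include <stdio.h>

struct vcpu {
	const char *name;
	unsigned int slice_us;	/* remaining timeslice */
	unsigned int cap_us;	/* remaining budget under the hard cap */
};

/* Donate the spinner's remaining slice and an equal share of its cap. */
static void yield_to_with_cap(struct vcpu *spinner, struct vcpu *holder)
{
	unsigned int donate = spinner->slice_us;

	if (donate > spinner->cap_us)
		donate = spinner->cap_us;	/* cannot donate budget we do not have */

	spinner->slice_us -= donate;
	spinner->cap_us   -= donate;
	holder->slice_us  += donate;
	holder->cap_us    += donate;		/* the "shift the cap" part */
}

int main(void)
{
	struct vcpu spinner = { "vcpu0", 2000, 1500 };
	struct vcpu holder  = { "vcpu1",    0,  500 };

	yield_to_with_cap(&spinner, &holder);
	printf("%s: slice=%u cap=%u\n", holder.name, holder.slice_us, holder.cap_us);
	printf("%s: slice=%u cap=%u\n", spinner.name, spinner.slice_us, spinner.cap_us);
	return 0;
}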
--
Three Cheers,
Balbir
--