Message-ID: <20120105091059.GA3249@elte.hu>
Date: Thu, 5 Jan 2012 10:10:59 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Avi Kivity <avi@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS

* Avi Kivity <avi@...hat.com> wrote:

> > So why wait for non-running vcpus at all? That is, why not
> > paravirt the TLB flush such that the invalidate marks the
> > non-running VCPU's state so that on resume it will first
> > flush its TLBs. That way you don't have to wake it up and
> > wait for it to invalidate its TLBs.
>
> That's what Xen does, but it's tricky. For example
> get_user_pages_fast() depends on the IPI to hold off page
> freeing, if we paravirt it we have to take that into
> consideration.
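
(Side note on that dependency: gup_fast() disables IRQs precisely
so that this CPU cannot service the TLB-flush IPI while the
lockless page-table walk runs - a minimal sketch of the pattern,
simplified from the x86 implementation:)

static int gup_fast_sketch(unsigned long start, int nr_pages,
			   struct page **pages)
{
	unsigned long flags;
	int nr = 0;

	/*
	 * With IRQs off this CPU cannot take the TLB-flush IPI,
	 * and a remote CPU freeing page tables must wait for that
	 * IPI to complete - so the tables we walk here cannot be
	 * freed under us.  A paravirt flush that skips the IPI for
	 * blocked vCPUs breaks this implicit synchronization.
	 */
	local_irq_save(flags);
	/* ... lockless pgd/pud/pmd/pte walk goes here ... */
	local_irq_restore(flags);

	return nr;
}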
>
> > Or am I like totally missing the point (I am after all
> > reading the thread backwards and I haven't yet fully paged
> > the kernel stuff back into my brain).
>
> You aren't, and I bet those kernel pages are unswappable
> anyway.
>
> > I guess tagging remote VCPU state like that might be
> > somewhat tricky.. but it seems worth considering, the whole
> > wake and wait for flush thing seems daft.
>
> It's nasty, but then so is paravirt. It's hard to get right,
> and it has a tendency to cause performance regressions as
> hardware improves.

Here it would massively improve performance - without regressing
the scheduler code.
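
E.g. a (purely illustrative - the struct, field and helper names
below are made up) deferred-flush scheme on the KVM side could
look like this:

struct vcpu_flush_state {			/* hypothetical */
	atomic_t	need_tlb_flush;
};

/* Remote flusher: tag a blocked vCPU instead of kicking it awake: */
static void flush_or_defer(struct vcpu_flush_state *v)
{
	if (vcpu_is_running(v))			/* hypothetical helper */
		send_flush_ipi(v);		/* i.e. today's behavior */
	else
		atomic_set(&v->need_tlb_flush, 1);
}

/* Resume path: do the deferred flush before guest entry: */
static void vcpu_prepare_guest_entry(struct vcpu_flush_state *v)
{
	if (atomic_xchg(&v->need_tlb_flush, 0))
		flush_guest_tlb();		/* hypothetical helper */
}

The blocked vCPU is never woken just to flush, and the scheduler
never enters the picture.
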
Or you accept that the hardware does not support intelligent TLB
flushing yet, hope for future hw to fix it, and live with the
performance impact for now.

Thanks,

	Ingo