Message-ID: <1442997058.24964.20.camel@citrix.com>
Date: Wed, 23 Sep 2015 10:30:58 +0200
From: Dario Faggioli <dario.faggioli@...rix.com>
To: Juergen Gross <jgross@...e.com>,
George Dunlap <george.dunlap@...rix.com>
CC: "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"Andrew Cooper" <Andrew.Cooper3@...rix.com>,
"Luis R. Rodriguez" <mcgrof@...not-panic.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
"David Vrabel" <david.vrabel@...rix.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Stefano Stabellini <stefano.stabellini@...citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC] xen: if on Xen, "flatten" the
scheduling domain hierarchy
On Wed, 2015-09-23 at 06:36 +0200, Juergen Gross wrote:
> On 09/22/2015 06:22 PM, George Dunlap wrote:
> > Juergen / Dario, could one of you summarize your two approaches,
> > and the (alleged) advantages and disadvantages of each one?
>
> Okay, I'll have a try:
>
Thanks for this! ;-)
> The problem we want to solve:
> -----------------------------
>
> The Linux kernel gathers cpu topology data during boot via the CPUID
> instruction on each processor coming online. This data is primarily
> used in the scheduler to decide to which cpu a thread should be
> migrated when that seems necessary. There are other users of the
> topology information in the kernel (e.g. some drivers try to do
> optimizations like core-specific queues/lists).
>
> When started in a virtualized environment the obtained data is next
> to useless or even wrong, as it reflects only the state of the system
> at boot time. The hypervisor's scheduling of the (v)cpus keeps
> changing the topology beneath the feet of the Linux kernel, without
> this being reflected in the gathered topology information. So any
> decisions taken based on that data will be uninformed and possibly
> just wrong.
>
Exactly.
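For context, what the scheduler actually consumes is a table of
topology levels, each with a cpumask function that is ultimately
filled in from those CPUID-derived sibling/core maps. From memory (so
take it as a sketch rather than the exact source), the x86 default
table is roughly the below, which is why a stale SMT/MC layout
directly shapes the scheduling domains the load balancer walks:

/* Sketch of the default table in kernel/sched/core.c (from memory):
 * each level's mask function returns a CPUID-derived cpumask. */
static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};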
> The minimal solution is to change the topology data in the kernel in
> such a way that all cpus are regarded as equal in their relation to
> each other (e.g. when migrating a thread to another cpu, no
> particular cpu is preferred as a target).
>
> The topology information of the CPUID instruction is, however, also
> accessible from user mode and might be used for licensing purposes by
> user programs (e.g. by limiting the software to run on a specific
> number of cores or sockets). So just mangling the data returned by
> CPUID in the hypervisor does not seem to be a general solution,
> although we might want to do it at least optionally in the future.
>
Yep. It turned out that, although it is what started all this, CPUID
handling is a somewhat related but mostly independent problem. :-)
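Just to illustrate why that is its own can of worms: the topology
leaves can be read by any unprivileged program, with no kernel
involvement at all, along the lines of the user-space sketch below
(x86/gcc, assuming leaf 0xb is available). Whatever the hypervisor
mangles there is also what licensing checks would see.

/* Minimal user-space example: read CPUID leaf 0xb (extended topology)
 * directly.  Build with gcc on x86; assumes leaf 0xb is supported. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx, level;

	for (level = 0; level < 2; level++) {
		__cpuid_count(0x0b, level, eax, ebx, ecx, edx);
		printf("level %u: APIC-ID shift %u, logical CPUs %u\n",
		       level, eax & 0x1f, ebx & 0xffff);
	}
	return 0;
}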
> In the future we might want to either support dynamic topology
> updates or be able to tell the kernel to use some of the topology
> data, e.g. when pinning vcpus.
>
Indeed. At least for the latter. Dynamic looks really difficult to me,
but indeed it would be ideal. Let's see.
> Solution 1 (Dario):
> -------------------
>
> Don't use the CPUID-derived topology information in the Linux
> scheduler, but let it use a simple "flat" topology by setting our own
> scheduler domain data under Xen.
>
> Advantages:
> + very clean solution regarding the scheduler interface
>
Yes, this is, I think, one of the main advantages of the patch. The
scheduler offers an interface for architectures to define their
topology requirements, and the patch simply uses it to specify ours:
the right tool for the job. :-D
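To make that a bit more concrete, the core of the idea is just handing
the scheduler a one-level table via set_sched_topology() when we find
ourselves running on Xen. A stripped-down sketch (not the actual
patch, and the xen_* names are made up for illustration):

#include <linux/init.h>
#include <linux/sched.h>
#include <linux/topology.h>

/* One single "flat" level: every CPU ends up in the same scheduling
 * domain, so no CPU is preferred as a migration target. */
static struct sched_domain_topology_level xen_flat_topology[] = {
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};

static void __init xen_setup_flat_sched_topology(void)
{
	/* Replace the default SMT/MC/DIE table built from CPUID data. */
	set_sched_topology(xen_flat_topology);
}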
> + scheduler decisions are based on a minimal data set
> + small patch
>
> Disadvantages:
> - covers the scheduler only, drivers still use the "wrong" data
>
This is a good point. Covering only the scheduler was the patch's
purpose, TBH, but it's certainly true that, if we need something
similar elsewhere, we need to do more.
> - a little bit hacky regarding some NUMA architectures (needs either
>   a hook in the code dealing with that architecture or multiple
>   scheduler domain data overwrites)
>
As I said in my other email, I'll double check (yes, I also think this
is about AMD boxes with intra-socket NUMA nodes).
> - future enhancements will make the solution less clean (they would
>   need either duplicating scheduler domain data or new hooks in the
>   scheduler domain interface)
>
This one, I'm not sure I understand.
> Solution 2 (Juergen):
> ---------------------
>
> When booted as a Xen guest, modify the topology data built during
> boot, resulting in the same simple "flat" topology as in Dario's
> solution.
>
> Advantages:
> + the simple topology is seen by all consumers of topology data, as
>   the data itself is modified accordingly
>
Yep, that's a good point.
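Just to check I understand what "the data itself is modified" means in
practice, I imagine something along the lines of the sketch below
(purely illustrative, not Juergen's actual patch, and I'm citing the
x86 fields and helpers from memory):

#include <linux/cpumask.h>
#include <linux/smp.h>
#include <asm/processor.h>
#include <asm/topology.h>

/* Illustrative sketch: make every vCPU look like a singleton core in
 * its own package, so all consumers of the topology data (scheduler,
 * drivers, sysfs) see the same flat picture. */
static void xen_flatten_cpu_topology(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct cpuinfo_x86 *c = &cpu_data(cpu);

		c->phys_proc_id = cpu;	/* one "package" per vCPU */
		c->cpu_core_id = 0;

		cpumask_clear(topology_sibling_cpumask(cpu));
		cpumask_set_cpu(cpu, topology_sibling_cpumask(cpu));
		cpumask_clear(topology_core_cpumask(cpu));
		cpumask_set_cpu(cpu, topology_core_cpumask(cpu));
	}
}

If that's roughly right, then indeed everything that looks at the
topology sees the flattened layout, not just the scheduler.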
> + small patch
> + future enhancements rather easy by selecting which data to modify
>
As with the corresponding '-' above, I'm not really sure what this
means.
>
> Disadvantages:
> - interface to scheduler not as clean as in Dario's approach
> - scheduler decisions are based on multiple layers of topology data
>   where one layer would be enough to describe the topology
>
This is not too big of a deal, IMO. Not at runtime, at least, as far
as my investigation has gone so far. Initialization (of scheduling
domains) is a bit clumsy in this case, as scheduling domains are
created and then destroyed/collapsed, but after they are set up, the
net effect is that there's only one scheduling domain with Juergen's
patch too, exactly as with mine.
> Dario, are you okay with this summary?
>
To most of it, yes, and thanks again for it.
Allow me to add a few points, off the top of my head:
* we need to check whether the two approaches have the same
  performance. In principle, they really should, and early results
  seem to confirm that, but I'd like to run the full set of benchmarks
  (and I'll do that ASAP);
* I think we want to run even more benchmarks, and run them under
  different (over)load conditions, to better assess the effect of the
  change;
* both our patches provide a solution for Xen (for Xen PV guests, at
  least for now, to be more precise). It is very likely that, e.g.,
  KVM is in a similar situation, hence it may be worth looking for a
  more general solution, especially if that buys us something (e.g.,
  HVM support made easy?).
Thanks and Regards,
Dario
PS. BTW, Juergen, you're not on IRC, on #xendevel, are you?
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)