Message-ID: <3E5A0FA7E9CA944F9D5414FEC6C712205DF4C484@ORSMSX106.amr.corp.intel.com>
Date: Mon, 14 Dec 2015 22:58:12 +0000
From: "Yu, Fenghua" <fenghua.yu@...el.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
CC: H Peter Anvin <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
"Thomas Gleixner" <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
x86 <x86@...nel.org>,
"Vikas Shivappa" <vikas.shivappa@...ux.intel.com>,
"Luck, Tony" <tony.luck@...el.com>
Subject: RE: [PATCH V15 11/11] x86,cgroup/intel_rdt : Add a cgroup interface
to manage Intel cache allocation
> From: Marcelo Tosatti [mailto:mtosatti@...hat.com]
> Sent: Wednesday, November 18, 2015 2:15 PM
> To: Yu, Fenghua <fenghua.yu@...el.com>
> Cc: H Peter Anvin <hpa@...or.com>; Ingo Molnar <mingo@...hat.com>;
> Thomas Gleixner <tglx@...utronix.de>; Peter Zijlstra
> <peterz@...radead.org>; linux-kernel <linux-kernel@...r.kernel.org>; x86
> <x86@...nel.org>; Vikas Shivappa <vikas.shivappa@...ux.intel.com>
> Subject: Re: [PATCH V15 11/11] x86,cgroup/intel_rdt : Add a cgroup interface
> to manage Intel cache allocation
>
> On Thu, Oct 01, 2015 at 11:09:45PM -0700, Fenghua Yu wrote:
> > Add a new cgroup 'intel_rdt' to manage cache allocation. Each cgroup
> > directory is associated with a class of service id(closid). To map a
> > task with closid during scheduling, this patch removes the closid field
> > from task_struct and uses the already existing 'cgroups' field in
> > task_struct.
> >
> > +
> > +/*
> > * intel_rdt_sched_in() - Writes the task's CLOSid to IA32_PQR_MSR
> > *
> > * Following considerations are made so that this has minimal impact
> > * on scheduler hot path:
> > * - This will stay as no-op unless we are running on an Intel SKU
> > * which supports L3 cache allocation.
> > + * - When support is present and enabled, does not do any
> > + * IA32_PQR_MSR writes until the user starts really using the feature
> > + *   IA32_PQR_MSR writes until the user starts really using the feature,
> > + *   i.e., creates an rdt cgroup directory and assigns a cache_mask that's
> > + *   different from the root cgroup's cache_mask.
> > * - Caches the per cpu CLOSid values and does the MSR write only
> > - * when a task with a different CLOSid is scheduled in.
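[The quoted comment describes a lazy-write pattern. A minimal sketch of that idea, in Python rather than the actual kernel C, with a made-up `PQRWriteCache` class standing in for the per-cpu CLOSid cache and a counter standing in for the real wrmsr to IA32_PQR_ASSOC:]

```python
# Illustrative sketch (not the kernel code) of the "write only on change"
# pattern described in the comment above: cache the CLOSid last written on
# each CPU and skip the costly MSR write when the incoming task's CLOSid
# already matches the cached value.

class PQRWriteCache:
    def __init__(self, num_cpus: int):
        self.cached = [0] * num_cpus   # boot default: CLOSid 0 on every CPU
        self.msr_writes = 0            # counts simulated IA32_PQR_MSR writes

    def sched_in(self, cpu: int, closid: int) -> None:
        if self.cached[cpu] == closid:
            return                     # scheduler hot path stays a no-op
        self.cached[cpu] = closid
        self.msr_writes += 1           # stand-in for the real MSR write
```

[As long as consecutive tasks on a CPU share a CLOSid, no MSR traffic is generated at all, which is what keeps the impact on the scheduler hot path minimal.]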
>
> Why is this even allowed?
>
> socket CBM bits:
>
> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
> [ | | | | | | | | | | | | | | ]
>
> x x x x x x x
> x x x x
>
> x x x x x
>
> cgroupA.bits = [ 0 - 6 ] cgroupB.bits = [ 10 - 14] (level 1)
> cgroupA-A.bits = [ 0 - 4 ] (level 2)
>
> Two ways to create a cgroup with bits [ 0 - 4] set:
>
> 1) Create a cgroup C in level 1 with a different name.
> Useful to have same cgroup with two different names.
>
> 2) Create a cgroup A-B under cgroup-A with bits [0 - 4].
>
> It just creates confusion, having two or more cgroups under
> different levels of the hierarchy with the same bits set.
> (can't see any organizational benefit).
>
> Why not return -EINVAL? Ah, cgroups are hierarchical, right.
I would let that situation be handled by the user-space management tool; the kernel handles only the minimum. The management tool has more knowledge of how CLOSIDs should be created. The kernel only passes that info to the hardware.
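[A minimal sketch of the kind of user-space policy check being proposed here; the function names are hypothetical, not from any real tool. It validates a candidate cache_mask (Intel CAT requires a non-empty, contiguous run of bits within the CBM length) and refuses to create a second cgroup whose mask duplicates one already in use, which is the -EINVAL policy Marcelo raises above, enforced in user space instead of the kernel:]

```python
# Hypothetical user-space management-tool check, assuming a 15-bit CBM as in
# the diagram above. The kernel only programs the hardware; duplicate-mask
# policy lives here.

def is_valid_cbm(mask: int, cbm_len: int = 15) -> bool:
    """A CBM must be non-zero, fit in cbm_len bits, and be contiguous."""
    if mask == 0 or mask >> cbm_len:
        return False
    # Shift out trailing zeros; a contiguous run then looks like 2**n - 1.
    shifted = mask >> ((mask & -mask).bit_length() - 1)
    return (shifted & (shifted + 1)) == 0

def may_create(mask: int, existing_masks: list[int]) -> bool:
    """Reject invalid CBMs and exact duplicates anywhere in the hierarchy."""
    return is_valid_cbm(mask) and mask not in existing_masks
```

[With this in place, both of Marcelo's paths to a duplicate [0-4] mask would be refused by the tool before any cgroup directory is created.]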
Thanks.
-Fenghua
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/