Message-ID: <20150730202253.GA12921@amt.cnet>
Date: Thu, 30 Jul 2015 17:22:53 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Vikas Shivappa <vikas.shivappa@...el.com>
Cc: "Auld, Will" <will.auld@...el.com>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...nel.org" <mingo@...nel.org>,
"tj@...nel.org" <tj@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"Fleming, Matt" <matt.fleming@...el.com>,
"Williamson, Glenn P" <glenn.p.williamson@...el.com>,
"Juvva, Kanaka D" <kanaka.d.juvva@...el.com>
Subject: Re: [PATCH 3/9] x86/intel_rdt: Cache Allocation documentation and
cgroup usage guide
On Thu, Jul 30, 2015 at 10:47:23AM -0700, Vikas Shivappa wrote:
>
>
> Marcelo,
>
>
> On Wed, 29 Jul 2015, Marcelo Tosatti wrote:
> >
> >How about this:
> >
> >desiredclos (closid p1 p2 p3 p4)
> > 1 1 0 0 0
> > 2 0 0 0 1
> > 3 0 1 1 0
>
> #1 Currently in the rdt cgroup, the root cgroup always has all the
> bits set and can't be changed (because of the cgroup hierarchy this
> is the default: all the children need to have a subset of the root's
> bitmask). So if the user creates a cgroup and does not put any task
> in it, the tasks in the root cgroup could still be using that part
> of the cache. That's the reason I say we can't have really
> 'exclusive' masks.
>
> Or in other words - there is always a desired clos (0) which has all
> parts set and which acts like a default pool.
>
> Also, the parts can overlap. Please apply this to all the comments
> below, as it changes the way they work.
>
> >
> >p means part.
>
> I am assuming p = (a contiguous cache capacity bit mask)
>
> >closid 1 is an exclusive cgroup.
> >closid 2 is a "cache hog" class.
> >closid 3 is "default closid".
> >
> >Desiredclos is what user has specified.
> >
> >Transition 1: desiredclos --> effectiveclos
> >Clean all bits of unused closid's
> >(that must be updated whenever a
> >closid1 cgroup goes from empty->nonempty
> >and vice-versa).
> >
> >effectiveclos (closid p1 p2 p3 p4)
> > 1 0 0 0 0
> > 2 0 0 0 1
> > 3 0 1 1 0
>
> >
> >Transition 2: effectiveclos --> expandedclos
> >expandedclos (closid p1 p2 p3 p4)
> > 1 0 0 0 0
> > 2 0 0 0 1
> > 3 1 1 1 0
> >Then you have different inplacecos for each
> >CPU (see pseudo-code below):
> >
> >On the following events.
> >
> >- task migration to new pCPU:
> >- task creation:
> >
> > id = smp_processor_id();
> > for (part = desiredclos.p1; ...; part++)
> > /* if my cosid is set and any other
> > cosid is clear, for the part,
> > synchronize desiredclos --> inplacecos */
> > if (part[mycosid] == 1 &&
> > part[any_othercosid] == 0)
> > wrmsr(part, desiredclos);
> >
>
> Currently the root cgroup would have all the bits set which will act
> like a default cgroup where all the otherwise unused parts (assuming
> they are a set of contiguous cache capacity bits) will be used.
Right, but we don't want to place tasks in there in case one cgroup
wants exclusive cache access.
So whenever you want an exclusive cgroup you'd do:
create cgroup-exclusive; reserve desired part of the cache
for it.
create cgroup-default; reserve all cache minus that of cgroup-exclusive
for it.
place tasks that belong to cgroup-exclusive into it.
place all other tasks (including init) into cgroup-default.
Is that right?