Date:	Thu, 30 Jul 2015 10:47:23 -0700 (PDT)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
cc:	"Auld, Will" <will.auld@...el.com>,
	"Shivappa, Vikas" <vikas.shivappa@...el.com>,
	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...nel.org" <mingo@...nel.org>,
	"tj@...nel.org" <tj@...nel.org>,
	"peterz@...radead.org" <peterz@...radead.org>,
	"Fleming, Matt" <matt.fleming@...el.com>,
	"Williamson, Glenn P" <glenn.p.williamson@...el.com>,
	"Juvva, Kanaka D" <kanaka.d.juvva@...el.com>
Subject: Re: [PATCH 3/9] x86/intel_rdt: Cache Allocation documentation and
 cgroup usage guide



Marcelo,


On Wed, 29 Jul 2015, Marcelo Tosatti wrote:
>
> How about this:
>
> desiredclos (closid  p1  p2  p3 p4)
> 	     1       1   0   0  0
> 	     2	     0	 0   0  1
> 	     3	     0   1   1  0

#1 Currently in the rdt cgroup, the root cgroup always has all the bits set and 
can't be changed (the cgroup hierarchy enforces this by default, since all the 
children need to have a subset of the root's bitmask). So if the user creates a 
cgroup and does not put any task in it, the tasks in the root cgroup could still 
be using that part of the cache. That's the reason I say we can't really have 
'exclusive' masks (a rough sketch of the subset rule is below).

Or in other words - there is always a desired clos (0) which has all parts set 
and which acts like a default pool.

Also, the parts can overlap. Please apply this to all the comments below, as it 
changes the way they work.

>
> p means part.

I am assuming p = (a contiguous cache capacity bit mask)
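(Just to spell out that assumption - an illustrative check only, not from the 
patch: a mask is one contiguous run of set bits if adding its lowest set bit 
carries past the whole run.)

/*
 * True if cbm is a single contiguous run of set bits,
 * e.g. 0x3c but not 0x35.
 */
static bool cbm_is_contiguous(unsigned long cbm)
{
	unsigned long first_bit, zero_bit;

	if (cbm == 0)
		return false;
	first_bit = cbm & -cbm;		/* lowest set bit */
	zero_bit = cbm + first_bit;	/* carry runs past the 1s */
	return (zero_bit & cbm) == 0;	/* nothing set above the run */
}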

> closid 1 is a exclusive cgroup.
> closid 2 is a "cache hog" class.
> closid 3 is "default closid".
>
> Desiredclos is what user has specified.
>
> Transition 1: desiredclos --> effectiveclos
> Clean all bits of unused closid's
> (that must be updated whenever a
> closid1 cgroup goes from empty->nonempty
> and vice-versa).
>
> effectiveclos (closid  p1  p2  p3 p4)
> 	       1       0   0   0  0
> 	       2       0   0   0  1
> 	       3       0   1   1  0

>
> Transition 2: effectiveclos --> expandedclos
> expandedclos (closid  p1  p2  p3 p4)
> 	       1       0   0   0  0
> 	       2       0   0   0  1
> 	       3       1   1   1  0
> Then you have different inplacecos for each
> CPU (see pseudo-code below):
>
> On the following events.
>
> - task migration to new pCPU:
> - task creation:
>
> 	id = smp_processor_id();
> 	for (part = desiredclos.p1; ...; part++)
> 		/* if my cosid is set and any other
> 	   	   cosid is clear, for the part,
> 		   synchronize desiredclos --> inplacecos */
> 		if (part[mycosid] == 1 &&
> 		    part[any_othercosid] == 0)
> 			wrmsr(part, desiredclos);
>

Currently the root cgroup would have all the bits set, which will act like a 
default cgroup where all the otherwise unused parts (assuming they are a 
set of contiguous cache capacity bits) will be used.

Otherwise the question with the expandedclos is: who decides to expand the closx 
parts to include some of the unused parts? Could that just always be the default 
root?
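
To make sure I am reading the expandedclos step the same way, here is a rough 
sketch (names are invented, not patch code) of what I think it would do: OR 
every part that no other closid is using into the default closid, which is what 
"the default root always does it" would look like:

/*
 * Give the default closid all parts that no other closid's
 * effective mask currently claims. 'effective_cbm' is indexed
 * by closid; cbm_len is the number of capacity bits.
 */
static void expand_default_clos(unsigned long *effective_cbm, int nr_closids,
				int default_closid, int cbm_len)
{
	unsigned long used = 0, all = (1UL << cbm_len) - 1;
	int i;

	for (i = 0; i < nr_closids; i++)
		if (i != default_closid)
			used |= effective_cbm[i];

	/* the default picks up whatever nobody else is using */
	effective_cbm[default_closid] |= all & ~used;
}

With your example above this turns effectiveclos row 3 (0 1 1 0) into the 
expandedclos row 3 (1 1 1 0), since p1 became unused when closid 1 went empty.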

Thanks,
Vikas

