Date:	Wed, 29 Jul 2015 16:32:08 -0300
From:	Marcelo Tosatti <mtosatti@...hat.com>
To:	"Auld, Will" <will.auld@...el.com>
Cc:	"Shivappa, Vikas" <vikas.shivappa@...el.com>,
	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...nel.org" <mingo@...nel.org>,
	"tj@...nel.org" <tj@...nel.org>,
	"peterz@...radead.org" <peterz@...radead.org>,
	"Fleming, Matt" <matt.fleming@...el.com>,
	"Williamson, Glenn P" <glenn.p.williamson@...el.com>,
	"Juvva, Kanaka D" <kanaka.d.juvva@...el.com>
Subject: Re: [PATCH 3/9] x86/intel_rdt: Cache Allocation documentation and
 cgroup usage guide

On Wed, Jul 29, 2015 at 01:28:38AM +0000, Auld, Will wrote:
> > > Whenever cgroupE has zero tasks, remove exclusivity (by allowing other
> > > cgroups to use the exclusive ways of it).
> > 
> > Same comment as above - Cgroup masks can always overlap and other cgroups
> > can allocate the same cache, and hence won't have exclusive cache allocation.
> 
> [Auld, Will] You can define all the CBMs to provide one CLOS with an exclusive area.
> 
> > 
> > So naturally the cgroup with tasks would get to use the cache if it has the same
> > mask (say representing 50% of cache in your example) as others.
>  
> [Auld, Will] Automatic adjustment of the CBM makes me nervous. There are times
> when we want to limit the cache for a process independently of whether there is
> lots of unused cache.

How about this:

desiredclos (closid  p1  p2  p3 p4)
	     1       1   0   0  0
	     2       0   0   0  1
	     3       0   1   1  0

p means part (cache partition).
closid 1 is an exclusive cgroup.
closid 2 is a "cache hog" class.
closid 3 is the "default closid".

Desiredclos is what the user has specified.

Transition 1: desiredclos --> effectiveclos
Clear all bits of unused closids
(this must be updated whenever a
closid's cgroup goes from empty to nonempty
and vice versa).

effectiveclos (closid  p1  p2  p3 p4)
	       1       0   0   0  0
	       2       0   0   0  1
	       3       0   1   1  0

Transition 2: effectiveclos --> expandedclos
expandedclos (closid  p1  p2  p3 p4)
	       1       0   0   0  0
	       2       0   0   0  1
	       3       1   1   1  0

Then you have a different inplacecos for each
CPU (see pseudo-code below):

On the following events:

- task migration to a new pCPU
- task creation

	id = smp_processor_id();
	for (part = desiredclos.p1; ...; part++)
		/*
		 * If my closid's bit is set and every other
		 * closid's bit is clear for this part,
		 * synchronize desiredclos --> inplacecos.
		 */
		if (part[myclosid] == 1 &&
		    part[any_other_closid] == 0)
			wrmsr(part, desiredclos);
