Date:	Fri, 27 Feb 2015 13:38:41 -0800 (PST)
From:	Vikas Shivappa <vikas.shivappa@...el.com>
To:	Tejun Heo <tj@...nel.org>
cc:	Vikas Shivappa <vikas.shivappa@...el.com>,
	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	linux-kernel@...r.kernel.org, matt.fleming@...el.com,
	hpa@...or.com, tglx@...utronix.de, mingo@...nel.org,
	peterz@...radead.org, will.auld@...el.com, dave.hansen@...el.com,
	andi.kleen@...el.com, tony.luck@...el.com, kanaka.d.juvva@...el.com
Subject: Re: [PATCH 3/7] x86/intel_rdt: Support cache bit mask for Intel
 CAT



On Fri, 27 Feb 2015, Tejun Heo wrote:

> Hello, Vikas.
>
> On Fri, Feb 27, 2015 at 11:34:16AM -0800, Vikas Shivappa wrote:
>> This cgroup subsystem would basically let the user partition one of the
>> platform shared resources, the LLC cache. This could be extended in the future
>
> I suppose LLC means last level cache?  It'd be great if you can spell
> out the full term when the abbreviation is first referenced in the
> comments or documentation.
>

Yes, that's last level cache. Will update the documentation/comments accordingly.

>> to partition more shared resources when there is hardware support; that way
>> we may eventually have more files in the cgroup. RDT is a generic term for
>> platform resource sharing.
>
>> For more information you can refer to section 17.15 of Intel SDM.
>> We did go through quite a bit of discussion on lkml regarding adding the
>> cgroup interface for CAT and the patches were posted only after that.
>> This cgroup would not interact with other cgroups, in the sense that it would
>> not modify or add any elements to existing cgroups - there was such a
>> proposal, but it was removed as we did not get agreement on lkml.
>>
>> the original lkml thread is here from 10/2014 for your reference -
>> https://lkml.org/lkml/2014/10/16/568
>
> Yeap, I followed that thread and this being a separate controller
> definitely makes a lot more sense.
>
>>> I take it that the feature implemented is too coarse to allow for weight
>>> based distribution?
>>>
>> Could you please clarify more on this? However, there is a limitation from
>> hardware that there has to be a minimum of 2 bits in the cbm, if that's what
>> you referred to. Otherwise the bits in the cbm directly map to the number of
>> cache ways and hence the cache capacity.
>
> Right, so the granularity is fairly coarse and specifying things like
> "distribute cache in 4:2:1 (or even in absolute bytes) to these three
> cgroups" wouldn't work at all.

Specifying the allocation in bytes would not be possible because the minimum
granularity has to be at least one cache way: the entire memory can be indexed
into a single cache way, so capacity can only be carved up way by way.
Exposing the bit mask granularity means users do not have to worry about how
many bytes a cache way is; they just specify the bitmask. If we wanted a
cgroup interface where users specify a size in bytes, we would also have to
expose the minimum granularity in bytes. Also note that the bit masks can
overlap, so users have a way to specify overlapping regions of the cache,
which can be very useful in scenarios where multiple cgroups want to share
capacity.
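To make the granularity point concrete, here is a rough illustration (not the
interface or values from this patch set; the 20 MB / 20-way cache figures and
the names below are made up): the capacity a mask grants is just the number of
set bits times the per-way size, and two overlapping masks share the ways
selected by their common bits.

#include <stdio.h>

/* Hypothetical example values: a 20 MB, 20-way last level cache. */
#define LLC_BYTES	(20u << 20)
#define CBM_LEN		20u
#define WAY_BYTES	(LLC_BYTES / CBM_LEN)	/* 1 MB per cache way */

/* Bytes of LLC reachable through a given cache bit mask (cbm). */
static unsigned long cbm_bytes(unsigned int cbm)
{
	return (unsigned long)__builtin_popcount(cbm) * WAY_BYTES;
}

int main(void)
{
	unsigned int group_a = 0x000ff;	/* ways 0-7 */
	unsigned int group_b = 0x00ff0;	/* ways 4-11, overlapping ways 4-7 */

	printf("group A: %lu bytes\n", cbm_bytes(group_a));
	printf("group B: %lu bytes\n", cbm_bytes(group_b));
	printf("shared : %lu bytes\n", cbm_bytes(group_a & group_b));
	return 0;
}

A single bit is the smallest unit, so allocations below the per-way size
cannot be expressed.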

The minimum granularity is 2 bits on the pre-production SKUs, which does limit
the scenarios you describe. We will issue a patch update if that is relaxed in
later SKUs. But note that the SDM also recommends using at least 2 bits for
performance reasons, because an application confined to a single cache way
would see a lot more conflicts.
Say the max cbm is 20 bits; then the granularity is 10% of the total cache.
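
For illustration, a minimal sketch of the constraints mentioned above,
assuming a minimum cbm length of 2 bits and the usual requirement that the set
bits in a cbm be contiguous (the function is made up, not the validation
helper from the posted patches):

#include <stdbool.h>

#define MIN_CBM_BITS	2	/* minimum on current pre-production SKUs */

/*
 * Return true if 'cbm' has at least MIN_CBM_BITS set bits, the set bits
 * are contiguous, and no bit at or above 'cbm_len' is set.
 */
static bool cbm_is_valid(unsigned long cbm, unsigned int cbm_len)
{
	unsigned long max_mask = (1ul << cbm_len) - 1;

	if (cbm == 0 || cbm > max_mask)
		return false;
	if (__builtin_popcountl(cbm) < MIN_CBM_BITS)
		return false;
	/* Drop trailing zeros; a contiguous run then looks like 2^n - 1. */
	cbm >>= __builtin_ctzl(cbm);
	return (cbm & (cbm + 1)) == 0;
}

With cbm_len = 20, for example, 0x3 and 0xff000 pass, while 0x1 (only one bit)
and 0x5 (non-contiguous) do not.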

>
> Thanks.
>
> -- 
> tejun
>