Message-ID: <20160722214322.GA938@intel.com>
Date:	Fri, 22 Jul 2016 14:43:23 -0700
From:	"Luck, Tony" <tony.luck@...el.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
Cc:	Fenghua Yu <fenghua.yu@...el.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <h.peter.anvin@...el.com>,
	Tejun Heo <tj@...nel.org>, Borislav Petkov <bp@...e.de>,
	Stephane Eranian <eranian@...gle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	David Carrillo-Cisneros <davidcc@...gle.com>,
	Ravi V Shankar <ravi.v.shankar@...el.com>,
	Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
	Sai Prakhya <sai.praneeth.prakhya@...el.com>,
	linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>
Subject: Re: [PATCH 04/32] x86/intel_rdt: Add L3 cache capacity bitmask
 management

On Fri, Jul 22, 2016 at 04:12:04AM -0300, Marcelo Tosatti wrote:
> How does this patchset handle the following condition:
> 
> 6) Create reservations in such a way that the sum is larger than
> total amount of cache, and CPU pinning (example from Karen Noel):
> 
> VM-1 on socket-1 with 80% of reservation.
> VM-2 on socket-2 with 80% of reservation.
> VM-1 pinned to socket-1.
> VM-2 pinned to socket-2.

That's legal, but perhaps we need a description of
overlapping cache reservations.

Hardware tells you how finely you can divide the cache (and this
information is shown in /sys/fs/resctrl/info/l3/max_cbm_len to save
you from digging in CPUID leaves).  E.g. on Broadwell the value is
20, so you can control cache allocations in 5% slices.
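A quick, untested sketch (not part of the patchset) of turning that number
into a "percent of L3 per bit" figure; the sysfs path is just the one quoted
above and may move around as the interface evolves:

	#include <stdio.h>

	int main(void)
	{
		/* Path as described in this patchset discussion. */
		FILE *f = fopen("/sys/fs/resctrl/info/l3/max_cbm_len", "r");
		unsigned int cbm_len;

		if (!f) {
			perror("max_cbm_len");
			return 1;
		}
		if (fscanf(f, "%u", &cbm_len) != 1 || cbm_len == 0) {
			fclose(f);
			fprintf(stderr, "unexpected max_cbm_len contents\n");
			return 1;
		}
		fclose(f);

		/* On Broadwell this prints 20 bits => 5.0%% per bit. */
		printf("%u bits in the mask => each bit covers %.1f%% of L3\n",
		       cbm_len, 100.0 / cbm_len);
		return 0;
	}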

A bitmask defines which slices you can use (and h/w has the restriction
that you must have contiguous '1' bits in any mask).  So you can pick
your 80% using 0x0ffff, 0x1fffe, 0x3fffc, 0x7fff8 or 0xffff0.
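For anyone sanity-checking masks by hand, here is a rough (untested) sketch
of the contiguity rule and the per-mask coverage, assuming the 20-bit
Broadwell mask; it uses the GCC/Clang popcount builtin:

	#include <stdbool.h>
	#include <stdio.h>

	#define CBM_LEN 20	/* max_cbm_len on Broadwell, per the above */

	static bool cbm_is_contiguous(unsigned int m)
	{
		if (m == 0)
			return false;
		/*
		 * Adding the lowest set bit to a contiguous run of '1's
		 * clears the whole run; any leftover '1' bits mean the
		 * mask had a hole in it.
		 */
		return ((m + (m & -m)) & m) == 0;
	}

	int main(void)
	{
		unsigned int masks[] = { 0x0ffff, 0x1fffe, 0x3fffc,
					 0x7fff8, 0xffff0 };

		for (unsigned int i = 0; i < sizeof(masks) / sizeof(masks[0]); i++) {
			unsigned int m = masks[i];

			/* Each of the 80% masks prints "ok, 80% of L3". */
			printf("0x%05x: %s, %d%% of L3\n", m,
			       cbm_is_contiguous(m) ? "ok" : "not contiguous",
			       __builtin_popcount(m) * 100 / CBM_LEN);
		}
		return 0;
	}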

There is no requirement that masks be exclusive of each other. So
you might pick the two extremes: 0x0ffff and 0xffff0 for your two
VMs in this example. Each would be allowed to allocate up to 80%,
but with a big overlap in the middle. Each has 20% exclusive, but
there is a 60% range in the middle that they would compete for.
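The arithmetic, spelled out as a small (untested) sketch using the same two
masks and the 20-bit Broadwell mask width:

	#include <stdio.h>

	#define CBM_LEN 20

	static int pct(unsigned int m)
	{
		return __builtin_popcount(m) * 100 / CBM_LEN;
	}

	int main(void)
	{
		unsigned int vm1 = 0x0ffff, vm2 = 0xffff0;

		printf("VM-1 exclusive: %d%%\n", pct(vm1 & ~vm2));	/* 20% */
		printf("VM-2 exclusive: %d%%\n", pct(vm2 & ~vm1));	/* 20% */
		printf("contended:      %d%%\n", pct(vm1 & vm2));	/* 60% */
		return 0;
	}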

Is this specific case useful? Possibly not.  I think the more common
overlap cases might be between processes that you know have shared
code/data. There is also the case where some rdtgroup has access to allocate
in the entire cache (mask 0xfffff on Broadwell) while other rdtgroups
are limited to fewer bits in the mask.

-Tony
