Message-ID: <d15d97d1-286c-4857-8688-4d8369271c2c@intel.com>
Date: Mon, 29 Sep 2025 09:09:35 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Dave Martin <Dave.Martin@....com>, "Luck, Tony" <tony.luck@...el.com>
CC: <linux-kernel@...r.kernel.org>, James Morse <james.morse@....com>, "Thomas
 Gleixner" <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, "Borislav
 Petkov" <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>, "H. Peter
 Anvin" <hpa@...or.com>, Jonathan Corbet <corbet@....net>, <x86@...nel.org>,
	<linux-doc@...r.kernel.org>
Subject: Re: [PATCH] fs/resctrl,x86/resctrl: Factor mba rounding to be
 per-arch

Hi Dave,

On 9/29/25 6:56 AM, Dave Martin wrote:
> On Thu, Sep 25, 2025 at 03:58:35PM -0700, Luck, Tony wrote:
>> On Mon, Sep 22, 2025 at 04:04:40PM +0100, Dave Martin wrote:

...

>> The region aware h/w supports separate bandwidth controls for each
>> region. We could hope (or perhaps update the spec to define) that
>> region0 is always node-local DDR memory and keep the "MB" tag for
>> that.
> 
> Do you have concerns about existing software choking on the #-prefixed
> lines?

I am trying to understand the purpose of the #-prefix. I see two possible
motivations for it, with the primary one being that multiple schemata apply
to the same resource.

1) Commented schemata are "inactive"
This is unclear to me. In the MB example the commented lines show the
finer-grained controls. Since the original MB resource is an approximation
and the hardware must already be configured to support it, would the #-prefixed
lines not show the actual "active" configuration?

2) Commented schema are "conflicting"
The original proposal mentioned "write them back instead of (or in addition to)
the conflicting entries". I do not know how resctrl will be able to
handle a user requesting a change to both "MB" and "MB_HW". This seems like
something that should fail?
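For concreteness, a hedged sketch of how such a schemata file might read
under the proposal (the resource names and values here are hypothetical,
extrapolated from the "MB"/"MB_HW" examples in this thread):

```
MB:0=100;1=80
#MB_HW:0=0x7f;1=0x3f
```

If a single write then contains both an "MB" line and an "MB_HW" line, it is
not obvious which one the kernel should honor.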

At a high level it is not clear to me why the # prefix is needed. As I
understand it, schemata names will always be unique and the new features will
be made backward compatible with existing schemata names. That is, existing
MB, L3, etc. will also have the new info files that describe their
values/ranges.

I expect that user space will ignore schemata it is not familiar with, so
the # prefix seems unnecessary?
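To illustrate the "ignore unknown schemata" expectation, here is a minimal
sketch of how a userspace tool might parse the schemata file while skipping
resources it does not recognize. The resource names, example lines, and the
KNOWN_RESOURCES set are hypothetical, not taken from any specific kernel
version or tool:

```python
# Hypothetical userspace parser for the resctrl "schemata" file format.
# Lines look like "RESOURCE:domain=value;domain=value".

KNOWN_RESOURCES = {"MB", "L3", "L2"}  # assumed set, for illustration only

def parse_schemata(text):
    """Return {resource: {domain: value}} for known resources only."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and any #-prefixed "shadow" lines
        resource, _, domains = line.partition(":")
        if resource not in KNOWN_RESOURCES:
            continue  # unknown schema name: ignore rather than fail
        entries = {}
        for item in domains.split(";"):
            dom, _, val = item.partition("=")
            entries[dom] = val
        config[resource] = entries
    return config

example = """\
MB:0=100;1=80
#MB_HW:0=0x7f;1=0x3f
L3:0=fff;1=fff
NEWRES:0=1
"""
print(parse_schemata(example))
# → {'MB': {'0': '100', '1': '80'}, 'L3': {'0': 'fff', '1': 'fff'}}
```

A parser like this tolerates both a new uniquely named resource ("NEWRES")
and a #-prefixed line equally well, which is why the # marker does not seem
to buy compatibility that unique names would not already provide.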

I believe the motivation is to express a relationship between different
schemata (you mentioned "shadow" initially). I think this relationship can
be expressed clearly with a namespace prefix (like "MB_" in the examples).
This may help even more when there are multiple schemata in this format,
where a #-prefix does not make it obvious which resource is shadowed.
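As an illustrative sketch of the namespace-prefix alternative (names and
values hypothetical, extending the "MB_HW" examples from this thread), each
finer-grained schema would get its own uniquely named line, with no # marker:

```
MB:0=100;1=80
MB_HW:0=0x7f;1=0x3f
MB_HW_1:0=0x3f;1=0x1f
```

The shared "MB_" prefix alone conveys which resource the finer-grained
schemata shadow, even when several of them exist.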
 
>> Then use some other tag naming for other regions. Remote DDR,
>> local CXL, remote CXL are the ones we think are next in the h/w
>> memory sequence. But the "region" concept would allow for other
>> options as other memory technologies come into use.
> 
> Would it be reasonable just to have a set of these schema instances, per
> region, so:
> 
> MB_HW: ... // implicitly region 0
> MB_HW_1: ...
> MB_HW_2: ...
> 
> etc.
> 
> Or, did you have something else in mind?
> 
> My thinking is that we avoid adding complexity in the schemata file if
> we treat mapping these schema instances onto the hardware topology as
> an orthogonal problem.  So long as we have unique names in the schemata
> file, we can describe elsewhere what they relate to in the hardware.

Agreed ... and "elsewhere" is expected to be unique to each resource.

Reinette
