Date: Wed, 15 May 2024 11:48:52 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Tony Luck <tony.luck@...el.com>
CC: "Yu, Fenghua" <fenghua.yu@...el.com>, "Wieczor-Retman, Maciej"
	<maciej.wieczor-retman@...el.com>, Peter Newman <peternewman@...gle.com>,
	James Morse <james.morse@....com>, Babu Moger <babu.moger@....com>, "Drew
 Fustini" <dfustini@...libre.com>, Dave Martin <Dave.Martin@....com>,
	"x86@...nel.org" <x86@...nel.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "patches@...ts.linux.dev"
	<patches@...ts.linux.dev>
Subject: Re: [PATCH v17 7/9] x86/resctrl: Add new monitor files for Sub-NUMA
 cluster (SNC) monitoring

Hi Tony,

On 5/15/2024 10:23 AM, Tony Luck wrote:
> On Wed, May 15, 2024 at 09:47:28AM -0700, Reinette Chatre wrote:
>> Hi Tony,
>>
>> On 5/14/2024 2:53 PM, Tony Luck wrote:
>>> On Tue, May 14, 2024 at 01:30:05PM -0700, Reinette Chatre wrote:
>>>> Hi Tony,
>>>>
>>>> On 5/14/2024 11:26 AM, Luck, Tony wrote:
>>>>>> On 5/13/2024 5:21 PM, Tony Luck wrote:
>>>>>>> On Mon, May 13, 2024 at 11:53:17AM -0700, Reinette Chatre wrote:
>>>>>>>> On 5/13/2024 10:05 AM, Tony Luck wrote:
>>>>>>>>> On Fri, May 10, 2024 at 02:24:13PM -0700, Reinette Chatre wrote:
>>>>>>>>> Thanks for the review. Detailed comments below. But overall I'm
>>>>>>>>> going to split patch 7 into a bunch of smaller changes, each with
>>>>>>>>> a better commit message.
>>>>>>>>>
>>>>>>>>>> On 5/3/2024 1:33 PM, Tony Luck wrote:
>>>>>>>>>>
>>>>>>>>>> (Could you please start the changelog with some context?)
>>>>>>>>>>
>>>>>>>>>>> Add a field to the rdt_resource structure to track whether monitoring
>>>>>>>>>>> resources are tracked by hardware at a different scope (NODE) from
>>>>>>>>>>> the legacy L3 scope.
>>>>>>>>>>
>>>>>>>>>> This seems to describe @mon_scope that was introduced in patch #3?
>>>>>>>>>
>>>>>>>>> Not really. Patch #3 made the change so that control and monitor
>>>>>>>>> functions can have different scopes. That's still needed: with SNC
>>>>>>>>> enabled, the underlying data collection for monitoring is at the node
>>>>>>>>> level, while control stays at the L3 cache scope.
>>>>>>>>>
>>>>>>>>> This new field describes the legacy scope of monitoring, so that
>>>>>>>>> resctrl can provide correctly scoped monitor files for legacy
>>>>>>>>> applications that aren't aware of SNC. So I'm using this to indicate
>>>>>>>>> both when SNC is enabled (with mon_scope != mon_display_scope) and
>>>>>>>>> when it is disabled (when they are the same).
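
As an aside, the on/off condition described above boils down to a one-line
check; a minimal sketch, with an invented helper name, nothing from the
actual series:

static inline bool resctrl_snc_monitoring(struct rdt_resource *r)
{
        /*
         * SNC is in effect when monitoring data is collected at a finer
         * scope (NODE) than the legacy L3 scope used for the files shown
         * to legacy applications.
         */
        return r->mon_scope != r->mon_display_scope;
}
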
>>>>>>>>
>>>>>>>> This seems to reinforce the idea that these new additions aim to be
>>>>>>>> generic on the surface but the only goal is to support SNC.
>>>>>>>
>>>>>>> If you have some more ideas on how to make this more generic and
>>>>>>> less SNC-specific, I'm all ears.
>>>>>>
>>>>>> It may not end up being totally generic. It should not pretend to be
>>>>>> when it is not. It makes the flows difficult to follow when there are
>>>>>> these unexpected checks/quirks in what claims to be core code.
>>>>>
>>>>> Do you want some sort of warning comments in pieces of code
>>>>> that are SNC-specific?
>>>>
>>>> I cannot think right now of where warnings would be appropriate, but if
>>>> you find instances then please do add them. To start, the quirks can at
>>>> least be documented. For example: "The only user of <feature> is SNC,
>>>> which does not require <custom>, so simplify by <describe shortcut> ..."
>>>
>>> The main spot that triggered this line of discussion was changing the
>>> sanity check that reads of monitor events are done from a CPU within
>>> the right domain. I've added a short comment on the new check:
>>>
>>> -       if (!cpumask_test_cpu(smp_processor_id(), &d->hdr.cpu_mask))
>>> +       /* Event counts can only be read from a CPU on the same L3 cache */
>>> +       if (d->display_id != get_cpu_cacheinfo_id(smp_processor_id(), r->mon_display_scope))
>>>                 return -EINVAL;
>>>
>>> But my change embeds the assumption that monitor events are L3 scoped.
>>>
>>> Should it be something like this (to keep the non-SNC case generic):
>>>
>>> 	if (r->mon_scope == r->mon_display_scope) {
>>> 		if (!cpumask_test_cpu(smp_processor_id(), &d->hdr.cpu_mask))
>>> 			return -EINVAL;
>>
>> Yes, keeping this check looks good to me ...
>>
>>> 	} else {
>>> 		/*
>>> 		 * SNC: OK to read events on any CPU sharing same L3
>>> 		 * cache instance.
>>> 		 */
>>> 		 if (d->display_id != get_cpu_cacheinfo_id(smp_processor_id(), r->mon_display_scope))
>>> 		 	return -EINVAL;
>>> 	}
>>
>> ... while I remain unsure about where "display_id" fits in.
> 
> See below.
> 
>>>
>>>>
>>>>>
>>>>>>
>>>>>>>>>>>         }
>>>>>>>>>>> +
>>>>>>>>>>> +       return 0;
>>>>>>>>>>> +}
>>>>>>>>>>> +
>>>>>>>>>>> +static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
>>>>>>>>>>> +                               struct rdt_mon_domain *d,
>>>>>>>>>>> +                               struct rdt_resource *r, struct rdtgroup *prgrp)
>>>>>>>>>>> +{
>>>>>>>>>>> +       struct kernfs_node *kn, *ckn;
>>>>>>>>>>> +       char name[32];
>>>>>>>>>>> +       bool do_sum;
>>>>>>>>>>> +       int ret;
>>>>>>>>>>> +
>>>>>>>>>>> +       do_sum = r->mon_scope != r->mon_display_scope;
>>>>>>>>>>> +       sprintf(name, "mon_%s_%02d", r->name, d->display_id);
>>>>>>>>>>> +       kn = kernfs_find_and_get_ns(parent_kn, name, NULL);
>>>>>>>>>>> +       if (!kn) {
>>>>>>>>>>> +               /* create the directory */
>>>>>>>>>>> +               kn = kernfs_create_dir(parent_kn, name, parent_kn->mode, prgrp);
>>>>>>>>>>> +               if (IS_ERR(kn))
>>>>>>>>>>> +                       return PTR_ERR(kn);
>>>>>>>>>>> +
>>>>>>>>>>> +               ret = rdtgroup_kn_set_ugid(kn);
>>>>>>>>>>> +               if (ret)
>>>>>>>>>>> +                       goto out_destroy;
>>>>>>>>>>> +               ret = mon_add_all_files(kn, d, r, prgrp, do_sum);
>>>>>>>>>>
>>>>>>>>>> This does not look right. If I understand correctly the private data
>>>>>>>>>> of these event files will have whichever mon domain came up first as
>>>>>>>>>> its domain id. That seems completely arbitrary and does not reflect
>>>>>>>>>> accurate state for this file. Since "do_sum" is essentially a "flag"
>>>>>>>>>> on how this file can be treated, can its "dom_id" not rather be
>>>>>>>>>> the "monitor scope domain id"? Could that not help to eliminate the
>>>>>>>>>> per-domain "display_id"?
>>>>>>>>>
>>>>>>>>> You are correct that this should be the "monitor scope domain id" rather
>>>>>>>>> than the first SNC domain that appears. I'll change to use that. I don't
>>>>>>>>> think it helps in removing the per-domain display_id.
>>>>>>>>
>>>>>>>> Wouldn't the file metadata then be the "display_id"?
>>>>>>>
>>>>>>> Yes. The metadata is the display_id for files that need to sum across
>>>>>>> SNC nodes, but the domain id for ones where no summation is needed.
>>>>>>
>>>>>> Right ... and there is a "sum" flag to tell which is which?
>>>>>
>>>>> Yes. sum==0 means the domid field names the one and only domain to
>>>>> report for this resctrl monitor file. sum==1 means the domid field is
>>>>> the display_id: all domains with this display_id must be summed to
>>>>> produce the result presented to the user.
>>>>>
>>>>> I've tried to capture that in the kerneldoc comment for struct mon_event.
>>>>> Here's what I'm planning to include in v18 (Outlook will probably mangle
>>>>> the formatting ... just imagine that the text lines up neatly):
>>>>>
>>>>> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
>>>>> index 49440f194253..3411557d761a 100644
>>>>> --- a/arch/x86/kernel/cpu/resctrl/internal.h
>>>>> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
>>>>> @@ -132,14 +132,19 @@ struct mon_evt {
>>>>>   *                     as kernfs private data
>>>>>   * @rid:               Resource id associated with the event file
>>>>>   * @evtid:             Event id associated with the event file
>>>>> - * @domid:             The domain to which the event file belongs
>>>>> + * @sum:               Set when event must be summed across multiple
>>>>> + *                     domains.
>>>>> + * @domid:             When @sum is zero this is the domain to which
>>>>> + *                     the event file belongs. When sum is one this
>>>>> + *                     is the display_id of all domains to be summed
>>>>
>>>> Here is where I would like to understand why it cannot just be
>>>> "When sum is one this is the domain id of the scope at which (for which?)
>>>> the events must be summed." Although you already mentioned this will be
>>>> clear in the next posting.
>>>>
>>>>>   * @u:                 Name of the bit fields struct
>>>>>   */
>>>>>  union mon_data_bits {
>>>>>         void *priv;
>>>>>         struct {
>>>>>                 unsigned int rid                : 10;
>>>>> -               enum resctrl_event_id evtid     : 8;
>>>>> +               enum resctrl_event_id evtid     : 7;
>>>>> +               unsigned int sum                : 1;
>>>>>                 unsigned int domid              : 14;
>>>>>         } u;
>>>>>  };
>>>>>
>>>>> -Tony
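
To spell out how a reader of an event file would act on those bits, here is a
rough sketch: read_one_domain() is a made-up helper and the domain's own id is
assumed to live in d->hdr.id; none of this is code from the series.

static int mon_event_sum(struct rdt_resource *r, union mon_data_bits md,
                         u64 *total)
{
        struct rdt_mon_domain *d;
        u64 val;
        int err;

        *total = 0;
        list_for_each_entry(d, &r->mon_domains, hdr.list) {
                /*
                 * sum==1: domid is a display_id, include every SNC domain
                 * sharing that L3 instance.
                 * sum==0: domid names exactly one domain.
                 */
                if (md.u.sum ? d->display_id != md.u.domid
                             : d->hdr.id != md.u.domid)
                        continue;
                err = read_one_domain(d, md.u.evtid, &val);
                if (err)
                        return err;
                *total += val;
        }
        return 0;
}
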
>>>
>>> Maybe an example will help. Assume an SNC system with two sockets,
>>> three SNC nodes per socket, supporting only monitoring. The only domain
>>> list created by resctrl is the mon_domains list on the RDT_RESOURCE_L3
>>> resource. It looks like this (with "display_id" abbreviated to
>>> "dspl" to keep the picture small):
>>>
>>>
>>>        <------ SNC NODES ON SOCKET 0 ----->   <------ SNC NODES ON SOCKET 1 ------>
>>> ----> +----------+ +----------+ +----------+ +----------+ +----------+ +----------+
>>>       | id = 0   | | id = 1   | | id = 2   | | id = 3   | | id = 4   | | id = 5   |
>>>       |          | |          | |          | |          | |          | |          |
>>>       | dspl = 0 | | dspl = 0 | | dspl = 0 | | dspl = 1 | | dspl = 1 | | dspl = 1 |
>>>       |          | |          | |          | |          | |          | |          |
>>>       +----------+ +----------+ +----------+ +----------+ +----------+ +----------+
>>>
>>> Reading the per-SNC-node monitor values looks just the same as in the
>>> non-SNC case. The struct rmid_read passed across the smp_call*() carries
>>> the resource, domain, and event, and reading the counters is essentially
>>> unchanged.
>>>
>>> Reading a file that sums event counts for the SNC nodes on socket 1 needs
>>> to find each "struct rdt_mon_domain" instance that is part of socket 1.
>>> I'm doing that with metadata in the file that says sum=1 (something needs
>>> to be added up) and domid=1 (the things to be added are those with
>>> display_id = 1). So the code reads:
>>>
>>> 	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
>>> 		if (d->display_id == rr->d->display_id) {
>>> 			... call stuff to read and sum for domain "d"
>>> 		}
>>> 	}
>>>
>>> The display_id is "the domain id of the scope at which (for which?)
>>> the events must be summed" in your text above.
>>
>> My point remains that it is not clear (to me) why it is required to
>> carry the display_id around.
>>
>>  	list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
>> 		/* determine @id of @d at rr->r->mon_display_scope */
>>  		if (id == domid) {
>>  			... call stuff to read and sum for domain "d"
>>  		}
>>  	}
> 
> That "determine @id of @d at rr->r->mon_display_scope" is:
> 
> 	display_id = get_domain_id_from_scope(cpumask_first(rr->d->hdr.cpu_mask), rr->r->mon_display_scope);
> 	if (display_id < 0) {
> 		take some error action
> 	}
> 
> So it certainly isn't *required* to carry display_id around. But doing
> so makes the code simpler. I could bury the long line into a helper

Is "if (d->display_id == rr->d->display_id)" really "simpler"? It is
shorter I agree, but I would argue that it is much harder to understand
what the code is trying to do. The reader needs to understand what
"display_id" means, how the state is maintained, how
the values propagated to this call site, etc. With a query like above
it should be obvious what the code does.

> macro/function. But I can't bury the error check.

If this is an error then it is a kernel bug and should be handled
appropriately.
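
For example (a sketch only, building on the loop sketched earlier;
WARN_ON_ONCE() is one way to flag such a kernel bug, not something taken
from the patches):

        list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
                int id = get_domain_id_from_scope(cpumask_first(&d->hdr.cpu_mask),
                                                  rr->r->mon_display_scope);

                /* A negative id here would be a kernel bug: warn and skip. */
                if (WARN_ON_ONCE(id < 0))
                        continue;
                if (id == domid) {
                        /* ... read and sum for domain "d" ... */
                }
        }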

> 
> I'd also need to change get_domain_id_from_scope() from "static" to
> global so it can be used in other files besides core.c.

Is this a problem?

> Note that there are several places where I need to use display_id,
> computing it at run time in each place, but it seems so much easier to
> do it once at domain creation time.

Easier to code, perhaps, but I do not see how it is "easy" to understand
and maintain.
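
To make the comparison concrete, the on-demand lookup could also sit behind a
small wrapper so call sites stay readable without caching the value per
domain. This is a sketch only, assuming get_domain_id_from_scope() is made
non-static as discussed (the wrapper name is made up):

static inline int mon_domain_display_id(struct rdt_resource *r,
                                        struct rdt_mon_domain *d)
{
        /* Derive the display-scope id when needed instead of storing it. */
        return get_domain_id_from_scope(cpumask_first(&d->hdr.cpu_mask),
                                        r->mon_display_scope);
}

A call site would then compare mon_domain_display_id(rr->r, d) against the
file's domid.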

I think we have now repeated the same conversation twice. Previously you
promised that your design would be clear to me in the next version, and
I have already stated twice that I am OK with that.

Reinette
