Message-ID: <ZlirIUbLw8fLHe0j@agluck-desk3.sc.intel.com>
Date: Thu, 30 May 2024 09:36:49 -0700
From: Tony Luck <tony.luck@...el.com>
To: Reinette Chatre <reinette.chatre@...el.com>
Cc: Fenghua Yu <fenghua.yu@...el.com>,
Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>,
Peter Newman <peternewman@...gle.com>,
James Morse <james.morse@....com>, Babu Moger <babu.moger@....com>,
Drew Fustini <dfustini@...libre.com>,
Dave Martin <Dave.Martin@....com>, x86@...nel.org,
linux-kernel@...r.kernel.org, patches@...ts.linux.dev
Subject: Re: [PATCH v19 00/20] Add support for Sub-NUMA cluster (SNC) systems
On Wed, May 29, 2024 at 07:46:27PM -0700, Reinette Chatre wrote:
> Hi Tony,
>
> On 5/29/24 1:20 PM, Tony Luck wrote:
> > On Tue, May 28, 2024 at 03:55:29PM -0700, Reinette Chatre wrote:
> > > Hi Tony,
> > > > 13: Wordsmith commit into imperative.
> > > > I looked at using kobject_has_children() to check for empty
> > > > directory, but it needs a "struct kobject *" and all I have
> > > > is "struct kernfs_node *". I'm now checking how many CPUs
> > >
> > > Consider how kobject_has_children() uses that struct kobject *.
> > > Specifically:
> > > return kobj->sd && kobj->sd->dir.subdirs
> > >
> > > It operates on kobj->sd, which is exactly what you have: struct kernfs_node.
> >
> > So right. My turn to grumble about other people's choice of names. If
> > that field was named "kn" instead of "sd" I would have spotted this
> > too.
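[Editor's note: the emptiness check being discussed can be sketched outside the kernel with a stand-in struct. `knode` and `remove_whole_dir` below are hypothetical names for illustration; the real field is `kernfs_node::dir.subdirs`, the same one `kobject_has_children()` reads via `kobj->sd`.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the "is this directory about to become empty?" test.
 * struct knode is a stand-in, not the kernel's struct kernfs_node.
 */
struct knode {
	unsigned long subdirs;	/* count of child directories */
};

/*
 * Mirrors the decision in rmdir_mondata_subdir_allrdtgrp() below:
 * when at most one SNC sub-directory remains, remove the parent
 * "mon_*" directory outright; otherwise remove only the named child.
 */
static bool remove_whole_dir(const struct knode *kn)
{
	return kn->subdirs <= 1;
}
```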
> >
> > > > remain in ci->shared_cpu_map to detect whether this is the
> > > > last SNC node.
> > >
> > > hmmm, ok, will take a look ... but please finalize discussion of a patch series
> > > before submitting a new series that rejects feedback without discussion and
> > > does something completely different in new version.
> >
> > Reinette,
> >
> > So here's what rmdir_mondata_subdir_allrdtgrp() looks like using the
> > subdirs check. It might need an update/better header comment.
> >
> > -Tony
> >
> > ---
> >
> > /*
> > * Remove all subdirectories of mon_data of ctrl_mon groups
> > * and monitor groups with given domain id.
>
> (note comment still considers that domain id is parameter)
Will fix.
> > */
> > static void rmdir_mondata_subdir_allrdtgrp(struct rdt_resource *r,
> > struct rdt_mon_domain *d)
> > {
> > struct rdtgroup *prgrp, *crgrp;
> > struct kernfs_node *kn;
> > char subname[32];
>
> I wonder if static checkers will know that this cannot be used
> uninitialized?
I wondered that too. There are no complaints from gcc. How do people
deal with false positives from static checkers? Simplest would be to
provide an initializer:
char subname[32] = "";
While that might shut up the static checker, it would be more confusing
for human readers.
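[Editor's note: an alternative to a dummy initializer is restructuring so the write and the read of subname are guarded by the same variable, which most checkers can track. All names below (`pick_rmdir_name`, `SCOPE_*`) are made up for the sketch; this shows the pattern, not the resctrl code.]

```c
#include <stdbool.h>
#include <stdio.h>

#define SCOPE_L3	0
#define SCOPE_NODE	1

/*
 * Capture the correlated condition once in a bool, so the checker can
 * see that subname is only read on paths where it was written.
 */
static const char *pick_rmdir_name(int scope, int id,
				   char *subname, size_t len)
{
	bool snc_mode = (scope != SCOPE_L3);

	if (snc_mode)
		snprintf(subname, len, "mon_sub_L3_%02d", id);

	/* subname is read only when snc_mode, i.e. only when written */
	return snc_mode ? subname : "whole-dir";
}
```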
> > char name[32];
> >
> > sprintf(name, "mon_%s_%02d", r->name, d->ci->id);
> > if (r->mon_scope != RESCTRL_L3_CACHE) {
> > /*
> > * SNC mode: Unless the last domain is being removed,
> > * just remove the SNC subdomain.
> > */
> > sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id);
> > }
> >
> > list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
> > kn = kernfs_find_and_get(prgrp->mon.mon_data_kn, name);
> > if (!kn)
> > continue;
> >
> > if (kn->dir.subdirs <= 1)
> > kernfs_remove(kn);
> > else
> > kernfs_remove_by_name(kn, subname);
> >
> > list_for_each_entry(crgrp, &prgrp->mon.crdtgrp_list, mon.crdtgrp_list) {
> > kn = kernfs_find_and_get(crgrp->mon.mon_data_kn, name);
> > if (!kn)
> > continue;
> >
> > if (kn->dir.subdirs <= 1)
> > kernfs_remove(kn);
> > else
> > kernfs_remove_by_name(kn, subname);
> > }
> > }
> > }
>
> This solution looks more intuitive to me. I do think that it may be
> missing some kernfs_put()'s?
There aren't any kernfs_put()'s in the existing code. Resctrl takes
an extra hold on the CTRL_MON and MON directories and jumps through some
hoops to drop that after the directory has been removed. But the monitor
directories have nothing like that.
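[Editor's note: the lookup/put contract Reinette is pointing at can be modeled with a toy refcount. `toy_find_and_get` and `toy_put` are made-up stand-ins, not the kernfs implementation; the point is only that each successful `kernfs_find_and_get()` returns with a hold that a matching `kernfs_put()` would drop.]

```c
#include <stddef.h>

/* Toy refcounted node; stand-in for struct kernfs_node. */
struct node {
	int refcnt;
};

/* Like kernfs_find_and_get(): a successful lookup returns with a hold. */
static struct node *toy_find_and_get(struct node *n)
{
	if (n)
		n->refcnt++;
	return n;
}

/* Like kernfs_put(): drop the hold taken by the lookup. */
static void toy_put(struct node *n)
{
	if (n)
		n->refcnt--;
}
```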
> Reinette
>
> ps. Please do give me a couple of days more with this series before you
> submit a new version.
Sure. Will do.
-Tony