Message-ID: <bd3afbfd-3372-cac9-600e-ace19ddd3199@arm.com>
Date: Wed, 13 Dec 2023 18:03:04 +0000
From: James Morse <james.morse@....com>
To: Reinette Chatre <reinette.chatre@...el.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Cc: Fenghua Yu <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Babu Moger <Babu.Moger@....com>,
shameerali.kolothum.thodi@...wei.com,
D Scott Phillips OS <scott@...amperecomputing.com>,
carl@...amperecomputing.com, lcherian@...vell.com,
bobo.shaobowang@...wei.com, tan.shaopeng@...itsu.com,
baolin.wang@...ux.alibaba.com, Jamie Iles <quic_jiles@...cinc.com>,
Xin Hao <xhao@...ux.alibaba.com>, peternewman@...gle.com,
dfustini@...libre.com, amitsinght@...vell.com
Subject: Re: [PATCH v7 02/24] x86/resctrl: kfree() rmid_ptrs from
rdtgroup_exit()
Hi Reinette,
On 09/11/2023 17:39, Reinette Chatre wrote:
> Hi James,
>
> Subject refers to rdtgroup_exit() but the patch is actually changing
> resctrl_exit().
I'll fix that.
> On 10/25/2023 11:03 AM, James Morse wrote:
>> rmid_ptrs[] is allocated from dom_data_init() but never free()d.
>>
>> While the exit text ends up in the linker script's DISCARD section,
>> the direction of travel is for resctrl to be/have loadable modules.
>>
>> Add resctrl_exit_mon_l3_config() to cleanup any memory allocated
>> by rdt_get_mon_l3_config().
>
> To match what patch actually does it looks like this should rather be:
> "Add resctrl_exit_mon_l3_config()" -> "Add resctrl_put_mon_l3_config()"
>
>>
>> There is no reason to backport this to a stable kernel.
[...]
>> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
>> index 19e0681f0435..0056c9962a44 100644
>> --- a/arch/x86/kernel/cpu/resctrl/core.c
>> +++ b/arch/x86/kernel/cpu/resctrl/core.c
>> @@ -992,7 +992,13 @@ late_initcall(resctrl_late_init);
>>
>> static void __exit resctrl_exit(void)
>> {
>> + struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>> +
>> cpuhp_remove_state(rdt_online);
>> +
>> + if (r->mon_capable)
>> + rdt_put_mon_l3_config(r);
>> +
>> rdtgroup_exit();
>> }
>
> I expect cleanup to do the inverse of init. I do not know what was the
> motivation for the rdtgroup_exit() to follow cpuhp_remove_state()
cpuhp_remove_state() will invoke the hotplug offline callbacks, making it look to resctrl
as if all the CPUs are offline. That makes it impossible for rdtgroup_exit() to race with
the hotplug notifiers. (If you could run this code...)
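(As a minimal sketch of the pattern - the callback and state names below are made up,
not the resctrl ones - cpuhp_remove_state() runs the teardown callback on every online
CPU before it returns, which is what closes that window:)

	#include <linux/cpuhotplug.h>
	#include <linux/module.h>

	/* Sketch only: per-CPU setup/teardown for a dynamic hotplug state. */
	static int demo_online(unsigned int cpu)
	{
		return 0;	/* bring per-CPU state up */
	}

	static int demo_offline(unsigned int cpu)
	{
		return 0;	/* tear per-CPU state down */
	}

	static enum cpuhp_state demo_state;

	static int __init demo_init(void)
	{
		int ret;

		ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
					demo_online, demo_offline);
		if (ret < 0)
			return ret;

		demo_state = ret;
		return 0;
	}

	static void __exit demo_exit(void)
	{
		/* Invokes demo_offline() on each online CPU, then removes the state. */
		cpuhp_remove_state(demo_state);
	}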
> but I
> was expecting this new cleanup to be done after rdtgroup_exit() to be inverse
> of init. This cleanup is inserted in middle of two existing cleanup - could
> you please elaborate how this location was chosen?
rdtgroup_exit() does nothing with the resctrl structures; it removes the sysfs and debugfs
entries and unregisters the filesystem.
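(Roughly, from memory - the exact body may differ - it is only filesystem teardown:)

	void __exit rdtgroup_exit(void)
	{
		/* Filesystem teardown only: no resctrl data structures are touched. */
		debugfs_remove_recursive(debugfs_resctrl);
		unregister_filesystem(&rdt_fs_type);
		sysfs_remove_mount_point(fs_kobj, "resctrl");
	}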
Hypothetically, you can't observe any effect of the rmid_ptrs array being freed as
all the CPUs are offline and the overflow/limbo threads should have been cancelled.
Once cpuhp_remove_state() has been called, this really doesn't matter.
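(To make the freeing concrete - a hypothetical sketch of releasing rmid_ptrs under
rdtgroup_mutex; the name rdt_put_mon_l3_config() follows the rename suggested above,
and the details are illustrative rather than the exact hunk elided from the quote:)

	/* Hypothetical sketch: free the RMID array allocated by dom_data_init(). */
	static void __exit dom_data_exit(void)
	{
		mutex_lock(&rdtgroup_mutex);

		kfree(rmid_ptrs);
		rmid_ptrs = NULL;

		mutex_unlock(&rdtgroup_mutex);
	}

	void __exit rdt_put_mon_l3_config(struct rdt_resource *r)
	{
		dom_data_exit();
	}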
Thanks,
James