Message-ID: <1ccd6be5-1dbd-c4a5-659f-ae20761dcce7@intel.com>
Date: Thu, 24 Aug 2023 15:58:11 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: James Morse <james.morse@....com>, <x86@...nel.org>,
<linux-kernel@...r.kernel.org>
CC: Fenghua Yu <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Babu Moger <Babu.Moger@....com>,
<shameerali.kolothum.thodi@...wei.com>,
D Scott Phillips OS <scott@...amperecomputing.com>,
<carl@...amperecomputing.com>, <lcherian@...vell.com>,
<bobo.shaobowang@...wei.com>, <tan.shaopeng@...itsu.com>,
<xingxin.hx@...nanolis.org>, <baolin.wang@...ux.alibaba.com>,
Jamie Iles <quic_jiles@...cinc.com>,
Xin Hao <xhao@...ux.alibaba.com>, <peternewman@...gle.com>,
<dfustini@...libre.com>
Subject: Re: [PATCH v5 06/24] x86/resctrl: Track the number of dirty RMID a
CLOSID has

Hi James,

On 8/24/2023 9:53 AM, James Morse wrote:
> Hi Reinette,
>
> On 09/08/2023 23:33, Reinette Chatre wrote:
>> On 7/28/2023 9:42 AM, James Morse wrote:
>>> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> index de91ca781d9f..44addc0126fc 100644
>>> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
>>> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
>>> @@ -43,6 +43,13 @@ struct rmid_entry {
>>> */
>>> static LIST_HEAD(rmid_free_lru);
>>>
>>> +/**
>>> + * @closid_num_dirty_rmid The number of dirty RMID each CLOSID has.
>>> + * Only allocated when CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID is defined.
>>> + * Indexed by CLOSID. Protected by rdtgroup_mutex.
>>> + */
>>> +static int *closid_num_dirty_rmid;
>>> +
>>
>> Will the values ever be negative?
>
> Nope, int is just fewer keystrokes. I'll change it to unsigned int.
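> i.e. the declaration would just become:
>
> static unsigned int *closid_num_dirty_rmid;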
>
>
>>> /**
>>> * @rmid_limbo_count count of currently unused but (potentially)
>>> * dirty RMIDs.
>
>
>>> @@ -782,13 +802,28 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
>>> static int dom_data_init(struct rdt_resource *r)
>>> {
>>> u32 idx_limit = resctrl_arch_system_num_rmid_idx();
>>> + u32 num_closid = resctrl_arch_get_num_closid(r);
>>> struct rmid_entry *entry = NULL;
>>> u32 idx;
>>> int i;
>>>
>>> + if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
>>> + int *tmp;
>>> +
>>> + tmp = kcalloc(num_closid, sizeof(int), GFP_KERNEL);
>>> + if (!tmp)
>>> + return -ENOMEM;
>>> +
>>> + mutex_lock(&rdtgroup_mutex);
>>> + closid_num_dirty_rmid = tmp;
>>> + mutex_unlock(&rdtgroup_mutex);
>>> + }
>>> +
>>
>> It does no harm but I cannot see why the mutex is needed here.
>
> It's belt-and-braces to ensure that all accesses to that global variable are protected by
> that lock. This avoids giving me a memory ordering headache.
> rmid_ptrs and the call to __rmid_entry() that dereferences it should probably get the same
> treatment.

This is fair.

> I'll move the locking to the caller as the least-churny way of covering both.

This is not clear to me. From what I can tell, all the sites you mention
are in dom_data_init(), so keeping the locking there (but covering the
additional sites) seems appropriate?
>
>>> rmid_ptrs = kcalloc(idx_limit, sizeof(struct rmid_entry), GFP_KERNEL);
>>> - if (!rmid_ptrs)
>>> + if (!rmid_ptrs) {
>>> + kfree(closid_num_dirty_rmid);
>>> return -ENOMEM;
>>> + }
>>>
>>> for (i = 0; i < idx_limit; i++) {
>>> entry = &rmid_ptrs[i];
>>
>> How will this new memory be freed? Actually I cannot find where
>> rmid_ptrs is freed either .... is a "dom_data_free()" needed?
>
> Oh that's not deliberate? :P
>
> rmid_ptrs has been immortal since the beginning. The good news is resctrl_exit() goes in
> the exitcall section, which is in the DISCARDS section of the linker script as resctrl
> can't be built as a module. It isn't possible to tear resctrl down, so no-one will notice
> this leak.
>
> Something on my eternal-todo-list is to make the filesystem parts of resctrl a loadable
> module (if Tony doesn't get there first!). That would flush this sort of thing out.
> Last time I triggered resctrl_exit() manually, not all of the files got
> cleaned up - I haven't investigated it further.
>
>
> I agree it should probably have a kfree() call somewhere under rdtgroup_exit(). As it's
> only the L3 that needs any of this, I'll add resctrl_exit_mon_l3_config() for
> rdtgroup_exit() to call.

I'd prefer that allocation and free be clearly symmetrical. Doing so helps
to make the code easier to understand. rdtgroup_exit() is intended to clean
up after rdtgroup_init(). Since this allocation does not occur within
rdtgroup_init(), I do not think rdtgroup_exit() is the best place for this
cleanup; resctrl_exit() looks more appropriate to me. Having a dom_data_free()
to clean up after dom_data_init() also seems like an addition that would make
the code easier to understand, though I say that without a clear picture of
what you have in mind for resctrl_exit_mon_l3_config().
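
For example, just a sketch of the symmetry I mean (name and details up to
you; kfree(NULL) is safe so both pointers can be freed unconditionally):

static void dom_data_free(void)
{
	/* Take the lock that protects the globals while tearing them down. */
	mutex_lock(&rdtgroup_mutex);
	kfree(closid_num_dirty_rmid);
	closid_num_dirty_rmid = NULL;

	kfree(rmid_ptrs);
	rmid_ptrs = NULL;
	mutex_unlock(&rdtgroup_mutex);
}

with resctrl_exit() calling dom_data_free() to undo dom_data_init().
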
>
> Another option is to rip out all the __exit text as it's discarded anyway. But if loadable
> modules are the direction of travel, it probably makes more sense to fix it.

My preference is to do the cleanup properly.

Reinette