Message-ID: <f49df2d42d7e97b61a5e26ff4d89ede5fbe37a35.camel@linux.intel.com>
Date: Wed, 25 Sep 2019 08:20:30 -0700
From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
To: Yunfeng Ye <yeyunfeng@...wei.com>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
bvanassche@....org, bhelgaas@...gle.com, dsterba@...e.com,
"tglx@...utronix.de" <tglx@...utronix.de>,
sakari.ailus@...ux.intel.com
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] async: Let kfree() out of the critical area of the lock

On Wed, 2019-09-25 at 20:52 +0800, Yunfeng Ye wrote:
> It's not necessary to call kfree() inside the lock's critical section,
> so move it outside.
>
> Signed-off-by: Yunfeng Ye <yeyunfeng@...wei.com>
> ---
> kernel/async.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/async.c b/kernel/async.c
> index 4f9c1d6..1de270d 100644
> --- a/kernel/async.c
> +++ b/kernel/async.c
> @@ -135,12 +135,12 @@ static void async_run_entry_fn(struct work_struct *work)
> list_del_init(&entry->domain_list);
> list_del_init(&entry->global_list);
>
> - /* 3) free the entry */
> - kfree(entry);
> atomic_dec(&entry_count);
> -
> spin_unlock_irqrestore(&async_lock, flags);
>
> + /* 3) free the entry */
> + kfree(entry);
> +
> /* 4) wake up any waiters */
> wake_up(&async_done);
> }

It probably wouldn't hurt to update the patch description to mention that
async_schedule_node_domain() already does the allocation outside of the
lock and only takes the lock for the list addition and the entry_count
increment, so this change just makes the free path consistent with that.

Otherwise the change itself looks safe to me, though I am not sure there
is any performance gain to be had, so this is mostly a cosmetic patch.

Reviewed-by: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
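
For reference, here is a minimal userspace sketch of the pattern under
discussion: allocate and free outside the lock, do the list and counter
updates inside the critical section. A pthread mutex stands in for the
kernel spinlock, and struct my_entry, my_lock, my_list and my_count are
hypothetical names; this is not the actual kernel/async.c code.

/*
 * Sketch only: allocation and freeing happen outside the lock, while the
 * list manipulation and counter update stay inside the critical section.
 */
#include <pthread.h>
#include <stdlib.h>

struct my_entry {
	struct my_entry *next;
	void (*func)(void *data);
	void *data;
};

static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;
static struct my_entry *my_list;
static int my_count;

/* Allocate outside the lock, then add to the list under the lock. */
static int my_schedule(void (*func)(void *data), void *data)
{
	struct my_entry *entry = malloc(sizeof(*entry));

	if (!entry)
		return -1;

	entry->func = func;
	entry->data = data;

	pthread_mutex_lock(&my_lock);
	entry->next = my_list;
	my_list = entry;
	my_count++;
	pthread_mutex_unlock(&my_lock);
	return 0;
}

/* Unlink one entry under the lock, then run and free it afterwards. */
static void my_run_one(void)
{
	struct my_entry *entry;

	pthread_mutex_lock(&my_lock);
	entry = my_list;
	if (entry) {
		my_list = entry->next;
		my_count--;
	}
	pthread_mutex_unlock(&my_lock);

	if (!entry)
		return;

	entry->func(entry->data);
	free(entry);	/* freed outside the critical section */
}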