Message-ID: <alpine.DEB.2.21.2001031128200.160920@chino.kir.corp.google.com>
Date: Fri, 3 Jan 2020 11:29:06 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Wei Yang <richardw.yang@...ux.intel.com>
cc: hannes@...xchg.org, mhocko@...nel.org, vdavydov.dev@...il.com,
akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, yang.shi@...ux.alibaba.com
Subject: Re: [RFC PATCH] mm: thp: grab the lock before manipulation defer
list
On Fri, 3 Jan 2020, Wei Yang wrote:
> As in all the other places, grab the lock before manipulating the defer
> list.  The current implementation may face a race condition.
>
> Fixes: 87eaceb3faa5 ("mm: thp: make deferred split shrinker memcg aware")
>
> Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
>
> ---
> I noticed the difference during code reading and was just confused by it.
> No specific test has been done since I have limited knowledge of cgroup.
>
> Maybe I am missing something important?
The check for !list_empty(page_deferred_list(page)) must certainly be
serialized with the list_del_init(page_deferred_list(page)) that follows
it.
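
To illustrate the check-then-act problem, here is a minimal userspace
sketch (my own model, not the kernel code; on_list(), del_init() and the
thread functions are made-up stand-ins for list_empty(), list_del_init()
and the two racing paths): if the emptiness test runs outside
split_queue_lock, a concurrent shrinker pass can empty the queue between
the test and the unlink, and the length counter is decremented twice.

/*
 * Userspace model of the check-then-act race.  This is an illustration
 * only; the helpers mirror list_empty()/list_del_init() in spirit.
 */
#include <pthread.h>
#include <stdio.h>

struct node { struct node *prev, *next; };

static struct node head = { &head, &head };
static struct node item;
static long queue_len;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static int on_list(struct node *n) { return n->next != n; }

static void del_init(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = n;	/* self-linked, like list_del_init() */
}

/* Racy variant: the emptiness check runs outside the lock. */
static void *move_account_racy(void *arg)
{
	if (on_list(&item)) {		/* may be a stale observation */
		pthread_mutex_lock(&queue_lock);
		del_init(&item);	/* second unlink is a no-op here ... */
		queue_len--;		/* ... but the counter drops to -1 */
		pthread_mutex_unlock(&queue_lock);
	}
	return NULL;
}

/* Fixed variant, matching the patch: check and unlink under one lock. */
static void *move_account_fixed(void *arg)
{
	pthread_mutex_lock(&queue_lock);
	if (on_list(&item)) {
		del_init(&item);
		queue_len--;
	}
	pthread_mutex_unlock(&queue_lock);
	return NULL;
}

/* Mimics the shrinker emptying the queue concurrently. */
static void *split_scan(void *arg)
{
	pthread_mutex_lock(&queue_lock);
	if (on_list(&item)) {
		del_init(&item);
		queue_len--;
	}
	pthread_mutex_unlock(&queue_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* queue the item once */
	item.prev = head.prev;
	item.next = &head;
	head.prev->next = &item;
	head.prev = &item;
	queue_len = 1;

	/* swap in move_account_fixed to see the counter stay correct */
	pthread_create(&a, NULL, move_account_racy, NULL);
	pthread_create(&b, NULL, split_scan, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	printf("queue_len = %ld (expected 0)\n", queue_len);
	return 0;
}

The second hunk of the patch applies the same pattern on the "to" side:
the list_empty() test and the list_add_tail() are kept under the same
split_queue_lock.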
> ---
> mm/memcontrol.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bc01423277c5..62b7ec34ef1a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5368,12 +5368,12 @@ static int mem_cgroup_move_account(struct page *page,
> }
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + spin_lock(&from->deferred_split_queue.split_queue_lock);
> if (compound && !list_empty(page_deferred_list(page))) {
> - spin_lock(&from->deferred_split_queue.split_queue_lock);
> list_del_init(page_deferred_list(page));
> from->deferred_split_queue.split_queue_len--;
> - spin_unlock(&from->deferred_split_queue.split_queue_lock);
> }
> + spin_unlock(&from->deferred_split_queue.split_queue_lock);
> #endif
> /*
> * It is safe to change page->mem_cgroup here because the page
> @@ -5385,13 +5385,13 @@ static int mem_cgroup_move_account(struct page *page,
> page->mem_cgroup = to;
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + spin_lock(&to->deferred_split_queue.split_queue_lock);
> if (compound && list_empty(page_deferred_list(page))) {
> - spin_lock(&to->deferred_split_queue.split_queue_lock);
> list_add_tail(page_deferred_list(page),
> &to->deferred_split_queue.split_queue);
> to->deferred_split_queue.split_queue_len++;
> - spin_unlock(&to->deferred_split_queue.split_queue_lock);
> }
> + spin_unlock(&to->deferred_split_queue.split_queue_lock);
> #endif
>
> spin_unlock_irqrestore(&from->move_lock, flags);
> --
> 2.17.1
>
>
>