Message-ID: <CAKgT0Uf+EP8yGf93=R3XK0Y=0To0KQDys0O1BkG-Odej3Rwj5A@mail.gmail.com>
Date:   Mon, 6 Jan 2020 08:18:34 -0800
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Wei Yang <richardw.yang@...ux.intel.com>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>, vdavydov.dev@...il.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        cgroups@...r.kernel.org, linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Yang Shi <yang.shi@...ux.alibaba.com>
Subject: Re: [RFC PATCH] mm: thp: grab the lock before manipulating defer list

On Fri, Jan 3, 2020 at 6:34 AM Wei Yang <richardw.yang@...ux.intel.com> wrote:
>
> As in all the other places, grab the lock before manipulating the defer
> list. The current implementation may face a race condition.
>
> Fixes: 87eaceb3faa5 ("mm: thp: make deferred split shrinker memcg aware")
>
> Signed-off-by: Wei Yang <richardw.yang@...ux.intel.com>
>
> ---
> I noticed this difference during code reading and was just confused by
> it. No specific test has been done since I have limited knowledge about
> cgroup.
>
> Maybe I missed something important?
> ---
>  mm/memcontrol.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index bc01423277c5..62b7ec34ef1a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5368,12 +5368,12 @@ static int mem_cgroup_move_account(struct page *page,
>         }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +       spin_lock(&from->deferred_split_queue.split_queue_lock);
>         if (compound && !list_empty(page_deferred_list(page))) {
> -               spin_lock(&from->deferred_split_queue.split_queue_lock);
>                 list_del_init(page_deferred_list(page));
>                 from->deferred_split_queue.split_queue_len--;
> -               spin_unlock(&from->deferred_split_queue.split_queue_lock);
>         }
> +       spin_unlock(&from->deferred_split_queue.split_queue_lock);
>  #endif
>         /*
>          * It is safe to change page->mem_cgroup here because the page

So I suspect the lock placement has to do with the compound boolean
value passed to the function.
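
For reference, if I am reading the surrounding code right, the
signature is roughly:

static int mem_cgroup_move_account(struct page *page,
                                   bool compound,
                                   struct mem_cgroup *from,
                                   struct mem_cgroup *to)

so compound is fixed for the duration of the call.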

One thing you might want to do is pull the "if (compound)" check out
and place it outside of the spin_lock() call. It would then simplify
this significantly, so it is something like:
if (compound) {
        struct list_head *list;

        spin_lock(&from->deferred_split_queue.split_queue_lock);
        list = page_deferred_list(page);
        if (!list_empty(list)) {
                list_del_init(list);
                from->deferred_split_queue.split_queue_len--;
        }
        spin_unlock(&from->deferred_split_queue.split_queue_lock);
}

Same for the block below. I would pull the check for compound outside
of the spin_lock() call since it is a value that shouldn't change, and
that would eliminate taking an unnecessary lock in the non-compound
case. A sketch follows.
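
Something like this (untested, just to illustrate the same pattern on
the "to" side, using the names from the hunk below):

if (compound) {
        struct list_head *list;

        spin_lock(&to->deferred_split_queue.split_queue_lock);
        list = page_deferred_list(page);
        if (list_empty(list)) {
                list_add_tail(list,
                              &to->deferred_split_queue.split_queue);
                to->deferred_split_queue.split_queue_len++;
        }
        spin_unlock(&to->deferred_split_queue.split_queue_lock);
}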

> @@ -5385,13 +5385,13 @@ static int mem_cgroup_move_account(struct page *page,
>         page->mem_cgroup = to;
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +       spin_lock(&to->deferred_split_queue.split_queue_lock);
>         if (compound && list_empty(page_deferred_list(page))) {
> -               spin_lock(&to->deferred_split_queue.split_queue_lock);
>                 list_add_tail(page_deferred_list(page),
>                               &to->deferred_split_queue.split_queue);
>                 to->deferred_split_queue.split_queue_len++;
> -               spin_unlock(&to->deferred_split_queue.split_queue_lock);
>         }
> +       spin_unlock(&to->deferred_split_queue.split_queue_lock);
>  #endif
>
>         spin_unlock_irqrestore(&from->move_lock, flags);
> --
