Message-ID: <CAHbLzkoCds-WOoN5CKas4DThk8hU65pgtMcga10QEqEmKU2f5A@mail.gmail.com>
Date:   Tue, 7 Dec 2021 16:26:28 -0800
From:   Yang Shi <shy828301@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Nico Pache <npache@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Kirill Tkhai <ktkhai@...tuozzo.com>,
        Roman Gushchin <guro@...com>, Vlastimil Babka <vbabka@...e.cz>,
        Vladimir Davydov <vdavydov.dev@...il.com>, raquini@...hat.com,
        Michal Hocko <mhocko@...e.com>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v2 1/1] mm/vmscan.c: Prevent allocating shrinker_info on
 offlined nodes

On Tue, Dec 7, 2021 at 3:44 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Tue,  7 Dec 2021 17:40:13 -0500 Nico Pache <npache@...hat.com> wrote:
>
> > We have run into a panic caused by a shrinker allocation being attempted
> > on an offlined node.
> >
> > Our crash analysis has determined that the issue originates from trying
> > to allocate pages on an offlined node in expand_one_shrinker_info. This
> > function makes the incorrect assumption that we can allocate on any node.
> > To correct this, we make sure the node is online before attempting an
> > allocation. If it is not online, choose the closest node.
>
> This isn't fully accurate, is it?  We could allocate on a node which is
> presently offline but which was previously onlined, by testing
> NODE_DATA(nid).
>
> It isn't entirely clear to me from the v1 discussion why this approach
> isn't being taken?
>
> AFAICT the proposed patch is *already* taking this approach, by having
> no protection against a concurrent or subsequent node offlining?
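
For illustration, here is a minimal sketch of the NODE_DATA()-based test
Andrew describes, dropped into the same place in expand_one_shrinker_info()
as the hunk quoted below; the numa_mem_id() fallback is simply carried over
from the patch, and none of this was posted in the thread:

	/*
	 * Sketch only: keep using nodes that are offline now but were
	 * onlined before (their pgdat still exists), and divert the
	 * allocation only when no pgdat was ever set up for the node.
	 */
	if (!NODE_DATA(nid))
		target_nid = numa_mem_id();

	new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, target_nid);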

AFAICT, we have not reached agreement on how to fix it yet. I have seen
at least 3 proposals so far:

1. From Michal, allocate node data for all possible nodes (a rough
sketch of the idea follows this list).
https://lore.kernel.org/all/Ya89aqij6nMwJrIZ@dhcp22.suse.cz/T/#u

2. What this patch does. Proposed originally from
https://lore.kernel.org/all/20211108202325.20304-1-amakhalov@vmware.com/T/#u

3. From David, fix in node_zonelist().
https://lore.kernel.org/all/51c65635-1dae-6ba4-daf9-db9df0ec35d8@redhat.com/T/#u
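
Roughly, option 1 amounts to making NODE_DATA() valid for every possible
node at boot. A loose sketch of the idea (the boot-time hook point, the
memblock_alloc() usage and the node_data[] assignment are assumptions
about an x86-like setup, not Michal's actual series):

	int nid;

	/*
	 * Give every possible-but-offline node a zeroed pgdat during
	 * early boot so NODE_DATA(nid) is never NULL.
	 */
	for_each_node(nid) {			/* all possible nodes */
		pg_data_t *pgdat;

		if (node_online(nid))
			continue;

		pgdat = memblock_alloc(sizeof(*pgdat), SMP_CACHE_BYTES);
		if (!pgdat)
			panic("Cannot allocate pgdat for node %d\n", nid);

		pgdat->node_id = nid;
		node_data[nid] = pgdat;		/* backs NODE_DATA(nid) on x86 */
	}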

>
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -222,13 +222,16 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> >       int size = map_size + defer_size;
> >
> >       for_each_node(nid) {
> > +             int tmp = nid;
>
> Not `tmp', please.  Better to use an identifier which explains the
> variable's use.  target_nid?
>
> And a newline after defining locals, please.
>
> >               pn = memcg->nodeinfo[nid];
> >               old = shrinker_info_protected(memcg, nid);
> >               /* Not yet online memcg */
> >               if (!old)
> >                       return 0;
> >
> > -             new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
> > +             if(!node_online(nid))
>
> s/if(/if (/
>
> > +                     tmp = numa_mem_id();
> > +             new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, tmp);
> >               if (!new)
> >                       return -ENOMEM;
> >
>
> And a code comment fully explaining what's going on here?
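
For completeness, folding the review comments above back into the hunk
would presumably give something like the following (a sketch of a possible
respin, not a posted v3; the comment wording is a guess based on the
changelog):

	for_each_node(nid) {
		int target_nid = nid;

		pn = memcg->nodeinfo[nid];
		old = shrinker_info_protected(memcg, nid);
		/* Not yet online memcg */
		if (!old)
			return 0;

		/*
		 * Allocating on an offlined node has been seen to panic
		 * (see the changelog), so redirect the allocation to the
		 * current CPU's nearest node with memory in that case.
		 */
		if (!node_online(nid))
			target_nid = numa_mem_id();

		new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, target_nid);
		if (!new)
			return -ENOMEM;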
