lists.openwall.net - Open Source and information security mailing list archives
Date:   Thu, 3 Dec 2020 10:15:41 -0500
From:   Pavel Tatashin <pasha.tatashin@...een.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Oscar Salvador <osalvador@...e.de>,
        Dan Williams <dan.j.williams@...el.com>,
        Sasha Levin <sashal@...nel.org>,
        Tyler Hicks <tyhicks@...ux.microsoft.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>, mike.kravetz@...cle.com,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>, Jason Gunthorpe <jgg@...pe.ca>,
        Peter Zijlstra <peterz@...radead.org>,
        Mel Gorman <mgorman@...e.de>,
        Matthew Wilcox <willy@...radead.org>,
        David Rientjes <rientjes@...gle.com>,
        John Hubbard <jhubbard@...dia.com>
Subject: Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations

On Thu, Dec 3, 2020 at 4:17 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Wed 02-12-20 00:23:29, Pavel Tatashin wrote:
> [...]
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 611799c72da5..7a6d86d0bc5f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3766,20 +3766,25 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
> >       return alloc_flags;
> >  }
> >
> > -static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
> > -                                     unsigned int alloc_flags)
> > +static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
> > +                                        unsigned int alloc_flags)
> >  {
> >  #ifdef CONFIG_CMA
> > -     unsigned int pflags = current->flags;
> > -
> > -     if (!(pflags & PF_MEMALLOC_NOMOVABLE) &&
> > -         gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> > +     if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> >               alloc_flags |= ALLOC_CMA;
> > -
> >  #endif
> >       return alloc_flags;
> >  }
> >
> > +static inline gfp_t current_gfp_checkmovable(gfp_t gfp_mask)
> > +{
> > +     unsigned int pflags = current->flags;
> > +
> > +     if ((pflags & PF_MEMALLOC_NOMOVABLE))
> > +             return gfp_mask & ~__GFP_MOVABLE;
> > +     return gfp_mask;
> > +}
> > +
>
> It sucks that we have to control both ALLOC and gfp flags. But wouldn't
> it be simpler and more straightforward to keep current_alloc_flags as is
> (module PF rename) and hook the gfp mask evaluation into current_gfp_context
> and move it up before the first allocation attempt?

We could do that, but perhaps as a separate patch? I am worried about
hidden implications of adding the extra scope flags (GFP_NOIO|GFP_NOFS)
to the fast path. Also, current_gfp_context() is used elsewhere, and in
some places removing __GFP_MOVABLE from gfp_mask means we would also
need to change other things. For example [1], in try_to_free_pages() we
call current_gfp_context(gfp_mask), which can reduce the maximum zone
idx, yet we simply set reclaim_idx = gfp_zone(gfp_mask), not a value
derived from the newly determined gfp_mask.

[1] https://soleen.com/source/xref/linux/mm/vmscan.c?r=2da9f630#3239

> All scope flags
> should be applicable to the hot path as well. It would add few cycles to
> there but the question is whether that would be noticeable over just
> handling PF_MEMALLOC_NOMOVABLE on its own. The cache line would be
> pulled in anyway.

Let's try it in a separate patch? I will add it in the next version of
this series.

Thank you,
Pasha
