Date:   Mon, 22 Feb 2021 12:42:46 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Chuck Lever <chuck.lever@...cle.com>, Mel Gorman <mgorman@...e.de>,
        Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Jakub Kicinski <kuba@...nel.org>, brouer@...hat.com,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: alloc_pages_bulk()

On Mon, 22 Feb 2021 09:42:56 +0000
Mel Gorman <mgorman@...hsingularity.net> wrote:

> On Mon, Feb 15, 2021 at 05:10:38PM +0100, Jesper Dangaard Brouer wrote:
> > 
> > On Mon, 15 Feb 2021 12:00:56 +0000
> > Mel Gorman <mgorman@...hsingularity.net> wrote:
> >   
> > > On Thu, Feb 11, 2021 at 01:26:28PM +0100, Jesper Dangaard Brouer wrote:  
> > [...]  
> > >   
> > > > I also suggest the API can return fewer pages than requested, because I
> > > > want to "exit"/return if it needs to go into an expensive code path
> > > > (like the buddy allocator or compaction).  I'm assuming we have a flag to
> > > > give us this behavior (via gfp_flags or alloc_flags)?
> > > >     
> > > 
> > > The API returns the number of pages placed on a list, so policies
> > > around how aggressively it should allocate the requested number of
> > > pages could be adjusted without changing the API.  Passing in policy
> > > requests via gfp_flags may be problematic as most (all?) bits are
> > > already used.  
> > 
> > Well, I was just thinking that I would use GFP_ATOMIC instead of
> > GFP_KERNEL to "communicate" that I don't want this call to take too
> > long (like sleeping).  I'm not requesting any fancy policy :-)
> >   
> 
> The NFS use case requires opposite semantics
> -- it really needs those allocations to succeed
> https://lore.kernel.org/r/161340498400.7780.962495219428962117.stgit@klimt.1015granger.net.

Sorry, but that is not how I understand the code.

The code is doing exactly what I'm requesting.  If alloc_pages_bulk()
doesn't return the expected number of pages, it checks whether other
tasks need to run.  The old code did schedule_timeout(msecs_to_jiffies(500)),
while Chuck's patch changes this to call cond_resched().  Thus, it tries
to avoid blocking the CPU for too long (when allocating many pages).

And the nfsd code seems to handle being interrupted (returning -EINTR)
via signal_pending(current).  Thus, the nfsd code appears able to cope
with the page allocations failing.
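
To make this concrete, below is a rough sketch of the pattern as I read
it.  This is not Chuck's actual patch: fill_pages_sketch() is a made-up
helper name, and I am guessing at the alloc_pages_bulk() signature
(assumed here to return the number of pages it put on @list).

/* Sketch only -- the retry/yield pattern described above */
static int fill_pages_sketch(unsigned int needed, struct list_head *list)
{
        unsigned int filled = 0;

        while (filled < needed) {
                /* Assumed to return how many pages were put on @list */
                filled += alloc_pages_bulk(GFP_KERNEL, needed - filled, list);

                if (filled < needed) {
                        /* Came up short: let others run, instead of the
                         * old schedule_timeout(msecs_to_jiffies(500)) */
                        if (signal_pending(current))
                                return -EINTR;  /* caller copes with this */
                        cond_resched();
                }
        }
        return 0;
}

The point being: the caller already deals with partial results and with
-EINTR, so "can return fewer pages" semantics fit this user too.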


> I've asked what code it's based on as it's not 5.11 and I'll iron that
> out first.
>
> Then it might be clearer what the "can fail" semantics should look like.
> I think it would be best to have pairs of patches where the first patch
> adjusts the semantics of the bulk allocator and the second adds a user.
> That will limit the amount of code carried in the implementation.
> When the initial users are in place then the implementation can be
> optimised as the optimisations will require significant refactoring and
> I do not want to refactor multiple times.

I guess I should try to code up the usage in page_pool.

What is the latest patch for adding alloc_pages_bulk()?

The nfsd code (svc_alloc_arg) is called in a context where it can
sleep, and thus uses GFP_KERNEL.  In most cases the page_pool will be
called with GFP_ATOMIC.  I don't think I/page_pool can retry the call
like Chuck does, as I cannot (re)schedule other tasks to run.
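
Something along these lines is what I imagine for page_pool (again only
a sketch: pp_refill_sketch() is a made-up helper, the alloc_pages_bulk()
signature is assumed, and I'm assuming the existing pool->alloc.cache
array is the destination):

/* Sketch only -- single bulk attempt, no retry, atomic context */
static unsigned int pp_refill_sketch(struct page_pool *pool,
                                     unsigned int want, gfp_t gfp)
{
        LIST_HEAD(list);
        struct page *page, *next;
        unsigned int got;

        /* Typically gfp == GFP_ATOMIC; no retry and no cond_resched(),
         * as we may run in softirq/atomic context */
        got = alloc_pages_bulk(gfp, want, &list);

        list_for_each_entry_safe(page, next, &list, lru) {
                list_del(&page->lru);
                pool->alloc.cache[pool->alloc.count++] = page;
        }
        return got;     /* caller must cope with got < want */
}

So for page_pool the "can return fewer pages" behavior is exactly what
is wanted: take whatever the fast path can give and fall back to single
page allocations for the rest.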

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
