Date:   Fri, 21 Dec 2018 15:42:17 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     "Jonathan Lemon" <jonathan.lemon@...il.com>
Cc:     netdev@...r.kernel.org, brouer@...hat.com
Subject: Re: [PATCH net-next] net: Don't return pfmemalloc pages to the page pool.


On Thu, 20 Dec 2018 14:11:35 -0800 "Jonathan Lemon" <jonathan.lemon@...il.com> wrote:
> On 20 Dec 2018, at 5:03, Jesper Dangaard Brouer wrote:
> 
[...]
> > I don't like adding this in the hot-path.  Instead we could move this
> > to the page alloc slow-path, and reject allocating pages with
> > pfmemalloc in the first place.
> 
> No real objection to that - but then why bother with pfmemalloc?  If the
> driver can't obtain pages for emergency use, then they might as well
> not exist.

I've changed my mind.  There is an interesting opportunity in allowing
pfmemalloc pages to be used by the driver.  (So, I'm saying I'm okay
with adding this to the hot-path, and it hopefully doesn't affect
performance (too much), as page_is_pfmemalloc() reads from the same
cache-line.)
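
The check on the recycle path could look something like this (a sketch
only: the function name below is made up, and the actual patch may
place the check differently):

  /* pfmemalloc pages come from the emergency reserves; caching them
   * in the pool would keep memory away from reclaim, so release them
   * back to the page allocator instead of recycling.
   */
  static bool page_pool_page_recyclable(struct page *page)
  {
  	return !page_is_pfmemalloc(page);
  }

  /* ...and __page_pool_put_page() would bail out to the release path
   * whenever this returns false.
   */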

The opportunity is that XDP can handle/operate at wire speed.  We could
allow XDP to get this info (simply via a helper call, so we don't affect
users not using this).  When seeing PFMEMALLOC, which indicates a bad
situation is about to occur, we can react at an earlier stage (spending
fewer cycles on reacting).
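
Kernel-side, wiring up such a helper could look roughly like this
(entirely hypothetical: no such helper exists upstream, and the sketch
assumes xdp->data can be mapped back to its backing page via
virt_to_page()):

  #include <linux/filter.h>	/* BPF_CALL_1() */
  #include <linux/mm.h>		/* page_is_pfmemalloc() */

  /* Hypothetical BPF helper: report whether the frame's backing page
   * was allocated from the pfmemalloc emergency reserves.
   */
  BPF_CALL_1(bpf_xdp_frame_is_pfmemalloc, struct xdp_buff *, xdp)
  {
  	return page_is_pfmemalloc(virt_to_page(xdp->data));
  }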
One idea is to reduce the size of the XDP frame and use XDP_TX to send
the frame back to the sender as a congestion/drop notification, which
informs the sender to slow down.  If this is incast happening within the
same data-center, then the XDP_TX feedback can reach the sender really
fast.  One example of such an approach: https://youtu.be/BO0QhaxBRr0
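
On the BPF side, an XDP program could implement the send-back along
these lines (again a sketch: bpf_xdp_frame_is_pfmemalloc() is the
hypothetical helper from above with a made-up helper ID, while
bpf_xdp_adjust_tail() is a real helper that can shrink a frame with a
negative delta):

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include "bpf_helpers.h"	/* SEC(), as in samples/bpf */

  /* Hypothetical helper declaration; ID 999 is a placeholder. */
  static int (*bpf_xdp_frame_is_pfmemalloc)(struct xdp_md *ctx) =
  	(void *) 999;

  SEC("xdp")
  int xdp_congestion_feedback(struct xdp_md *ctx)
  {
  	void *data     = (void *)(long)ctx->data;
  	void *data_end = (void *)(long)ctx->data_end;
  	struct ethhdr *eth = data;
  	unsigned char tmp[ETH_ALEN];
  	int len = data_end - data;

  	if (data + sizeof(*eth) > data_end)
  		return XDP_DROP;

  	/* Normal path: no memory pressure, let the stack see it. */
  	if (!bpf_xdp_frame_is_pfmemalloc(ctx))
  		return XDP_PASS;

  	/* Bounce a shrunken frame back to the sender as a congestion
  	 * notification: swap src/dst MAC...
  	 */
  	__builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
  	__builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
  	__builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

  	/* ...and trim the payload down to just the Ethernet header
  	 * before transmitting it back out the same port.
  	 */
  	if (len > (int)sizeof(*eth))
  		bpf_xdp_adjust_tail(ctx, -(len - (int)sizeof(*eth)));

  	return XDP_TX;
  }

  char _license[] SEC("license") = "GPL";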

This is one example of how XDP allows us to do things that were not
possible before...
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
