Message-ID: <1069540.1746202908@warthog.procyon.org.uk>
Date: Fri, 02 May 2025 17:21:48 +0100
From: David Howells <dhowells@...hat.com>
To: Andrew Lunn <andrew@...n.ch>
Cc: dhowells@...hat.com, Eric Dumazet <edumazet@...gle.com>,
    "David S. Miller" <davem@...emloft.net>,
    Jakub Kicinski <kuba@...nel.org>,
    David Hildenbrand <david@...hat.com>,
    John Hubbard <jhubbard@...dia.com>,
    Christoph Hellwig <hch@...radead.org>, willy@...radead.org,
    netdev@...r.kernel.org, linux-mm@...ck.org
Subject: Reorganising how the networking layer handles memory

Okay, perhaps I should start at the beginning :-).

There are a number of things that are going to mandate an overhaul of how the
networking layer handles memory:

 (1) The sk_buff code assumes it can take refs on pages it is given, but the
     page ref counter is going to go away in the relatively near term.

     Indeed, you're already not allowed to take a ref on, say, slab memory,
     because the page ref doesn't control the lifetime of the object.

     Even pages are going to kind of go away.  Willy haz planz...

 (2) sendmsg(MSG_ZEROCOPY) suffers from the O_DIRECT vs fork() bug because it
     doesn't use page pinning.  It needs to use the GUP routines (see the
     pinning sketch after this list).

 (3) sendmsg(MSG_SPLICE_PAGES) isn't entirely satisfactory because it can't be
     used with certain memory types (e.g. slab).  It takes a ref on whatever
     it is given - which is wrong for memory that should be pinned instead.

 (4) iov_iter extraction will probably change to dispensing {physaddr,len}
     tuples rather than {page,off,len} tuples.  The socket layer won't then
     see pages at all.

 (5) Memory segments splice()'d into a socket may have who-knows-what weird
     lifetime requirements.
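
To make (2) concrete, here's a minimal sketch of the pin-don't-ref discipline
the zerocopy path would need to adopt.  The zc_*() helpers are made up for
illustration; pin_user_pages_fast() and unpin_user_pages() are the real GUP
API:

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * FOLL_PIN (implied by the pin_user_pages*() API) is what makes the
 * buffer safe against fork()/CoW; plain get_page() refs are not.
 */
static int zc_pin_source(unsigned long uaddr, size_t len,
                         struct page ***pagesp)
{
        unsigned int nr = DIV_ROUND_UP(offset_in_page(uaddr) + len,
                                       PAGE_SIZE);
        struct page **pages;
        int pinned;

        pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* gup_flags == 0: the buffer is only read for transmission.  A
         * short pin just means the caller sends less and pins again. */
        pinned = pin_user_pages_fast(uaddr & PAGE_MASK, nr, 0, pages);
        if (pinned < 0)
                kfree(pages);
        else
                *pagesp = pages;
        return pinned;
}

static void zc_unpin_source(struct page **pages, int nr)
{
        /* Pairs with FOLL_PIN; put_page() would be wrong here. */
        unpin_user_pages(pages, nr);
        kfree(pages);
}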

So after discussions at LSF/MM, what I'm proposing is this:

 (1) If we want to use zerocopy, we (the kernel) have to pass a cleanup
     function to sendmsg() along with the data.  If no cleanup function is
     given, the data will just be copied.

 (2) For each message sent with sendmsg, the cleanup function is called
     progressively as parts of the data are completed, so that big messages
     can be handled without waiting for the whole transmission to finish.

 (3) We also pass an optional 'refill' function to sendmsg.  As data is sent,
     the code that extracts the data will call this to pin more user buffers
     (we don't necessarily want to pin everything up front).  The refill
     function is permitted to sleep, to allow the amount of pinned memory to
     subside.  (The rough shape of these hooks is sketched after this list.)

 (4) We move a lot of the zerocopy wrangling code out of the basement of the
     networking code and put it at the system call level, above the call to
     ->sendmsg(), and the basement code then calls the appropriate functions
     to extract, refill and clean up.  It may be usable in other contexts too
     - DIO to regular files, for example.

 (5) The SO_EE_ORIGIN_ZEROCOPY completion notifications are then generated by
     the cleanup function.

 (6) The sk_buff struct does not retain *any* refs/pins on memory fragments it
     refers to.  This is done for it by the zerocopy layer.
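
To give that a concrete shape, something like this, say (all the names here -
iov_manager, iov_manager_ops, the msghdr field - are made up for illustration,
not from an actual patch):

#include <linux/refcount.h>

struct iov_manager;

struct iov_manager_ops {
        /*
         * (2): called progressively as byte ranges of the message
         * complete transmission; for userspace callers this is where
         * SO_EE_ORIGIN_ZEROCOPY notifications would be generated.
         */
        void (*cleanup)(struct iov_manager *mgr, size_t from, size_t to);

        /*
         * (3): optional; called as data is extracted, to pin more user
         * buffers.  Allowed to sleep so pinned memory can subside.
         */
        int (*refill)(struct iov_manager *mgr, size_t bytes_needed);
};

struct iov_manager {
        refcount_t                      ref;
        const struct iov_manager_ops    *ops;
        size_t                          completed;      /* completion state */
};

/*
 * (1): the manager would ride down with the message, e.g. as a field in
 * struct msghdr:
 *
 *      struct iov_manager *msg_mgr;    // NULL => just copy the data
 */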

This will allow us to kill three birds with one stone:

 (A) It will fix the issues with zerocopy transmission mentioned above (DIO vs
     fork, pin vs ref, pages without refcounts).  Whoever calls sendmsg() is
     then responsible for maintaining the lifetime of the memory by whatever
     means necessary.

 (B) Kernel drivers (e.g. network filesystems) can then use MSG_ZEROCOPY
     (MSG_SPLICE_PAGES can be discarded).  They can create their own message,
     cobbling it together out of kmalloc'd memory and arrays of pages, safe in
     the knowledge that the network stack will treat it only as an array of
     memory fragments.

     They would supply their own cleanup function to do the appropriate folio
     putting and would not need a "refill" function.  The extraction can be
     handled by standard iov_iter code.

     This would allow a network filesystem to transmit a complete RPC message
     with a single sendmsg() call, avoiding the need to cork the socket (see
     the sketch after this list).

 (C) Make it easier to provide alternative userspace notification mechanisms
     to SO_EE_ORIGIN_ZEROCOPY.  Maybe by allowing a "cookie" to be passed in
     the control message that can be passed back by some other mechanism
     (e.g. recvmsg).  Or by providing a user address that can be altered and a
     futex triggered on it.
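
As a sketch of what (B) might look like from the filesystem's side (reusing
the made-up iov_manager from above; bvec_set_virt(), bvec_set_page() and
iov_iter_bvec() are real APIs, the msg_mgr hookup is the invented bit):

#include <linux/bvec.h>
#include <linux/net.h>
#include <linux/slab.h>
#include <linux/uio.h>

static int xmit_rpc(struct socket *sock, struct iov_manager *mgr,
                    void *hdr, size_t hdrlen,
                    struct page **pages, unsigned int nr_pages,
                    size_t datalen)
{
        struct msghdr msg = { .msg_flags = MSG_ZEROCOPY };
        struct bio_vec *bv;
        unsigned int i;

        bv = kcalloc(nr_pages + 1, sizeof(*bv), GFP_KERNEL);
        if (!bv)
                return -ENOMEM;

        /*
         * kmalloc'd header and data pages side by side: under this
         * scheme the stack treats both as bare memory fragments and
         * takes no refs, so slab memory is fine here.
         */
        bvec_set_virt(&bv[0], hdr, hdrlen);
        for (i = 0; i < nr_pages; i++)
                bvec_set_page(&bv[i + 1], pages[i], PAGE_SIZE, 0);

        iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bv, nr_pages + 1,
                      hdrlen + datalen);

        /* msg.msg_mgr = mgr;  <-- the invented msghdr field; its
         * ->cleanup() would put the folios and free bv when done */

        return sock_sendmsg(sock, &msg);
}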

There's potentially a fourth bird too, but I'm not sure how practical it is:

 (D) What if TCP and UDP sockets, say, *only* do zerocopy, with the syscall
     layer doing the buffering transparently to hide that from the user?
     That could massively simplify the lower layers and perhaps make the
     buffering more efficient.

     For instance, the data could be organised by the top layer into (large)
     pages and then the protocol would divide that up.  Smaller chunks that
     need to go immediately could be placed in kmalloc'd buffers rather than
     using a page frag allocator.

     There are some downsides/difficulties too.  Firstly, it would probably
     render checksum-whilst-copying impossible (though being able to use CPU
     copy acceleration might make up for that, as might checksum offload).

     Secondly, it would mean that sk_buffs would have at least two fragments -
     header and data - which might be impossible for some NICs.

     Thirdly, some protocols just want to copy the data into their own skbuffs
     whatever.

There are also some issues with this proposal:

 (1) AF_ALG.  This does its own thing, including direct I/O without
     MSG_ZEROCOPY being set.  It doesn't actually use sk_buffs.  Really, it's
     not a network protocol in the normal sense and might have been better
     implemented as, say, a bunch of functions in io_uring.

 (2) Packet crypto.  Some protocols might want to do encryption from the
     source buffers into the skbuff and this would amount to a duplicate copy.

     This might be made more complicated by things like the TLS upper level
     protocol on TCP where we might be able to offload the crypto to the NIC,
     but might have to do it ourselves.

 (3) Is it possible to have a mixture of zerocopy and non-zerocopy pieces in
     the same sk_buff?  If there's a mixture, it would be possible to deal
     with the non-zerocopy bit by allocating a zerocopy record and setting
     the cleanup function just to free it (see the sketch after this list).
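
For (3), the fallback might be as simple as this (again reusing the made-up
iov_manager type, and assuming the copied piece is covered by a single cleanup
call for its whole range):

#include <linux/slab.h>

/*
 * Cleanup for a piece that was copied rather than pinned: there's
 * nothing to unpin, so just drop the record once its range completes.
 */
static void copied_frag_cleanup(struct iov_manager *mgr,
                                size_t from, size_t to)
{
        kfree(mgr);
}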

Implementation notes:

 (1) What I'm thinking is that there will be an 'iov_manager' struct that
     manages a single call to sendmsg().  This will be refcounted and carry
     the completion state (kind of like ubuf_info) and the cleanup function
     pointer.

 (2) The upper layer will wrap iov_manager in its own thing (kind of like
     ubuf_info_msgzc).

 (3) For sys_sendmsg(), sys_sendmmsg() and io_uring, use a 'syscall-level
     manager' that will progressively pin and unpin userspace buffers.

     (a) This will keep a list of the memory fragments it currently has pinned
         in a rolling buffer.  It has to be able to find them to unpin them
         and it has to allow for the userspace addresses having been remapped
         or unmapped.

     (b) As its refill function gets called, the manager will pin more pages
         and add them to the producer end of the buffer.

     (c) These can then be extracted by the protocol into skbuffs.

     (d) As its cleanup function gets called, it will advance the consumer end
         and unpin/discard memory ranges that are consumed.

     I'm not sure how much drag this might add to performance, though, so it
     will need to be tried and benchmarked.  (A sketch of this rolling-buffer
     manager follows these notes.)

 (4) Possibly, the list of fragments can be made directly available through an
     iov_iter type and a subset attached directly to a sk_buff.

 (5) SOCK_STREAM sockets will keep an ordered list of manager structs, each
     tagged with the byte transmission sequence range for that struct.  The
     socket will keep a transmission completion sequence counter and as the
     counter advances through the manager list, their cleanup functions will
     be called and, ultimately, they'll be detached and put.

 (6) SOCK_DGRAM sockets will keep a list of manager structs on the sk_buff as
     well as on the socket.  The problem is that they may complete out of
     order, but SO_EE_ORIGIN_ZEROCOPY works by byte position.  Each time a
     sk_buff completes, all the managers attached to it are marked complete,
     but complete managers can only get cleaned up when they reach the front
     of the queue.

 (7) Kernel services will wrap iov_manager in their own wrapper and will pass
     down an iov_iter that describes their message in its entirety.
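
Pulling notes (1)-(3) together, here's a sketch of the syscall-level manager.
Everything here is made up bar pin_user_pages_fast() and unpin_user_page();
locking and error handling are elided, and the extraction step (3c) would be
the protocol pulling fragments out of the ring:

#include <linux/mm.h>
#include <linux/refcount.h>
#include <linux/slab.h>

#define UMGR_RING_SIZE  64      /* arbitrary for the sketch */

/*
 * (3a): the ring holds {page,off,len}, not user addresses, so the
 * fragments can still be found and unpinned even if userspace has since
 * remapped or unmapped the range.
 */
struct umsg_frag {
        struct page     *page;
        unsigned int    off;
        unsigned int    len;
};

struct umsg_manager {
        refcount_t       ref;           /* (1): refcounted, like ubuf_info */
        unsigned long    uaddr;         /* next userspace byte to pin */
        size_t           remaining;     /* bytes not yet pinned */
        unsigned int     head, tail;    /* producer/consumer positions */
        struct umsg_frag ring[UMGR_RING_SIZE];
};

/*
 * (3b): the refill function - pin more pages onto the producer end.
 * May sleep, which lets the amount of pinned memory subside first.
 */
static int umsg_refill(struct umsg_manager *mgr, unsigned int want)
{
        while (want-- && mgr->remaining &&
               mgr->head - mgr->tail < UMGR_RING_SIZE) {
                struct umsg_frag *f = &mgr->ring[mgr->head % UMGR_RING_SIZE];
                unsigned int off = offset_in_page(mgr->uaddr);
                unsigned int len = min_t(size_t, PAGE_SIZE - off,
                                         mgr->remaining);
                int ret;

                ret = pin_user_pages_fast(mgr->uaddr & PAGE_MASK, 1, 0,
                                          &f->page);
                if (ret < 0)
                        return ret;
                f->off = off;
                f->len = len;
                mgr->uaddr += len;
                mgr->remaining -= len;
                mgr->head++;
        }
        return 0;
}

/*
 * (3d): the cleanup function - called progressively as transmission
 * completes; advance the consumer end and unpin what's been consumed.
 * The socket detaches and puts the manager itself once the whole
 * message is done (note (5)).
 */
static void umsg_cleanup(struct umsg_manager *mgr, unsigned int nr_done)
{
        while (nr_done-- && mgr->tail != mgr->head) {
                struct umsg_frag *f = &mgr->ring[mgr->tail % UMGR_RING_SIZE];

                unpin_user_page(f->page);
                mgr->tail++;
        }
}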

Finally, this doesn't cover recvmsg() zerocopy, which might also have some of
the same issues.

David

