Message-Id: <20230406094245.3633290-14-dhowells@redhat.com>
Date: Thu, 6 Apr 2023 10:42:39 +0100
From: David Howells <dhowells@...hat.com>
To: netdev@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>,
	Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	Matthew Wilcox <willy@...radead.org>, Al Viro <viro@...iv.linux.org.uk>,
	Christoph Hellwig <hch@...radead.org>, Jens Axboe <axboe@...nel.dk>,
	Jeff Layton <jlayton@...nel.org>, Christian Brauner <brauner@...nel.org>,
	Chuck Lever III <chuck.lever@...cle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, David Ahern <dsahern@...nel.org>
Subject: [PATCH net-next v5 13/19] ip, udp: Support MSG_SPLICE_PAGES

Make IP/UDP sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced from the source iterator.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells <dhowells@...hat.com>
cc: Willem de Bruijn <willemdebruijn.kernel@...il.com>
cc: David Ahern <dsahern@...nel.org>
cc: "David S. Miller" <davem@...emloft.net>
cc: Eric Dumazet <edumazet@...gle.com>
cc: Jakub Kicinski <kuba@...nel.org>
cc: Paolo Abeni <pabeni@...hat.com>
cc: Jens Axboe <axboe@...nel.dk>
cc: Matthew Wilcox <willy@...radead.org>
cc: netdev@...r.kernel.org
---
 net/ipv4/ip_output.c | 47 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 22a90a9392eb..c6c318eb3d37 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -957,6 +957,41 @@ csum_page(struct page *page, int offset, int copy)
 	return csum;
 }
 
+/*
+ * Add (or copy) data pages for MSG_SPLICE_PAGES.
+ */
+static int __ip_splice_pages(struct sock *sk, struct sk_buff *skb,
+			     void *from, int *pcopy)
+{
+	struct msghdr *msg = from;
+	struct page *page = NULL, **pages = &page;
+	ssize_t copy = *pcopy;
+	size_t off;
+	int err;
+
+	copy = iov_iter_extract_pages(&msg->msg_iter, &pages, copy, 1, 0, &off);
+	if (copy <= 0)
+		return copy ?: -EIO;
+
+	err = skb_append_pagefrags(skb, page, off, copy);
+	if (err < 0) {
+		iov_iter_revert(&msg->msg_iter, copy);
+		return err;
+	}
+
+	if (skb->ip_summed == CHECKSUM_NONE) {
+		__wsum csum;
+
+		csum = csum_page(page, off, copy);
+		skb->csum = csum_block_add(skb->csum, csum, skb->len);
+	}
+
+	skb_len_add(skb, copy);
+	refcount_add(copy, &sk->sk_wmem_alloc);
+	*pcopy = copy;
+	return 0;
+}
+
 static int __ip_append_data(struct sock *sk,
 			    struct flowi4 *fl4,
 			    struct sk_buff_head *queue,
@@ -1048,6 +1083,14 @@ static int __ip_append_data(struct sock *sk,
 				skb_zcopy_set(skb, uarg, &extra_uref);
 			}
 		}
+	} else if ((flags & MSG_SPLICE_PAGES) && length) {
+		if (inet->hdrincl)
+			return -EPERM;
+		if (rt->dst.dev->features & NETIF_F_SG)
+			/* We need an empty buffer to attach stuff to */
+			paged = true;
+		else
+			flags &= ~MSG_SPLICE_PAGES;
 	}
 
 	cork->length += length;
@@ -1207,6 +1250,10 @@ static int __ip_append_data(struct sock *sk,
 				err = -EFAULT;
 				goto error;
 			}
+		} else if (flags & MSG_SPLICE_PAGES) {
+			err = __ip_splice_pages(sk, skb, from, &copy);
+			if (err < 0)
+				goto error;
 		} else if (!zc) {
 			int i = skb_shinfo(skb)->nr_frags;
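
[Editorial note] As a usage illustration, and not part of the patch itself, here is a
minimal sketch of how an in-kernel caller might hand a single page to a UDP socket via
the flag this series introduces. The helper name and its parameters are hypothetical;
the caller is assumed to supply the socket, page, offset and length, and error handling
is elided.

#include <linux/bvec.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <net/sock.h>

static int example_udp_splice_page(struct socket *sock, struct page *page,
				   unsigned int offset, unsigned int len)
{
	struct bio_vec bvec;
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

	/* Describe the page fragment and wrap it in a BVEC source iterator. */
	bvec_set_page(&bvec, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);

	/*
	 * With this patch, udp_sendmsg() -> __ip_append_data() splices the
	 * page into the skb rather than copying it, provided the route's
	 * device advertises NETIF_F_SG; otherwise it falls back to copying.
	 */
	return sock_sendmsg(sock, &msg);
}

This is the same calling pattern that in-kernel users of ->sendpage() can move to,
since a BVEC-backed sendmsg() with MSG_SPLICE_PAGES can carry several page fragments
in one call.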