Message-Id: <20230405165339.3468808-8-dhowells@redhat.com>
Date: Wed, 5 Apr 2023 17:53:26 +0100
From: David Howells <dhowells@...hat.com>
To: netdev@...r.kernel.org
Cc: David Howells <dhowells@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Jens Axboe <axboe@...nel.dk>, Jeff Layton <jlayton@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Chuck Lever III <chuck.lever@...cle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH net-next v4 07/20] tcp: Make sendmsg(MSG_SPLICE_PAGES) copy unspliceable data
If sendmsg() with MSG_SPLICE_PAGES encounters a page that shouldn't be
spliced - a slab page, for instance, or one with a zero reference count -
make tcp_sendmsg() copy the data instead.
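
As an illustration, the fallback amounts to the following simplified
sketch (not the patch itself; splice_or_copy() is an invented name, but
sendpage_ok(), kmap_local_page() and the page_frag_memdup() helper added
earlier in this series are the interfaces actually used):

	/* Sketch: leave the page alone if it is safe to splice by
	 * reference; otherwise duplicate the data into a page fragment
	 * whose ref the caller must hand to the skb or drop.
	 */
	static int splice_or_copy(struct sock *sk, struct page **pagep,
				  size_t *offp, size_t len, bool *putp)
	{
		const void *p;
		void *q;

		if (sendpage_ok(*pagep))
			return 0;	/* Take a ref and splice directly. */

		/* Slab memory or a zero-refcount page: copy instead. */
		p = kmap_local_page(*pagep);
		q = page_frag_memdup(NULL, p + *offp, len,
				     sk->sk_allocation, ULONG_MAX);
		kunmap_local(p);
		if (!q)
			return -ENOMEM;

		*pagep = virt_to_page(q);
		*offp = offset_in_page(q);
		*putp = true;	/* Caller must release the copy's ref. */
		return 0;
	}
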
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Eric Dumazet <edumazet@...gle.com>
cc: "David S. Miller" <davem@...emloft.net>
cc: Jakub Kicinski <kuba@...nel.org>
cc: Paolo Abeni <pabeni@...hat.com>
cc: Jens Axboe <axboe@...nel.dk>
cc: Matthew Wilcox <willy@...radead.org>
cc: netdev@...r.kernel.org
---
net/ipv4/tcp.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)
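
For context, an in-kernel caller reaches this code by setting
MSG_SPLICE_PAGES and feeding its pages through a BVEC iterator, roughly
as in the sketch below (following the pattern used by later patches in
this series; the variables are illustrative).  If a supplied page fails
sendpage_ok(), this patch now copies the data rather than splicing the
page:

	struct bio_vec bv;
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };
	int ret;

	/* Describe the page to be spliced into the TCP stream. */
	bvec_set_page(&bv, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bv, 1, len);

	/* tcp_sendmsg_locked() splices the page by reference if
	 * sendpage_ok() passes and copies the data if it doesn't.
	 */
	ret = sock_sendmsg(sock, &msg);
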
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 510bacc7ce7b..238a8ad6527c 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1418,10 +1418,10 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				goto do_error;
 			copy = err;
 		} else if (zc == 2) {
-			/* Splice in data. */
+			/* Splice in data if we can; copy if we can't. */
 			struct page *page = NULL, **pages = &page;
 			size_t off = 0, part;
-			bool can_coalesce;
+			bool can_coalesce, put = false;
 			int i = skb_shinfo(skb)->nr_frags;
 
 			copy = iov_iter_extract_pages(&msg->msg_iter, &pages,
@@ -1448,12 +1448,34 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				goto wait_for_space;
 			copy = part;
 
+			if (!sendpage_ok(page)) {
+				const void *p = kmap_local_page(page);
+				void *q;
+
+				q = page_frag_memdup(NULL, p + off, copy,
+						     sk->sk_allocation, ULONG_MAX);
+				kunmap_local(p);
+				if (!q) {
+					iov_iter_revert(&msg->msg_iter, copy);
+					err = copy ?: -ENOMEM;
+					goto do_error;
+				}
+				page = virt_to_page(q);
+				off = offset_in_page(q);
+				put = true;
+				can_coalesce = false;
+			}
+
 			if (can_coalesce) {
 				skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
 			} else {
-				get_page(page);
+				if (!put)
+					get_page(page);
+				put = false;
 				skb_fill_page_desc_noacc(skb, i, page, off, copy);
 			}
+			if (put)
+				put_page(page);
 			page = NULL;
 
 			if (!(flags & MSG_NO_SHARED_FRAGS))