Message-ID: <SA0PR15MB3919B83A4591FE06F6572EA899919@SA0PR15MB3919.namprd15.prod.outlook.com>
Date: Thu, 6 Apr 2023 15:36:47 +0000
From: Bernard Metzler <BMT@...ich.ibm.com>
To: David Howells <dhowells@...hat.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: David Howells <dhowells@...hat.com>, "David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>, Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	Matthew Wilcox <willy@...radead.org>, Al Viro <viro@...iv.linux.org.uk>,
	Christoph Hellwig <hch@...radead.org>, Jens Axboe <axboe@...nel.dk>,
	Jeff Layton <jlayton@...nel.org>, Christian Brauner <brauner@...nel.org>,
	Chuck Lever III <chuck.lever@...cle.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, Jason Gunthorpe <jgg@...pe.ca>,
	Leon Romanovsky <leon@...nel.org>, Tom Talpey <tom@...pey.com>,
	"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: RE: [PATCH net-next v5 11/19] siw: Inline do_tcp_sendpages()

> -----Original Message-----
> From: David Howells <dhowells@...hat.com>
> Sent: Thursday, 6 April 2023 11:43
> To: netdev@...r.kernel.org
> Cc: David Howells <dhowells@...hat.com>; David S. Miller <davem@...emloft.net>;
> Eric Dumazet <edumazet@...gle.com>; Jakub Kicinski <kuba@...nel.org>;
> Paolo Abeni <pabeni@...hat.com>; Willem de Bruijn <willemdebruijn.kernel@...il.com>;
> Matthew Wilcox <willy@...radead.org>; Al Viro <viro@...iv.linux.org.uk>;
> Christoph Hellwig <hch@...radead.org>; Jens Axboe <axboe@...nel.dk>;
> Jeff Layton <jlayton@...nel.org>; Christian Brauner <brauner@...nel.org>;
> Chuck Lever III <chuck.lever@...cle.com>; Linus Torvalds <torvalds@...ux-foundation.org>;
> linux-fsdevel@...r.kernel.org; linux-kernel@...r.kernel.org; linux-mm@...ck.org;
> Bernard Metzler <BMT@...ich.ibm.com>; Jason Gunthorpe <jgg@...pe.ca>;
> Leon Romanovsky <leon@...nel.org>; Tom Talpey <tom@...pey.com>;
> linux-rdma@...r.kernel.org
> Subject: [EXTERNAL] [PATCH net-next v5 11/19] siw: Inline do_tcp_sendpages()
>
> do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
> so inline it, allowing do_tcp_sendpages() to be removed. This is part of
> replacing ->sendpage() with a call to sendmsg() with MSG_SPLICE_PAGES set.
>
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Bernard Metzler <bmt@...ich.ibm.com>
> cc: Jason Gunthorpe <jgg@...pe.ca>
> cc: Leon Romanovsky <leon@...nel.org>
> cc: Tom Talpey <tom@...pey.com>
> cc: "David S. Miller" <davem@...emloft.net>
> cc: Eric Dumazet <edumazet@...gle.com>
> cc: Jakub Kicinski <kuba@...nel.org>
> cc: Paolo Abeni <pabeni@...hat.com>
> cc: Jens Axboe <axboe@...nel.dk>
> cc: Matthew Wilcox <willy@...radead.org>
> cc: linux-rdma@...r.kernel.org
> cc: netdev@...r.kernel.org
> ---
>  drivers/infiniband/sw/siw/siw_qp_tx.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
> index 05052b49107f..fa5de40d85d5 100644
> --- a/drivers/infiniband/sw/siw/siw_qp_tx.c
> +++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
> @@ -313,7 +313,7 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
>  	}
>
>  	/*
> -	 * 0copy TCP transmit interface: Use do_tcp_sendpages.
> +	 * 0copy TCP transmit interface: Use MSG_SPLICE_PAGES.
>  	 *
>  	 * Using sendpage to push page by page appears to be less efficient
>  	 * than using sendmsg, even if data are copied.
> @@ -324,20 +324,27 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
>  static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
>  			     size_t size)
>  {
> +	struct bio_vec bvec;
> +	struct msghdr msg = {
> +		.msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST |
> +			      MSG_SPLICE_PAGES),
> +	};
>  	struct sock *sk = s->sk;
> -	int i = 0, rv = 0, sent = 0,
> -	    flags = MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST;
> +	int i = 0, rv = 0, sent = 0;
>
>  	while (size) {
>  		size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
>
>  		if (size + offset <= PAGE_SIZE)
> -			flags = MSG_MORE | MSG_DONTWAIT;
> +			msg.msg_flags = MSG_MORE | MSG_DONTWAIT;
>
>  		tcp_rate_check_app_limited(sk);
> +		bvec_set_page(&bvec, page[i], bytes, offset);
> +		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
> +
>  try_page_again:
>  		lock_sock(sk);
> -		rv = do_tcp_sendpages(sk, page[i], offset, bytes, flags);
> +		rv = tcp_sendmsg_locked(sk, &msg, size);
>  		release_sock(sk);
>
>  		if (rv > 0) {

lgtm.

Reviewed-by: Bernard Metzler <bmt@...ich.ibm.com>
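For readers who have not used the new interface yet, a minimal sketch of the
MSG_SPLICE_PAGES transmit pattern the patch switches to. This is illustrative
only and not part of the patch: the function name, flag choice, and error
handling below are made up for the example; the real siw logic (looping over
pages, MSG_MORE/MSG_SENDPAGE_NOTLAST handling, partial-send accounting) is in
the diff above.

/*
 * Illustrative only: queue one page fragment on a TCP socket via
 * tcp_sendmsg_locked() with MSG_SPLICE_PAGES, instead of the removed
 * do_tcp_sendpages().
 */
#include <linux/bvec.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <net/sock.h>
#include <net/tcp.h>

static int splice_page_example(struct sock *sk, struct page *page,
			       int offset, size_t len)
{
	struct bio_vec bvec;
	struct msghdr msg = {
		.msg_flags = MSG_DONTWAIT | MSG_SPLICE_PAGES,
	};
	int rv;

	/* Point the message iterator at the page fragment; no data copy here. */
	bvec_set_page(&bvec, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);

	lock_sock(sk);
	rv = tcp_sendmsg_locked(sk, &msg, len);	/* replaces do_tcp_sendpages() */
	release_sock(sk);

	return rv;	/* bytes queued on success, negative errno otherwise */
}

The bio_vec only references the caller's page; with MSG_SPLICE_PAGES the TCP
stack splices it into the socket where it can rather than copying, which is
roughly what do_tcp_sendpages() did internally before its removal.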