Message-ID: <aOb8-3C6y3wV9sIH@kernel.org>
Date: Wed, 8 Oct 2025 20:08:27 -0400
From: Mike Snitzer <snitzer@...nel.org>
To: NeilBrown <neilb@...mail.net>
Cc: Jeff Layton <jlayton@...nel.org>, Chuck Lever <chuck.lever@...cle.com>,
Olga Kornievskaia <okorniev@...hat.com>,
Dai Ngo <Dai.Ngo@...cle.com>, Tom Talpey <tom@...pey.com>,
Trond Myklebust <trondmy@...nel.org>,
Anna Schumaker <anna@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>,
David Howells <dhowells@...hat.com>,
Brandon Adams <brandona@...a.com>, linux-nfs@...r.kernel.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] sunrpc: add a slot to rqstp->rq_bvec for TCP
record marker
On Thu, Oct 09, 2025 at 08:51:25AM +1100, NeilBrown wrote:
> On Thu, 09 Oct 2025, Jeff Layton wrote:
> > We've seen some occurrences of messages like this in dmesg on some knfsd
> > servers:
> >
> > xdr_buf_to_bvec: bio_vec array overflow
> >
> > Usually followed by messages like this that indicate a short send (note
> > that this message is from an older kernel and the amount that it reports
> > attempting to send is short by 4 bytes):
> >
> > rpc-srv/tcp: nfsd: sent 1048155 when sending 1048152 bytes - shutting down socket
> >
> > svc_tcp_sendmsg() steals a slot in the rq_bvec array for the TCP record
> > marker. If the send is an unaligned READ call, though, there may not
> > be enough slots left in the rq_bvec array.
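> >
> > Roughly, the send path does this (a paraphrase of
> > net/sunrpc/svcsock.c, not the verbatim code; 'buf' stands in for
> > the small buffer holding the marker, and details vary by kernel
> > version):
> >
> >         /* rq_bvec[0] carries the 4-byte TCP record marker */
> >         memcpy(buf, &marker, sizeof(marker));
> >         bvec_set_virt(rqstp->rq_bvec, buf, sizeof(marker));
> >
> >         /* The reply payload starts at rq_bvec[1], so one fewer
> >          * slot than rq_maxpages is left for the xdr_buf.  An
> >          * unaligned READ can need all rq_maxpages of them, and
> >          * xdr_buf_to_bvec() then reports the overflow above.
> >          */
> >         count = xdr_buf_to_bvec(rqstp->rq_bvec + 1,
> >                                 rqstp->rq_maxpages - 1,
> >                                 &rqstp->rq_res);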
> >
> > Add a slot to the rq_bvec array, and fix up the array lengths in the
> > callers that care.
> >
> > Fixes: e18e157bb5c8 ("SUNRPC: Send RPC message on TCP with a single sock_sendmsg() call")
> > Tested-by: Brandon Adams <brandona@...a.com>
> > Signed-off-by: Jeff Layton <jlayton@...nel.org>
> > ---
> > fs/nfsd/vfs.c | 6 +++---
> > net/sunrpc/svc.c | 3 ++-
> > net/sunrpc/svcsock.c | 4 ++--
> > 3 files changed, 7 insertions(+), 6 deletions(-)
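> >
> > The shape of the fix, as a sketch rather than the actual hunks
> > (assuming the allocation happens in svc_rqst_alloc() in
> > net/sunrpc/svc.c; 'node' is the NUMA node argument there):
> >
> >         /* one extra bio_vec so the TCP record marker no longer
> >          * competes with the reply payload for slots
> >          */
> >         rqstp->rq_bvec = kcalloc_node(rqstp->rq_maxpages + 1,
> >                                       sizeof(struct bio_vec),
> >                                       GFP_KERNEL, node);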
>
> I can't say that I'm liking this patch.
>
> There are 11 places (in nfsd-testing recently) where
> rq_maxpages is used (as opposed to declared or assigned):
>
> 3 in nfsd/vfs.c
> 4 in sunrpc/svc.c
> 1 in sunrpc/svc_xprt.c
> 2 in sunrpc/svcsock.c
> 1 in xprtrdma/svc_rdma_rc.c
>
> Your patch changes six of those to add 1. I guess the others aren't
> "callers that care". It would help to have it clearly stated why, or
> why not, a caller might care.
>
> But also, what does "rq_maxpages" even mean now?
> The comment in svc.h still says "num of entries in rq_pages"
> which is certainly no longer the case.
> But if it was the case, we should have called it "rq_numpages"
> or similar.
> But maybe it wasn't meant to be the number of pages in the array;
> maybe it was meant to be the maximum number of pages in a request
> or a reply.....
> No - that is sv_max_mesg, to which we add 2 and 1.
> So I could ask "why not just add another 1 in svc_serv_maxpages()?"
> Would the callers that might not care be harmed if rq_maxpages were
> one larger than it is?
>
> It seems to me that rq_maxpages is rather confused and the bug you have
> found which requires this patch is some evidence to that confusion. We
> should fix the confusion, not just the bug.
>
> So, a simple question to cut through my waffle:
> Would this:
> - return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE) + 2 + 1;
> + return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE) + 2 + 1 + 1;
>
> fix the problem? If not, why not? If so, can we just do this, then
> look at renaming rq_maxpages to rq_numpages and audit all the uses
> (and maybe you have already audited...)?
Right, I recently wanted to do the same:
https://lore.kernel.org/linux-nfs/20250909233315.80318-2-snitzer@kernel.org/
Certainly cleaner and preferable to me.
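
Centralising it would look something like this (a sketch of
svc_serv_maxpages() in net/sunrpc/svc.c with Neil's extra +1; the
comment is my paraphrase, not the in-tree one):

        static unsigned long
        svc_serv_maxpages(const struct svc_serv *serv)
        {
                /* pages needed for sv_max_mesg, the existing
                 * + 2 + 1 slack, plus one more slot to cover the
                 * TCP record marker
                 */
                return DIV_ROUND_UP(serv->sv_max_mesg, PAGE_SIZE)
                        + 2 + 1 + 1;
        }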
Otherwise, the +1s sprinkled selectively are really prone to become a
problem for any new users of rq_maxpages.
Mike