Message-ID: <b6ba7275-ceab-4619-9e5b-a886daf34689@oracle.com>
Date: Wed, 11 Jun 2025 12:29:51 -0400
From: Chuck Lever <chuck.lever@...cle.com>
To: Sergey Bashirov <sergeybashirov@...il.com>
Cc: Christoph Hellwig <hch@...radead.org>, Jeff Layton <jlayton@...nel.org>,
NeilBrown <neil@...wn.name>, Olga Kornievskaia <okorniev@...hat.com>,
Dai Ngo <Dai.Ngo@...cle.com>, Tom Talpey <tom@...pey.com>,
linux-nfs@...r.kernel.org, linux-kernel@...r.kernel.org,
Konstantin Evtushenko <koevtushenko@...dex.com>
Subject: Re: [PATCH] nfsd: Use correct error code when decoding extents

On 6/11/25 12:24 PM, Sergey Bashirov wrote:
> I also have some doubts about this code:
> if (xdr_stream_decode_u64(&xdr, &bex.len))
>         return -NFS4ERR_BADXDR;
> if (bex.len & (block_size - 1))
>         return -NFS4ERR_BADXDR;
>
> The first error code is clear to me; it is purely a decoding failure.
> But shouldn't we return -NFS4ERR_INVAL in the second check? On one
> hand, we encountered an invalid value after successful decoding, but on
> the other hand, we stopped decoding the extent array, so one could
> argue that this is also a decoding error.

On a first read of Section 2.3 of RFC 5663, I don't see any mandated
alignment requirement for bex_length. IMO this is a case where the
implementation is deciding that a successfully decoded value is not
valid, so NFS4ERR_INVAL might be a better choice here.
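
For illustration, something like the sketch below would keep the two
cases separate (just a sketch, not the actual patch; the helper name
and calling convention are made up):

static int decode_extent_length(struct xdr_stream *xdr, u64 *len,
                                u32 block_size)
{
        /* The stream itself could not be decoded: a true XDR error. */
        if (xdr_stream_decode_u64(xdr, len) < 0)
                return -NFS4ERR_BADXDR;

        /* Decoding succeeded, but the server rejects the value. */
        if (*len & (block_size - 1))
                return -NFS4ERR_INVAL;

        return 0;
}

The alignment check fires only after the value has been decoded
successfully, so NFS4ERR_INVAL tells the client its XDR was fine but
the value itself was rejected.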
--
Chuck Lever