Message-ID: <20240801185321.GA2534812@thelio-3990X>
Date: Thu, 1 Aug 2024 11:53:21 -0700
From: Nathan Chancellor <nathan@...nel.org>
To: Simon Horman <horms@...nel.org>, David Howells <dhowells@...hat.com>,
Christian Brauner <christian@...uner.io>
Cc: Steve French <smfrench@...il.com>, Matthew Wilcox <willy@...radead.org>,
Jeff Layton <jlayton@...nel.org>,
Gao Xiang <hsiangkao@...ux.alibaba.com>,
Dominique Martinet <asmadeus@...ewreck.org>,
Marc Dionne <marc.dionne@...istor.com>,
Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>, Tom Talpey <tom@...pey.com>,
Eric Van Hensbergen <ericvh@...nel.org>,
Ilya Dryomov <idryomov@...il.com>, netfs@...ts.linux.dev,
linux-afs@...ts.infradead.org, linux-cifs@...r.kernel.org,
linux-nfs@...r.kernel.org, ceph-devel@...r.kernel.org,
v9fs@...ts.linux.dev, linux-erofs@...ts.ozlabs.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 18/24] netfs: Speed up buffered reading
On Wed, Jul 31, 2024 at 08:07:42PM +0100, Simon Horman wrote:
> On Mon, Jul 29, 2024 at 05:19:47PM +0100, David Howells wrote:
>
> ...
>
> > diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
>
> ...
>
> > +/*
> > + * Perform a read to the pagecache from a series of sources of different types,
> > + * slicing up the region to be read according to available cache blocks and
> > + * network rsize.
> > + */
> > +static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
> > +{
> > +	struct netfs_inode *ictx = netfs_inode(rreq->inode);
> > +	unsigned long long start = rreq->start;
> > +	ssize_t size = rreq->len;
> > +	int ret = 0;
> > +
> > +	atomic_inc(&rreq->nr_outstanding);
> > +
> > +	do {
> > +		struct netfs_io_subrequest *subreq;
> > +		enum netfs_io_source source = NETFS_DOWNLOAD_FROM_SERVER;
> > +		ssize_t slice;
> > +
> > +		subreq = netfs_alloc_subrequest(rreq);
> > +		if (!subreq) {
> > +			ret = -ENOMEM;
> > +			break;
> > +		}
> > +
> > +		subreq->start = start;
> > +		subreq->len = size;
> > +
> > +		atomic_inc(&rreq->nr_outstanding);
> > +		spin_lock_bh(&rreq->lock);
> > +		list_add_tail(&subreq->rreq_link, &rreq->subrequests);
> > +		subreq->prev_donated = rreq->prev_donated;
> > +		rreq->prev_donated = 0;
> > +		trace_netfs_sreq(subreq, netfs_sreq_trace_added);
> > +		spin_unlock_bh(&rreq->lock);
> > +
> > +		source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
> > +		subreq->source = source;
> > +		if (source == NETFS_DOWNLOAD_FROM_SERVER) {
> > +			unsigned long long zp = umin(ictx->zero_point, rreq->i_size);
> > +			size_t len = subreq->len;
> > +
> > +			if (subreq->start >= zp) {
> > +				subreq->source = source = NETFS_FILL_WITH_ZEROES;
> > +				goto fill_with_zeroes;
> > +			}
> > +
> > +			if (len > zp - subreq->start)
> > +				len = zp - subreq->start;
> > +			if (len == 0) {
> > +				pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx",
> > +				       rreq->debug_id, subreq->debug_index,
> > +				       subreq->len, size,
> > +				       subreq->start, ictx->zero_point, rreq->i_size);
> > +				break;
> > +			}
> > +			subreq->len = len;
> > +
> > +			netfs_stat(&netfs_n_rh_download);
> > +			if (rreq->netfs_ops->prepare_read) {
> > +				ret = rreq->netfs_ops->prepare_read(subreq);
> > +				if (ret < 0) {
> > +					atomic_dec(&rreq->nr_outstanding);
> > +					netfs_put_subrequest(subreq, false,
> > +							     netfs_sreq_trace_put_cancel);
> > +					break;
> > +				}
> > +				trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
> > +			}
> > +
> > +			slice = netfs_prepare_read_iterator(subreq);
> > +			if (slice < 0) {
> > +				atomic_dec(&rreq->nr_outstanding);
> > +				netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
> > +				ret = slice;
> > +				break;
> > +			}
> > +
> > +			rreq->netfs_ops->issue_read(subreq);
> > +			goto done;
> > +		}
> > +
> > +	fill_with_zeroes:
> > +		if (source == NETFS_FILL_WITH_ZEROES) {
> > +			subreq->source = NETFS_FILL_WITH_ZEROES;
> > +			trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
> > +			netfs_stat(&netfs_n_rh_zero);
> > +			slice = netfs_prepare_read_iterator(subreq);
> > +			__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
> > +			netfs_read_subreq_terminated(subreq, 0, false);
> > +			goto done;
> > +		}
> > +
> > +		if (source == NETFS_READ_FROM_CACHE) {
> > +			trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
> > +			slice = netfs_prepare_read_iterator(subreq);
> > +			netfs_read_cache_to_pagecache(rreq, subreq);
> > +			goto done;
> > +		}
> > +
> > +		if (source == NETFS_INVALID_READ)
> > +			break;
>
> Hi David,
>
> I feel a sense of deja vu here. So apologies if this was already
> discussed in the past.
>
> If the code ever reaches this line, then slice will be used
> uninitialised below.
>
> Flagged by W=1 allmodconfig builds on x86_64 with Clang 18.1.8.
This now breaks the build in next-20240801, since the warning is an error under -Werror:
fs/netfs/buffered_read.c:304:7: error: variable 'slice' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
  304 |                 if (source == NETFS_INVALID_READ)
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
fs/netfs/buffered_read.c:308:11: note: uninitialized use occurs here
  308 |                 size -= slice;
      |                         ^~~~~
fs/netfs/buffered_read.c:304:3: note: remove the 'if' if its condition is always true
  304 |                 if (source == NETFS_INVALID_READ)
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  305 |                         break;
fs/netfs/buffered_read.c:221:16: note: initialize the variable 'slice' to silence this warning
  221 |                 ssize_t slice;
      |                              ^
      |                               = 0
1 error generated.
If source is guaranteed to be one of these values, perhaps converting this
chain of ifs into a switch statement with a default case containing a
WARN_ON() (or something similar) would convey that to the compiler?
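
Roughly what I have in mind (a completely untested sketch; the download path
is elided and the 'abandon' label is made up, just to show leaving the loop
without touching 'slice'):

	switch (source) {
	case NETFS_DOWNLOAD_FROM_SERVER:
		/* Existing download path (zero_point clamp, ->prepare_read(),
		 * netfs_prepare_read_iterator(), ->issue_read()) goes here;
		 * every branch either sets 'slice' or leaves the loop.
		 */
		break;
	case NETFS_FILL_WITH_ZEROES:
		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
		netfs_stat(&netfs_n_rh_zero);
		slice = netfs_prepare_read_iterator(subreq);
		__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
		netfs_read_subreq_terminated(subreq, 0, false);
		break;
	case NETFS_READ_FROM_CACHE:
		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
		slice = netfs_prepare_read_iterator(subreq);
		netfs_read_cache_to_pagecache(rreq, subreq);
		break;
	case NETFS_INVALID_READ:
		goto abandon;	/* made-up label: leave the loop without using 'slice' */
	default:
		WARN_ON_ONCE(1);
		goto abandon;
	}

The DOWNLOAD_FROM_SERVER -> FILL_WITH_ZEROES transition would need a small
tweak (e.g. a fallthrough to the next case instead of the goto), but a
default case like this would both document the invariant and let clang see
that 'slice' is set on every path that reaches 'size -= slice'.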
Cheers,
Nathan