Message-ID: <Y2SJw7w1IsIik3nb@casper.infradead.org>
Date: Fri, 4 Nov 2022 03:40:51 +0000
From: Matthew Wilcox <willy@...radead.org>
To: David Howells <dhowells@...hat.com>
Cc: George Law <glaw@...hat.com>, Jeff Layton <jlayton@...nel.org>,
linux-cachefs@...hat.com, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] netfs: Fix missing xas_retry() calls in xarray iteration
On Thu, Nov 03, 2022 at 09:33:28PM +0000, David Howells wrote:
> +++ b/fs/netfs/buffered_read.c
> @@ -46,10 +46,15 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
>
> rcu_read_lock();
> xas_for_each(&xas, folio, last_page) {
> - unsigned int pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
> - unsigned int pgend = pgpos + folio_size(folio);
> + unsigned int pgpos, pgend;
"unsigned int" assumes that the number of bytes isn't going to exceed 32
bits. I tend to err on the side of safety here and use size_t.
> bool pg_failed = false;
>
> + if (xas_retry(&xas, folio))
> + continue;
> +
> + pgpos = (folio_index(folio) - start_page) * PAGE_SIZE;
> + pgend = pgpos + folio_size(folio);
What happens if start_page is somewhere inside folio? Seems to me
that pgend ends up overhanging into the next folio?