Message-ID: <CAH2+hP6Rb6zXWcZ01epXOhD49os8F43=snE3pzCHX8+=Dzt1xg@mail.gmail.com>
Date: Wed, 8 Jan 2025 20:45:07 -0800
From: Marco Nelissen <marco.nelissen@...il.com>
To: "Darrick J. Wong" <djwong@...nel.org>
Cc: brauner@...nel.org, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] iomap: avoid truncating 64-bit offset to 32 bits
On Wed, Jan 8, 2025 at 8:38 PM Darrick J. Wong <djwong@...nel.org> wrote:
>
> On Wed, Jan 08, 2025 at 08:11:50PM -0800, Marco Nelissen wrote:
> > On 32-bit kernels, iomap_write_delalloc_scan() was inadvertently using a
> > 32-bit position due to folio_next_index() returning an unsigned long.
> > This could lead to an infinite loop when writing to an xfs filesystem.
> >
> > Signed-off-by: Marco Nelissen <marco.nelissen@...il.com>
> > ---
> > fs/iomap/buffered-io.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 54dc27d92781..d303e6c8900c 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -1138,7 +1138,7 @@ static void iomap_write_delalloc_scan(struct inode *inode,
> > start_byte, end_byte, iomap, punch);
> >
> > /* move offset to start of next folio in range */
> > - start_byte = folio_next_index(folio) << PAGE_SHIFT;
> > + start_byte = folio_pos(folio) + folio_size(folio);
>
> eeek. Yeah, I guess that would happen towards the upper end of the 16T
> range on 32-bit.
By "16T" do you mean 16 TeraByte? I'm able to reproduce the infinite loop
with files around 4 GB.
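To make the wraparound concrete, here is a rough userspace sketch (not
kernel code): it models a 32-bit kernel's unsigned long with uint32_t and
loff_t with int64_t, and assumes PAGE_SHIFT == 12 (4 KiB pages). The folio
index 0xfffff is just an illustrative value for the last folio below 4 GiB.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed: 4 KiB pages */

int main(void)
{
	/* last order-0 folio below the 4 GiB boundary */
	uint32_t folio_index = 0xfffff;
	int64_t folio_size = 1 << PAGE_SHIFT;
	int64_t folio_pos = (int64_t)folio_index << PAGE_SHIFT;

	/* old code: folio_next_index() returns unsigned long, so on a
	 * 32-bit kernel the shift is done in 32 bits and wraps to 0 */
	int64_t old_start = (uint32_t)((folio_index + 1) << PAGE_SHIFT);

	/* fixed code: folio_pos() returns loff_t, so the sum stays 64-bit */
	int64_t new_start = folio_pos + folio_size;

	printf("old: %lld  new: %lld\n",
	       (long long)old_start, (long long)new_start);
	return 0;
}

This prints "old: 0  new: 4294967296", which is why start_byte never
advances past the 4 GiB mark and the scan loops forever.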
> I wonder if perhaps pagemap.h should have:
>
> static inline loff_t folio_next_pos(struct folio *folio)
> {
> return folio_pos(folio) + folio_size(folio);
> }
>
> But I think this is the only place in the kernel that uses this
> construction? So maybe not worth the fuss.
>
> Reviewed-by: "Darrick J. Wong" <djwong@...nel.org>
>
> --D
>
> > folio_unlock(folio);
> > folio_put(folio);
> > }
> > --
> > 2.39.5
> >