Message-ID: <20231212133713.bihojdsnccmadcpg@quack3>
Date: Tue, 12 Dec 2023 14:37:13 +0100
From: Jan Kara <jack@...e.cz>
To: Baokun Li <libaokun1@...wei.com>
Cc: Jan Kara <jack@...e.cz>, linux-mm@...ck.org, linux-ext4@...r.kernel.org,
	tytso@....edu, adilger.kernel@...ger.ca, willy@...radead.org,
	akpm@...ux-foundation.org, david@...morbit.com, hch@...radead.org,
	ritesh.list@...il.com, linux-kernel@...r.kernel.org,
	yi.zhang@...wei.com, yangerkun@...wei.com, yukuai3@...wei.com,
	stable@...nel.org
Subject: Re: [RFC PATCH] mm/filemap: avoid buffered read/write race to read
 inconsistent data

On Tue 12-12-23 21:16:16, Baokun Li wrote:
> On 2023/12/12 20:41, Jan Kara wrote:
> > On Tue 12-12-23 17:36:34, Baokun Li wrote:
> > > The following concurrency may cause the data read to be inconsistent with
> > > the data on disk:
> > > 
> > >               cpu1                           cpu2
> > > ------------------------------|------------------------------
> > >                                 // Buffered write 2048 from 0
> > >                                 ext4_buffered_write_iter
> > >                                  generic_perform_write
> > >                                   copy_page_from_iter_atomic
> > >                                   ext4_da_write_end
> > >                                    ext4_da_do_write_end
> > >                                     block_write_end
> > >                                      __block_commit_write
> > >                                       folio_mark_uptodate
> > > // Buffered read 4096 from 0          smp_wmb()
> > > ext4_file_read_iter                   set_bit(PG_uptodate, folio_flags)
> > >   generic_file_read_iter            i_size_write // 2048
> > >    filemap_read                     unlock_page(page)
> > >     filemap_get_pages
> > >      filemap_get_read_batch
> > >      folio_test_uptodate(folio)
> > >       ret = test_bit(PG_uptodate, folio_flags)
> > >       if (ret)
> > >        smp_rmb();
> > >        // Ensure that the data in page 0-2048 is up-to-date.
> > > 
> > >                                 // New buffered write 2048 from 2048
> > >                                 ext4_buffered_write_iter
> > >                                  generic_perform_write
> > >                                   copy_page_from_iter_atomic
> > >                                   ext4_da_write_end
> > >                                    ext4_da_do_write_end
> > >                                     block_write_end
> > >                                      __block_commit_write
> > >                                       folio_mark_uptodate
> > >                                        smp_wmb()
> > >                                        set_bit(PG_uptodate, folio_flags)
> > >                                     i_size_write // 4096
> > >                                     unlock_page(page)
> > > 
> > >     isize = i_size_read(inode) // 4096
> > >     // Reads the latest isize, 4096; but without an smp_rmb() here,
> > >     // Load-Load reordering may leave the data in the 2048-4096 range
> > >     // of the page not up-to-date.
> > >     copy_page_to_iter
> > >     // copyout 4096
> > > 
> > > In the concurrency above, we read the updated i_size, but there is no
> > > read barrier to ensure that the page data matches that i_size, so we
> > > may copy out stale page contents. Fix this by adding the missing read
> > > memory barrier.
> > > 
> > > This is a Load-Load reordering issue, which only occurs on some weakly
> > > memory-ordered architectures (e.g. ARM64, ALPHA), but not on strongly
> > > memory-ordered architectures (e.g. X86). And theoretically the problem
> > AFAIK x86 can also reorder loads vs loads so the problem can in theory
> > happen on x86 as well.
> 
> According to what I read in perfbook at the link below,
> "Loads Reordered After Loads" does not happen on x86.
> 
> PDF sheet 562 corresponds to page 550:
> 
>    Table 15.5: Summary of Memory Ordering
> 
> https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook-1c.2023.06.11a.pdf

Indeed. I stand corrected! Thanks for the link.

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
