Message-ID: <CANpmjNM=+y-OwKjtsjsEkwPjpHXpt7ywaE48JyiND6dKt=Vf1Q@mail.gmail.com>
Date:   Wed, 19 Feb 2020 10:21:46 +0100
From:   Marco Elver <elver@...gle.com>
To:     Al Viro <viro@...iv.linux.org.uk>
Cc:     Qian Cai <cai@....pw>, Christoph Hellwig <hch@...radead.org>,
        "Darrick J. Wong" <darrick.wong@...cle.com>,
        linux-xfs@...r.kernel.org,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] fs: fix a data race in i_size_write/i_size_read

On Wed, 19 Feb 2020 at 06:23, Al Viro <viro@...iv.linux.org.uk> wrote:
>
> On Wed, Feb 19, 2020 at 12:08:40AM -0500, Qian Cai wrote:
> >
> >
> > > On Feb 18, 2020, at 11:52 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> > >
> > > If aligned 64bit stores on 64bit host (note the BITS_PER_LONG ifdefs) end up
> > > being split, the kernel is FUBAR anyway.  Details, please - how could that
> > > end up happening?
> >
> > My understanding is that the compiler might decide to split the load into, say, two 4-byte loads. Then we might have,
> >
> > Load1
> > Store
> > Load2
> >
> > where the loaded value could be garbage. Also, Marco (the KCSAN maintainer), who knows more about compilers than I do, mentioned that there is no guarantee that the store will not be split either. Thus the WRITE_ONCE().
> >

In theory, yes. But by now we're aware of the kernel's convention of
assuming that plain aligned writes up to word size are atomic (which
KCSAN respects with its default Kconfig).
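
Purely for illustration (made-up variable and function names; this is
not a transformation any known compiler performs on an aligned
word-sized store), the tearing being discussed would be:

/* needs <linux/types.h> and <linux/compiler.h> */
u64 shared;				/* 8-byte aligned, 64-bit host */

void writer(u64 val)
{
	shared = val;			/* plain store */

	/* A (hypothetical) tearing compiler could instead emit the
	 * equivalent of two 32-bit stores, letting a concurrent
	 * reader observe a half-updated value: */
	((u32 *)&shared)[0] = (u32)val;
	((u32 *)&shared)[1] = (u32)(val >> 32);

	/* WRITE_ONCE() rules that transformation out: */
	WRITE_ONCE(shared, val);
}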

>         I would suggest
> * if some compiler does that, ask the persons responsible for that
> "optimization" which flags should be used to disable it.
> * if they fail to provide such, educate them regarding the
> usefulness of their idea
> * if that does not help, don't use the bloody piece of garbage.
>
>         Again, is that pure theory (because I can't come up with
> any reason why splitting a 32bit load would be any less legitimate
> than doing the same to a 64bit one on a 64bit architecture),
> or is there anything that really would pull that off?

Right. In reality, for mainstream architectures, it appears quite unlikely.

There may be other valid reasons to keep it, such as documenting the
fact that the write can happen concurrently with loads.
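
For reference, this is roughly what the 64-bit branches in
include/linux/fs.h look like (paraphrased from memory; the 32-bit
seqcount/preempt variants are omitted), and what the patch, as I
understand it, would change:

static inline loff_t i_size_read(const struct inode *inode)
{
	return inode->i_size;		/* plain load today */
}

static inline void i_size_write(struct inode *inode, loff_t i_size)
{
	inode->i_size = i_size;		/* plain store today */
	/* The patch would turn this into
	 * WRITE_ONCE(inode->i_size, i_size), which also documents
	 * that lockless readers may run concurrently. */
}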

Let's assume the WRITE_ONCE can be dropped.

The load is a different story. While load tearing may not be an issue,
other optimizations are more likely to break the code: load fusing, for
example, can break code that expects repeated loads in a loop. I found
these uses of i_size_read() in loops:

git grep -E '(for|while) \(.*i_size_read'
fs/ocfs2/dir.c: while (ctx->pos < i_size_read(inode)) {
fs/ocfs2/dir.c:                 for (i = 0; i < i_size_read(inode) && i < offset; ) {
fs/ocfs2/dir.c: while (ctx->pos < i_size_read(inode)) {
fs/ocfs2/dir.c:         while (ctx->pos < i_size_read(inode)
fs/squashfs/dir.c:      while (length < i_size_read(inode)) {
fs/squashfs/namei.c:    while (length < i_size_read(dir)) {

Can i_size writes happen concurrently with these loops, and if so,
will they break if the compiler decides to do i_size_read()'s load
just once and keep the result in a register?
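
Sketch of the concern (advance() is a made-up helper):

	loff_t pos = 0;

	/* With a plain load, the compiler may fuse the repeated
	 * loads, i.e. read inode->i_size once and keep it in a
	 * register, so the loop never observes a concurrent
	 * i_size_write(): */
	while (pos < inode->i_size)
		pos = advance(inode, pos);

	/* A READ_ONCE()-based i_size_read() forces a fresh load on
	 * every iteration: */
	while (pos < READ_ONCE(inode->i_size))
		pos = advance(inode, pos);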

Thanks,
-- Marco
