Message-ID: <20170508172527.GA18408@linux.intel.com>
Date: Mon, 8 May 2017 11:25:27 -0600
From: Ross Zwisler <ross.zwisler@...ux.intel.com>
To: Jan Kara <jack@...e.cz>
Cc: Ross Zwisler <ross.zwisler@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-nvdimm@...ts.01.org, stable@...r.kernel.org
Subject: Re: [PATCH 4/4] dax: Fix data corruption when fault races with write
On Fri, May 05, 2017 at 09:25:00AM +0200, Jan Kara wrote:
> Currently a DAX read fault can race with write(2) in the following way:
>
> CPU1 - write(2)                        CPU2 - read fault
>                                        dax_iomap_pte_fault()
>                                          ->iomap_begin() - sees hole
> dax_iomap_rw()
>   iomap_apply()
>     ->iomap_begin - allocates blocks
>     dax_iomap_actor()
>       invalidate_inode_pages2_range()
>         - there's nothing to invalidate
>                                        grab_mapping_entry()
>                                        - we add zero page in the radix tree
>                                          and map it to page tables
>
> The result is that the hole page is mapped into the page tables (and thus
> zeros are seen through mmap) while the file has data written at that offset.
>
> Fix the problem by locking the exception entry before mapping blocks for the
> fault. That way the invalidate_inode_pages2_range() call for the racing
> write will either block on the entry lock, waiting for the fault to finish
> (and unmap stale page tables afterwards), or the read fault will see the
> blocks already allocated by write(2).
>
> Fixes: 9f141d6ef6258a3a37a045842d9ba7e68f368956
> CC: stable@...r.kernel.org
> Signed-off-by: Jan Kara <jack@...e.cz>
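
For anyone following along, the fix described above amounts to reordering the
PTE fault path so the radix tree entry is locked before the filesystem is
asked for the block mapping. The condensed sketch below is only an
illustration of that ordering, not the actual patch; it reuses the fs/dax.c
names of this era (dax_iomap_pte_fault(), grab_mapping_entry(),
ops->iomap_begin(), put_locked_mapping_entry()) and omits most of the real
fault path.

static int dax_iomap_pte_fault(struct vm_fault *vmf,
			       const struct iomap_ops *ops)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	pgoff_t pgoff = vmf->pgoff;
	struct iomap iomap = { 0 };
	void *entry;
	int error;

	/*
	 * Condensed sketch, not the real code: lock the exceptional radix
	 * tree entry *before* asking the filesystem for the block mapping.
	 * A racing write(2) that allocates blocks will then find this entry
	 * in invalidate_inode_pages2_range() and wait for the fault to
	 * finish, after which stale page table entries are unmapped.
	 */
	entry = grab_mapping_entry(mapping, pgoff, 0);
	if (IS_ERR(entry))
		return VM_FAULT_SIGBUS;

	/* Only now look up (or allocate) blocks for the faulting offset. */
	error = ops->iomap_begin(mapping->host, (loff_t)pgoff << PAGE_SHIFT,
				 PAGE_SIZE, 0, &iomap);
	if (error)
		goto unlock_entry;

	/* ... insert the block into the page tables as before ... */

unlock_entry:
	put_locked_mapping_entry(mapping, pgoff, entry);
	return error ? VM_FAULT_SIGBUS : 0;
}

The important bit is just the ordering: grab_mapping_entry() now happens
before ->iomap_begin(), so the entry lock is what serializes the fault
against invalidate_inode_pages2_range() in the write path.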
Yep, this looks correct to me. Thanks!
Reviewed-by: Ross Zwisler <ross.zwisler@...ux.intel.com>