Date:	Wed, 10 Feb 2016 23:39:43 +0100
From:	Cedric Blancher <cedric.blancher@...il.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	Jan Kara <jack@...e.cz>, Dan Williams <dan.j.williams@...el.com>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	Linux MM <linux-mm@...ck.org>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	Mel Gorman <mgorman@...e.de>,
	Matthew Wilcox <willy@...ux.intel.com>
Subject: Re: Another proposal for DAX fault locking

AFAIK Solaris 11 uses a sparse tree instead of an array. That solves the
scalability problem AND deals with variable page sizes.

Ced

On 10 February 2016 at 23:09, Dave Chinner <david@...morbit.com> wrote:
> On Wed, Feb 10, 2016 at 11:32:49AM +0100, Jan Kara wrote:
>> On Tue 09-02-16 10:18:53, Dan Williams wrote:
>> > On Tue, Feb 9, 2016 at 9:24 AM, Jan Kara <jack@...e.cz> wrote:
>> > > Hello,
>> > >
>> > > I was thinking about current issues with DAX fault locking [1] (data
>> > > corruption due to racing faults allocating blocks) and also the races
>> > > between faults and cache flushing which currently don't allow us to clear
>> > > dirty tags in the radix tree [2]. Both of these exist because we don't
>> > > have an equivalent of page lock available for DAX. While we have a
>> > > reasonable solution available for problem [1], so far I'm not aware of a
>> > > decent solution for [2]. After briefly discussing the issue with Mel he had
>> > > a bright idea that we could use hashed locks to deal with [2] (and I think
>> > > we can solve [1] with them as well). So my proposal looks as follows:
>> > >
>> > > DAX will have an array of mutexes (the array can be made per device but
>> > > initially a global one should be OK). We will use mutexes in the array as a
>> > > replacement for page lock - we will use hashfn(mapping, index) to get
>> > > the particular mutex protecting our offset in the mapping. On fault / page
>> > > mkwrite, we'll grab the mutex similarly to page lock and release it once we
>> > > are done updating page tables. This deals with races in [1]. When flushing
>> > > caches, we grab the mutex before clearing the writeable bit in the page
>> > > tables and the dirty bit in the radix tree, and drop it after we have flushed
>> > > caches for the pfn. This deals with races in [2].
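
A minimal sketch of the hashed-lock scheme described above, not code from the
thread: the array and all dax_entry_* names are hypothetical, the global array
size is arbitrary, and only hash_long(), mutex_lock() and mutex_unlock() are
existing kernel interfaces.

#include <linux/fs.h>
#include <linux/hash.h>
#include <linux/mutex.h>

#define DAX_LOCK_BITS	10	/* 1024 mutexes; global for now, could be per device */

/* Each mutex would still need mutex_init() at boot/module init time. */
static struct mutex dax_entry_locks[1 << DAX_LOCK_BITS];

/* hashfn(mapping, index): pick the mutex protecting this offset in the mapping. */
static struct mutex *dax_entry_lock_of(struct address_space *mapping,
				       pgoff_t index)
{
	unsigned long key = (unsigned long)mapping ^ (index << 1);

	return &dax_entry_locks[hash_long(key, DAX_LOCK_BITS)];
}

/* Fault / page_mkwrite path: hold the mutex across the page table update,
 * in the role the page lock plays for a normal page. */
static void dax_entry_lock(struct address_space *mapping, pgoff_t index)
{
	mutex_lock(dax_entry_lock_of(mapping, index));
}

static void dax_entry_unlock(struct address_space *mapping, pgoff_t index)
{
	mutex_unlock(dax_entry_lock_of(mapping, index));
}

The cache-flush path would take the same mutex before clearing the writeable
bit and the radix tree dirty tag, and drop it once the pfn has been flushed.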
>> > >
>> > > Thoughts?
>> > >
>> >
>> > I like the fact that this makes the locking explicit and
>> > straightforward rather than something more tricky.  Can we make the
>> > hashfn pfn based?  I'm thinking we could later reuse this as part of
>> > the solution for eliminating the need to allocate struct page, and we
>> > don't have the 'mapping' available in all paths...
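
A hypothetical pfn-keyed variant of the same idea, reusing the dax_entry_locks
array from the sketch above; whether such a lock can also cover hole filling is
exactly the question Jan raises next.

/* Same mutex array, but keyed by pfn instead of (mapping, index). */
static struct mutex *dax_pfn_lock_of(unsigned long pfn)
{
	return &dax_entry_locks[hash_long(pfn, DAX_LOCK_BITS)];
}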
>>
>> So Mel originally suggested using the pfn for hashing as well. My concern with
>> using pfn is that e.g. if you want to fill a hole, you don't have a pfn to
>> lock. What you really need to protect is a logical offset in the file to
>> serialize allocation of underlying blocks, its mapping into page tables,
>> and flushing the blocks out of caches. So using inode/mapping and offset
>> for the hashing is easier (it isn't obvious to me we can fix hole filling
>> races with pfn-based locking).
>
> So how does that file+offset hash work when trying to lock different
> ranges?  file+offset hashing to determine the lock to use only works
> if we are dealing with fixed-size ranges that the locks cover.
> e.g. the offset has 4k granularity for single-page faults, but we also
> need to handle 2MB granularity for huge page faults, and IIRC 1GB
> granularity for giant page faults...
>
> What's the plan here?
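
One hypothetical answer to the granularity question, not taken from the thread:
round the offset down to the largest supported fault size before hashing, so
that 4k, 2MB and 1GB faults over the same region always pick the same mutex.
The trade-off is coarser locking (every 4k fault within one 1GB-aligned region
contends on a single mutex), which is part of the concern being raised here.
DAX_MAX_FAULT_ORDER is a hypothetical name; PUD_SHIFT and PAGE_SHIFT are real
kernel constants.

/* Hash on the start of the largest possible fault granularity (PUD-sized,
 * i.e. 1GB), so overlapping faults of any size collide on one mutex. */
#define DAX_MAX_FAULT_ORDER	(PUD_SHIFT - PAGE_SHIFT)

static struct mutex *dax_entry_lock_of(struct address_space *mapping,
				       pgoff_t index)
{
	pgoff_t aligned = index & ~((1UL << DAX_MAX_FAULT_ORDER) - 1);
	unsigned long key = (unsigned long)mapping ^ (aligned << 1);

	return &dax_entry_locks[hash_long(key, DAX_LOCK_BITS)];
}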
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@...morbit.com
> --



-- 
Cedric Blancher <cedric.blancher@...il.com>
Institute Pasteur
