Date:	Wed, 10 Feb 2016 13:35:18 +0100
From:	Jan Kara <jack@...e.cz>
To:	Dmitry Monakhov <dmonlist@...il.com>
Cc:	Jan Kara <jack@...e.cz>,
	Ross Zwisler <ross.zwisler@...ux.intel.com>,
	linux-nvdimm@...ts.01.org, Dave Chinner <david@...morbit.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, mgorman@...e.de,
	linux-fsdevel@...r.kernel.org
Subject: Re: Another proposal for DAX fault locking

On Wed 10-02-16 15:29:34, Dmitry Monakhov wrote:
> Jan Kara <jack@...e.cz> writes:
> 
> > Hello,
> >
> > I was thinking about the current issues with DAX fault locking [1] (data
> > corruption due to racing faults allocating blocks) and also the races
> > between faults and cache flushing that currently prevent us from clearing
> > dirty tags in the radix tree [2]. Both of these exist because we don't
> > have an equivalent of the page lock available for DAX. While we have a
> > reasonable solution available for problem [1], so far I'm not aware of a
> > decent solution for [2]. After briefly discussing the issue with Mel, he
> > had a bright idea that we could use hashed locks to deal with [2] (and I
> > think we can solve [1] with them as well). So my proposal looks as follows:
> >
> > DAX will have an array of mutexes (the array can be made per-device, but
> > initially a global one should be OK). We will use the mutexes in the array
> > as a replacement for the page lock - hashfn(mapping, index) picks the
> > particular mutex protecting our offset in the mapping. On fault / page
> > mkwrite, we'll grab the mutex similarly to the page lock and release it
> > once we are done updating the page tables. This deals with the races in
> > [1]. When flushing caches, we grab the mutex before clearing the writeable
> > bit in the page tables and the dirty tag in the radix tree, and drop it
> > after we have flushed the caches for the pfn. This deals with the races
> > in [2].
> >
> > Thoughts?
> Agreed, just a small note:
> Hashed locks have a side effect on batch locking due to collisions.
> Sometimes we want to lock several pages/entries (migration/defragmentation),
> so we can end up with a deadlock due to a hash collision (two entries
> hashing to the same mutex make the second lock attempt self-deadlock).

Yeah, but at least for the purposes we want the locks for, locking just one
'page' is enough. If we ever needed to lock more 'pages', we would have to
choose a different locking scheme.
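
Roughly, I'm thinking of something like the sketch below. The array size,
the hash and the helper names (DAX_ENTRY_LOCKS, dax_entry_lock(), ...) are
just for illustration; only the mutex primitives and hash_long() are
existing kernel interfaces.

/*
 * Rough sketch only - names and sizes are illustrative. Each mutex
 * needs mutex_init() at init time.
 */
#include <linux/fs.h>
#include <linux/hash.h>
#include <linux/mutex.h>

#define DAX_ENTRY_LOCK_BITS	10
#define DAX_ENTRY_LOCKS		(1UL << DAX_ENTRY_LOCK_BITS)

static struct mutex dax_entry_locks[DAX_ENTRY_LOCKS];

static struct mutex *dax_entry_lock(struct address_space *mapping,
				    pgoff_t index)
{
	unsigned long key = (unsigned long)mapping + index;

	return &dax_entry_locks[hash_long(key, DAX_ENTRY_LOCK_BITS)];
}

/*
 * Fault / page_mkwrite path: the mutex is held across block
 * allocation and the page table update, the way the page lock would
 * be. Note we must never take two of these locks at once - if both
 * entries hash to the same mutex, the second mutex_lock()
 * self-deadlocks (your point above).
 */
static int dax_fault_locked(struct address_space *mapping, pgoff_t index)
{
	struct mutex *lock = dax_entry_lock(mapping, index);

	mutex_lock(lock);
	/* ... allocate blocks if needed, install the PTE ... */
	mutex_unlock(lock);
	return 0;
}

/*
 * Cache flush path: the mutex is held from write-protecting the PTEs
 * and clearing the radix tree dirty tag until the caches for the pfn
 * are flushed, so a racing fault cannot re-dirty the pfn unnoticed
 * in between.
 */
static void dax_writeback_locked(struct address_space *mapping, pgoff_t index)
{
	struct mutex *lock = dax_entry_lock(mapping, index);

	mutex_lock(lock);
	/* ... clear pte_write, clear the dirty tag, flush caches ... */
	mutex_unlock(lock);
}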

									Honza

-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
