Message-Id: <AE838C22-1A11-4F93-AB88-80CF009BD301@dilger.ca>
Date:   Thu, 13 Jun 2019 15:13:40 -0600
From:   Andreas Dilger <adilger@...ger.ca>
To:     Kent Overstreet <kent.overstreet@...il.com>
Cc:     Dave Chinner <david@...morbit.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Dave Chinner <dchinner@...hat.com>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Christoph Hellwig <hch@....de>,
        Matthew Wilcox <willy@...radead.org>,
        Amir Goldstein <amir73il@...il.com>, Jan Kara <jack@...e.cz>,
        Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
        linux-xfs <linux-xfs@...r.kernel.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Josef Bacik <josef@...icpanda.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: pagecache locking (was: bcachefs status update) merged)

On Jun 13, 2019, at 12:36 PM, Kent Overstreet <kent.overstreet@...il.com> wrote:
> 
> On Thu, Jun 13, 2019 at 09:02:24AM +1000, Dave Chinner wrote:
>> On Wed, Jun 12, 2019 at 12:21:44PM -0400, Kent Overstreet wrote:
>>> Ok, I'm totally on board with returning EDEADLOCK.
>>> 
>>> Question: Would we be ok with returning EDEADLOCK for any IO where the buffer is
>>> in the same address space as the file being read/written to, even if the buffer
>>> and the IO don't technically overlap?
>> 
>> I'd say that depends on the lock granularity. For a range lock,
>> we'd be able to do the IO for non-overlapping ranges. For a normal
>> mutex or rwsem, then we risk deadlock if the page fault triggers on
>> the same address space host as we already have locked for IO. That's
>> the case we currently handle with the second IO lock in XFS, ext4,
>> btrfs, etc (XFS_MMAPLOCK_* in XFS).
>> 
>> One of the reasons I'm looking at range locks for XFS is to get rid
>> of the need for this second mmap lock, as there is no reason for it
>> existing if we can lock ranges and EDEADLOCK inside page faults and
>> return errors.
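
(For concreteness: the pathological case being discussed is an IO whose
userspace buffer is a mapping of the same file being read or written, so
the page fault taken while copying the data re-enters the filesystem on
the address_space the IO path has locked.  A minimal illustrative
userspace sketch - filename and sizes made up, error handling omitted:)

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* "testfile" must already exist and be >= 128K */
	int fd = open("testfile", O_RDWR);

	/* Map the first 64K of the file... */
	char *buf = mmap(NULL, 65536, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);

	/* ...then read a *different* 64K of the same file into that
	 * mapping.  The IO range and the buffer don't overlap, but
	 * faulting in buf hits the same address_space the read path
	 * has locked - exactly the case Kent asks about above. */
	pread(fd, buf, 65536, 65536);

	return 0;
}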
> 
> My concern is that range locks are going to turn out to be both more complicated
> and heavier weight, performance wise, than the approach I've taken of just a
> single lock per address space.
> 
> Reason being range locks only help when you've got multiple operations going on
> simultaneously that don't conflict - i.e. it's really only going to be useful
> for applications that are doing buffered IO and direct IO simultaneously to the
> same file. Personally, I think that would be a pretty gross thing to do and I'm
> not particularly interested in optimizing for that case myself... but, if you
> know of applications that do depend on that I might change my opinion. If not, I
> want to try and get the simpler, one-lock-per-address space approach to work.

There are definitely workloads in HPC that require multiple threads doing
non-overlapping writes to a single file.  This is becoming an increasingly
common problem as the number of cores on a single client increases, since
there is typically one thread per core trying to write to a shared file.
Using multiple files (one per core) is possible, but that creates file
management issues for users when there are a million cores running the same
job (obviously not on the same client node) dumping data to one file every
hour.
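
The access pattern is essentially the following (a simplified sketch; the
thread count and chunk size are made up, and error handling is omitted):

#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS 16
#define CHUNK    (64UL << 20)	/* 64MB per thread per dump */

static int fd;

static void *writer(void *arg)
{
	long rank = (long)arg;
	char *buf = malloc(CHUNK);

	memset(buf, rank, CHUNK);
	/* Offset is derived from the thread's rank, so no two
	 * writers ever touch the same bytes of the shared file. */
	pwrite(fd, buf, CHUNK, rank * CHUNK);
	free(buf);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	fd = open("shared_dump", O_CREAT | O_WRONLY, 0644);
	for (long i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, writer, (void *)i);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	close(fd);
	return 0;
}

Since the ranges never overlap, this is exactly the case where range locks
would let all the IOs proceed in parallel.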

We were just looking at this exact problem last week.  Most of the threads
are spinning in grab_cache_page_nowait->add_to_page_cache_lru() and
set_page_dirty(), writing at 1.9GB/s when they could be writing at 5.8GB/s
(which they do when writing O_DIRECT instead of buffered).  A flame graph
for the 16-thread case is attached, but high-end systems today easily have
2-4x that many cores.
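
For reference, the faster O_DIRECT runs differ only in how the file is
opened and how the write buffers are allocated, roughly as below (again
just a sketch; the helper name is mine, and the 4096-byte alignment is a
common safe choice - the real requirement is fs- and device-dependent):

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Same pwrite() pattern as the buffered writer, but bypassing the
 * page cache.  O_DIRECT needs aligned buffers, offsets and lengths. */
static int open_direct(const char *path, char **bufp, size_t len)
{
	int fd = open(path, O_CREAT | O_WRONLY | O_DIRECT, 0644);

	if (fd < 0)
		return -1;
	if (posix_memalign((void **)bufp, 4096, len) != 0) {
		close(fd);
		return -1;
	}
	return fd;
}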

Any overhead from range locks can't be worse than spending 80% of the time
spinning.

Cheers, Andreas

Download attachment "shared_file_write.svg" of type "image/svg+xml" (161675 bytes)
