Date:   Wed, 26 Feb 2020 12:17:40 +0100
From:   Jan Kara <jack@...e.cz>
To:     Dave Chinner <david@...morbit.com>
Cc:     Christoph Hellwig <hch@....de>, ira.weiny@...el.com,
        linux-kernel@...r.kernel.org,
        Alexander Viro <viro@...iv.linux.org.uk>,
        "Darrick J. Wong" <darrick.wong@...cle.com>,
        Dan Williams <dan.j.williams@...el.com>,
        "Theodore Y. Ts'o" <tytso@....edu>, Jan Kara <jack@...e.cz>,
        linux-ext4@...r.kernel.org, linux-xfs@...r.kernel.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH V4 07/13] fs: Add locking for a dynamic address space
 operations state

On Tue 25-02-20 11:09:37, Dave Chinner wrote:
> /me wonders if the best thing to do is to add a ->fault callout to
> tell the filesystem to lock/unlock the inode right up at the top of
> the page fault path, outside even the mmap_sem.  That means all the
> methods that the page fault calls are protected against S_DAX
> changes, and it gives us a low cost method of serialising page
> faults against DIO (e.g. via inode_dio_wait())....

Well, that's going to be pretty hard. The main problem is: you cannot
look up the VMA until you hold mmap_sem, and the inode is only reachable
through the VMA. And this is a fundamental problem because until you hold
mmap_sem, the address space can change, and thus the inode that the
faulting virtual address maps to can change as well. So you would have to
do some dance like:

lock mmap_sem
lookup vma
get inode reference
drop mmap_sem
tell fs about page fault
lock mmap_sem
is the vma still the same?

And I'm pretty confident the overhead will be visible in page-fault
intensive workloads...
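
For concreteness, a minimal C sketch of that dance follows. This is an
illustration, not a proposed patch: fs_fault_prepare() is a hypothetical
stand-in for the proposed ->fault callout, and the revalidation step is
deliberately simplified (real code could not safely compare a possibly
stale vma pointer like this after dropping mmap_sem):

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical callout -- stands in for the proposed ->fault hook. */
static void fs_fault_prepare(struct inode *inode)
{
	/* e.g. take a filesystem lock excluding S_DAX changes / DIO */
}

static vm_fault_t fault_with_fs_callout(struct mm_struct *mm,
					unsigned long address)
{
	struct vm_area_struct *vma;
	struct inode *inode;

	down_read(&mm->mmap_sem);		/* lock mmap_sem */
	vma = find_vma(mm, address);		/* lookup vma */
	if (!vma || vma->vm_start > address || !vma->vm_file) {
		up_read(&mm->mmap_sem);
		return VM_FAULT_SIGBUS;
	}
	inode = file_inode(vma->vm_file);
	ihold(inode);				/* get inode reference */
	up_read(&mm->mmap_sem);			/* drop mmap_sem */

	fs_fault_prepare(inode);		/* tell fs about page fault */

	down_read(&mm->mmap_sem);		/* lock mmap_sem again */
	/* is the vma still the same? (simplified revalidation) */
	if (vma != find_vma(mm, address) || !vma->vm_file ||
	    file_inode(vma->vm_file) != inode) {
		up_read(&mm->mmap_sem);
		iput(inode);
		return VM_FAULT_RETRY;	/* caller restarts the fault */
	}

	/* ... the normal fault path would run here under mmap_sem ... */

	up_read(&mm->mmap_sem);
	iput(inode);
	return 0;
}

Even in this simplified form, every fault pays an extra lock/unlock cycle
of mmap_sem plus an inode reference, which is where the overhead above
would come from.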

								Honza

-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
