Message-ID: <YMUhpI/ZIuxvKCt8@zeniv-ca.linux.org.uk>
Date:   Sat, 12 Jun 2021 21:05:40 +0000
From:   Al Viro <viro@...iv.linux.org.uk>
To:     Andreas Gruenbacher <agruenba@...hat.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        cluster-devel <cluster-devel@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Jan Kara <jack@...e.cz>, Matthew Wilcox <willy@...radead.org>
Subject: Re: [RFC 4/9] gfs2: Fix mmap + page fault deadlocks (part 1)

On Fri, Jun 11, 2021 at 04:25:10PM +0000, Al Viro wrote:
> On Wed, Jun 02, 2021 at 01:16:32PM +0200, Andreas Gruenbacher wrote:
> 
> > Well, iomap_file_buffered_write() does that by using
> > iov_iter_fault_in_readable() and iov_iter_copy_from_user_atomic() as
> > in iomap_write_actor(), but the read and direct I/O side doesn't seem
> > to have equivalents. I suspect we can't just wrap
> > generic_file_read_iter() and iomap_dio_rw() calls in
> > pagefault_disable().
> 
> And it will have zero effect on O_DIRECT case, so you get the same
> deadlocks right back.  Because there you hit
> 	iomap_dio_bio_actor()
> 		bio_iov_iter_get_pages()
> 			....
> 				get_user_pages_fast()
> 					....
> 						faultin_page()
> 							handle_mm_fault()
> and at no point had the CPU hit an exception, so pagefault_disable() will
> have no effect whatsoever.  You can bloody well hit gfs2 readpage/mkwrite
> if the destination is in mmapped area of some GFS2 file.  Do that
> while holding GFS2 locks and you are fucked.
> 
> No amount of prefaulting will protect you, BTW - it might make the
> deadlock harder to reproduce, but that's it.
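
For reference, the buffered-write pattern Andreas mentions above goes
roughly like this (a simplified sketch of the generic_perform_write() /
iomap_write_actor() loop; page lookup, locking and error handling are
omitted and the local variable names are just placeholders, so take it
as an illustration of the pattern rather than the actual code):

	while (iov_iter_count(i)) {
		/* prefault the source pages, with page faults enabled */
		if (iov_iter_fault_in_readable(i, bytes)) {
			status = -EFAULT;
			break;
		}

		/* ... iomap_write_begin(): find and lock the page ... */

		/*
		 * The copy runs under kmap_atomic(), i.e. with page faults
		 * disabled; hitting an unmapped source page shows up as a
		 * short copy instead of recursing into the fault handler.
		 */
		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);

		/* ... iomap_write_end() ... */

		iov_iter_advance(i, copied);
		if (!copied)
			continue;	/* fault the page in again and retry */

		/* advance pos/offset and pick the next chunk */
	}

The whole point of that dance is that the fault is taken by the CPU
before the atomic copy; as noted above, it buys nothing for the O_DIRECT
paths, where the pages are pinned via get_user_pages_fast() and no
exception is ever taken.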

AFAICS, what we have is
	* handle_mm_fault() can hit gfs2_fault(), which grabs per-inode lock
	  shared
	* handle_mm_fault() for write can hit gfs2_page_mkwrite(), which grabs
	  per-inode lock exclusive
	* pagefault_disable() prevents that for real page faults, but not for
	  get_user_pages_fast()
	* normal write:
        with inode_lock(inode)
		in a loop
			with per-inode lock exclusive
				__gfs2_iomap_get
				possibly gfs2_iomap_begin_write
				in a loop
					fault-in [read faults]
					iomap_write_begin
					copy_page_from_iter_atomic() [pf disabled]
					iomap_write_end
				gfs2_iomap_end
	* O_DIRECT write:
	with inode_lock(inode) and per-inode lock deferred (?)
		in a loop
			__gfs2_iomap_get
			possibly gfs2_iomap_begin_write
			bio_iov_iter_get_pages(), map and submit [gup]
			gfs2_iomap_end
	* normal read:
		in a loop
			filemap_get_pages (grab pages and readpage them if needed)
			copy_page_to_iter() for each [write faults]
	* O_DIRECT read:
        with per-inode lock deferred
		in a loop
			__gfs2_iomap_get
			either iov_iter_zero() (on hole) [write faults]
			or bio_iov_iter_get_pages(), map and submit [gup]
			gfs2_iomap_end

... with some amount of waiting on buffered IO in the case of O_DIRECT writes
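
To make the O_DIRECT part of that concrete, the scenario from the quoted
mail (destination buffer living in an mmapped area of the same GFS2 file)
can be expressed as a trivial userspace program; the path and length below
are made-up placeholders, and it only illustrates how
bio_iov_iter_get_pages() ends up in handle_mm_fault(), and thus in
gfs2_fault() or gfs2_page_mkwrite(), while the O_DIRECT read path is
already holding the per-inode lock in deferred mode:

	#define _GNU_SOURCE		/* for O_DIRECT */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* multiple of the block size; assumes the file is at least this long */
		const size_t len = 1 << 20;
		int fd = open("/mnt/gfs2/somefile", O_RDWR | O_DIRECT);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* destination buffer is a shared mapping of the same file */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/*
		 * O_DIRECT read into the not-yet-faulted-in mapping: pinning
		 * the destination pages goes through get_user_pages_fast(),
		 * i.e. into handle_mm_fault() and the gfs2 fault handlers,
		 * with no CPU exception involved.
		 */
		if (pread(fd, buf, len, 0) < 0)
			perror("pread");

		munmap(buf, len);
		close(fd);
		return 0;
	}

An O_DIRECT write whose source buffer is such a mapping hits the same
get_user_pages_fast() path.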

Is the above an accurate description of the mainline situation there?
In particular, the normal read path doesn't seem to bother with the
cluster locks at all.  What exactly are those cluster locks for in the
O_DIRECT read case?
