Message-Id: <20061010231036.66f609ea.akpm@osdl.org>
Date:	Tue, 10 Oct 2006 23:10:36 -0700
From:	Andrew Morton <akpm@...l.org>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Nick Piggin <npiggin@...e.de>,
	Linux Memory Management <linux-mm@...ck.org>,
	Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch 2/5] mm: fault vs invalidate/truncate race fix

On Wed, 11 Oct 2006 15:50:11 +1000
Nick Piggin <nickpiggin@...oo.com.au> wrote:

> Andrew Morton wrote:
> 
> >On Tue, 10 Oct 2006 16:21:49 +0200 (CEST)
> >Nick Piggin <npiggin@...e.de> wrote:
> >
> >
> >>--- linux-2.6.orig/mm/filemap.c
> >>+++ linux-2.6/mm/filemap.c
> >>@@ -1392,9 +1392,10 @@ struct page *filemap_nopage(struct vm_ar
> >> 	unsigned long size, pgoff;
> >> 	int did_readaround = 0, majmin = VM_FAULT_MINOR;
> >> 
> >>+	BUG_ON(!(area->vm_flags & VM_CAN_INVALIDATE));
> >>+
> >> 	pgoff = ((address-area->vm_start) >> PAGE_CACHE_SHIFT) + area->vm_pgoff;
> >> 
> >>-retry_all:
> >> 	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
> >> 	if (pgoff >= size)
> >> 		goto outside_data_content;
> >>@@ -1416,7 +1417,7 @@ retry_all:
> >> 	 * Do we have something in the page cache already?
> >> 	 */
> >> retry_find:
> >>-	page = find_get_page(mapping, pgoff);
> >>+	page = find_lock_page(mapping, pgoff);
> >>
> >
> >Here's a little problem.  Locking the page in the pagefault handler takes
> >our deadlock while writing from a mmapped copy of the page into the same
> >page from "extremely hard to hit" to "super-easy to hit".  Try running
> >write-deadlock-demo.c from
> >http://www.zip.com.au/~akpm/linux/patches/stuff/ext3-tools.tar.gz
> >
> >It conveniently deadlocks while holding mmap_sem, so `ps' gets stuck too.
> >
> >So this whole idea of locking the page in the fault handler is off the
> >table until we fix that deadlock for real.
> >
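
For illustration only, a minimal userspace sketch of the scenario described
above; this is a hypothetical reproduction, not the write-deadlock-demo.c
referenced in the quoted text:

	#include <fcntl.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("testfile", O_RDWR | O_CREAT, 0644);

		if (fd < 0 || ftruncate(fd, 4096) < 0)
			exit(1);

		/* Map the file but do not touch the mapping, so the first
		 * access to 'p' has to go through the fault handler. */
		char *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			exit(1);

		/* write() locks the pagecache page for offset 0, then
		 * copy_from_user() faults on 'p'.  With find_lock_page()
		 * in filemap_nopage(), the fault handler tries to lock
		 * that very same page: deadlock. */
		write(fd, p, 4096);
		return 0;
	}
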
> 
> OK. Can it sit in -mm for now, though?

argh.  It took me two goes to unpickle all the bits and pieces (please
patch things like cachefiles separately, unless you want your stuff to be
merged after that stuff) and now I've gone and deleted it all.

Maybe later?  We do have that infinite-loop-on-EIO to look at as well.

> Or is this deadlock less theoretical
> than it sounds?

I _think_ people have hit it in the wild, due to memory pressure.

But no, it's a silly thing which will only hit when people are running
silly tests under silly amounts of load.

Or if they're trying to kill your computer...

> At any rate, thanks for catching this.
> 
> >  Coincidentally I started coding
> >a fix for that a couple of weeks ago, but spent too much time with my nose
> >in other people's crap to get around to writing my own crap.
> >
> >The basic idea is
> >
> >- revert the recent changes to the core write() code (the ones which
> >  killed writev() performance, especially on NFS overwrites).
> >
> >- clean some stuff up
> >
> >- modify the core of write() so that instead of doing copy_from_user(),
> >  we do inc_preempt_count();copy_from_user_inatomic().  So we never enter
> >  the pagefault handler while holding the lock on the pagecache page.
> >
> >  If the fault happens, we run commit_write() on however much stuff we
> >  managed to copy and then go back and try to fault the target page back in
> >  again.  Repeat for ten times then give up.
> >
> 
> Without looking at any code, perhaps we could instead run get_user_pages
> and copy the memory that way.

That would certainly work, but we've always shied away from doing that
because of the performance implications.
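
For illustration, a rough sketch of that pinning approach, assuming the
2.6-era get_user_pages() calling convention (one source page only, error
handling and page arithmetic omitted):

	struct page *src;
	int ret;

	/* Pin the user source page while no pagecache page lock is held;
	 * get_user_pages() itself runs the fault handler. */
	down_read(&current->mm->mmap_sem);
	ret = get_user_pages(current, current->mm,
			     (unsigned long)buf & PAGE_MASK, 1,
			     0 /* not writing to the user page */, 0,
			     &src, NULL);
	up_read(&current->mm->mmap_sem);

	/* With 'src' pinned (and later kmap()ed), the copy into the locked
	 * pagecache page can no longer fault.  Remember to
	 * page_cache_release() it afterwards. */

Taking mmap_sem and walking the page tables for every write is where the
performance worry above comes from.
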

> We'd still want to try the initial copy_from_user, because the TLB is
> quite likely to exist or at least the pte will exist so the low level TLB
> refill can reach it - so we don't want to walk the pagetables manually if
> we can help it.

Yeah, that's an alternative to the fault-it-in-ten-times-then-give-up
approach.
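
Roughly, that give-up-after-ten-attempts idea would look like the sketch
below; it is only an illustration, and the helpers copy_into_page() and
fault_in_user_readable() are made-up names, not anything in the tree:

	static size_t copy_into_page(struct page *page, unsigned offset,
				     const char __user *buf, size_t bytes)
	{
		char *kaddr = kmap_atomic(page, KM_USER0);
		size_t left;

		/* With the preempt count raised, a fault bails out via the
		 * exception fixup instead of entering the (page-locking)
		 * fault handler, so we just get a short copy back. */
		inc_preempt_count();
		left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
		dec_preempt_count();
		kunmap_atomic(kaddr, KM_USER0);

		return bytes - left;
	}

	...

	int retries = 10;

	do {
		size_t copied = copy_into_page(page, offset, buf, bytes);

		/* Commit whatever we managed to copy, then drop the lock. */
		a_ops->commit_write(file, page, offset, offset + copied);
		unlock_page(page);

		if (copied == bytes)
			break;

		/* Fault the source back in while no page lock is held. */
		if (fault_in_user_readable(buf + copied, bytes - copied))
			return -EFAULT;

		/* ... re-find, re-lock and prepare_write() the page, adjust
		 * buf/offset/bytes, and go around again ... */
	} while (--retries);
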

> At that point, if we end up doing the get_user_pages thing, do we even need
> to do the intermediate commit_write()?

Yes, we will.  get_user_pages() will run the pagefault handler, which will
lock the page, so we're back to square one.

> Or just do the whole copy (the partially copied data is going to be in
> cache on physically indexed caches anyway, so it will be very low cost to
> copy again). And it should be a reasonably unlikely path... but I'll
> instrument it.

I'm not sure what you're suggesting here.

> >  It gets tricky because it means that we'll need to go back to zeroing
> >  out the uncopied part of the pagecache page before
> >  commit_write+unlock_page().  This will resurrect the recently-fixed
> >  problem where userspace can fleetingly see a bunch of zeroes in pagecache
> >  where it expected to see either the old data or the new data.
> >
> >  But I don't think that problem was terribly serious, and we can improve
> >  the situation quite a lot by not doing that zeroing if the page is
> >  already up-to-date.
> >
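
For illustration, the zeroing described above might look roughly like this
fragment (2.6-era kmap_atomic() interface assumed; 'copied' and 'bytes' as
in the earlier sketch):

	/* Zero the uncopied tail before commit_write()+unlock_page(), but
	 * only when the page was not already uptodate, so the window in
	 * which readers can see transient zeroes stays small. */
	if (copied < bytes && !PageUptodate(page)) {
		char *kaddr = kmap_atomic(page, KM_USER0);

		memset(kaddr + offset + copied, 0, bytes - copied);
		kunmap_atomic(kaddr, KM_USER0);
	}
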
> >Anyway, if you're feeling up to it I'll document the patches I have and hand
> >them over - they're not making much progress here.
> >
> 
> Yeah I'll have a go.

Thanks.
