Message-Id: <E1PE5sG-0005Em-Qb@pomaz-ex.szeredi.hu>
Date: Thu, 04 Nov 2010 20:53:28 +0100
From: Miklos Szeredi <miklos@...redi.hu>
To: Andrea Arcangeli <aarcange@...hat.com>
CC: dave@...ux.vnet.ibm.com, miklos@...redi.hu,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, shenlinf@...ibm.com,
volobuev@...ibm.com, mel@...ux.vnet.ibm.com, dingc@...ibm.com,
lnxninja@...ibm.com
Subject: Re: Deadlocks with transparent huge pages and userspace fs daemons
On Thu, 4 Nov 2010, Andrea Arcangeli wrote:
> On Wed, Nov 03, 2010 at 01:43:25PM -0700, Dave Hansen wrote:
> > some IBM testers ran into some deadlocks. It appears that the
> > khugepaged process is trying to migrate one of a filesystem daemon's
> > pages while khugepaged holds the daemon's mmap_sem for write.
>
> Correct. So now I'm wondering what happens if some library of this
> daemon happens to execute a munmap that calls split_vma and allocates
> memory while holding the mmap_sem, and the memory allocation triggers
> I/O that will have to be executed by the daemon.
mmap_sem is not really relevant here(*); the page lock is. And in
vmscan.c there is not a single blocking lock_page().
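To illustrate the pattern (only a sketch, not the actual mm/vmscan.c
code, and try_to_reclaim_page() is a made-up name): reclaim only
trylocks a page and skips it if somebody else holds the lock, so it
never sleeps waiting on a page locked by the fs daemon.

#include <linux/mm.h>
#include <linux/pagemap.h>

static bool try_to_reclaim_page(struct page *page)
{
	if (!trylock_page(page))
		return false;		/* page is busy: skip it, move on */

	/* ... attempt writeback/unmap/free under the page lock ... */

	unlock_page(page);
	return true;
}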
Also, as I mentioned, fuse does writeback in a special way: it copies
dirty pages to non-page-cache pages, which don't interact with reclaim
in any way. Fuse writeback is instantaneous from the reclaim PoV.
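Very roughly (heavily simplified, not the real fs/fuse/file.c code; the
queue_request_with_page() helper is just a stand-in for handing the
copy to the userspace daemon), the scheme is:

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

static void queue_request_with_page(struct page *tmp);	/* stand-in */

static int fuse_style_writepage(struct page *page)
{
	/* private copy, never part of the page cache */
	struct page *tmp = alloc_page(GFP_NOFS | __GFP_HIGHMEM);

	if (!tmp)
		return -ENOMEM;

	set_page_writeback(page);
	copy_highpage(tmp, page);	/* snapshot the dirty data */

	/* from here on the daemon only ever touches 'tmp' */
	queue_request_with_page(tmp);

	/* writeback on the page cache page ends right away, so
	   reclaim never has to wait for the userspace daemon */
	end_page_writeback(page);
	unlock_page(page);
	return 0;
}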
> I think this could be fixed in userland, this applies to openvpn too
> if used as nfs backend.
How?
Thanks,
Miklos
(*) In the original gpfs trace it is relevant, but only because the
page migration is triggered by khugepaged. In the reproduced example
the page migration is triggered directly by an allocation. Since page
migration does a blocking lock_page(), there's really no way to avoid
a deadlock in that case.
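For contrast with the reclaim behaviour above (again only an
illustration, not the real mm/migrate.c; migrate_one_page_sketch() is
a made-up name), this is the shape of the problem:

#include <linux/mm.h>
#include <linux/pagemap.h>

static void migrate_one_page_sketch(struct page *page)
{
	/*
	 * Unlike reclaim's trylock_page(), this sleeps until whoever
	 * holds the page lock releases it.  If that holder is the fs
	 * daemon, and the daemon is itself stuck behind this very
	 * allocation (or behind mmap_sem held by khugepaged), neither
	 * side can ever make progress.
	 */
	lock_page(page);

	/* ... copy the contents to a new page, rewrite the mappings ... */

	unlock_page(page);
}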