Message-ID: <20170302143949.GP16328@bombadil.infradead.org>
Date: Thu, 2 Mar 2017 06:39:49 -0800
From: Matthew Wilcox <willy@...radead.org>
To: "byungchul.park" <byungchul.park@....com>
Cc: 'Peter Zijlstra' <peterz@...radead.org>, mingo@...nel.org,
tglx@...utronix.de, walken@...gle.com, boqun.feng@...il.com,
kirill@...temov.name, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, npiggin@...il.com, kernel-team@....com
Subject: Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature

On Thu, Mar 02, 2017 at 01:45:35PM +0900, byungchul.park wrote:
> From: Matthew Wilcox [mailto:willy@...radead.org]
> > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > > (And we should not be returning to userspace with locks held anyway --
> > > lockdep already has a check for that).
> >
> > Don't we return to userspace with page locks held, eg during async
> > directio?
>
> Hello,
>
> I think the check on returning to userspace with crosslocks held
> should be an exception. Don't you think so?

Oh yes. We have to keep the pages locked during reads, and we have to
return to userspace before I/O is complete, therefore we have to return
to userspace with pages locked. They'll be unlocked by the interrupt
handler in page_endio().

Speaking of which ... this feature is far too heavy for use in production
on pages. You're almost trebling the size of struct page. Can we
do something like make all struct pages share the same lockdep_map?
We'd have to avoid complaining about holding one crosslock while
acquiring another of the same class, but with millions of pages in the
system, it must surely be creating a gargantuan graph right now?