Message-ID: <20170302235003.GE28562@X58A-UD3R>
Date: Fri, 3 Mar 2017 08:50:03 +0900
From: Byungchul Park <byungchul.park@....com>
To: Matthew Wilcox <willy@...radead.org>
CC: 'Peter Zijlstra' <peterz@...radead.org>, <mingo@...nel.org>,
<tglx@...utronix.de>, <walken@...gle.com>, <boqun.feng@...il.com>,
<kirill@...temov.name>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <iamjoonsoo.kim@....com>,
<akpm@...ux-foundation.org>, <npiggin@...il.com>,
<kernel-team@....com>
Subject: Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature
On Thu, Mar 02, 2017 at 06:39:49AM -0800, Matthew Wilcox wrote:
> On Thu, Mar 02, 2017 at 01:45:35PM +0900, byungchul.park wrote:
> > From: Matthew Wilcox [mailto:willy@...radead.org]
> > > On Tue, Feb 28, 2017 at 07:15:47PM +0100, Peter Zijlstra wrote:
> > > > (And we should not be returning to userspace with locks held anyway --
> > > > lockdep already has a check for that).
> > >
> > > Don't we return to userspace with page locks held, eg during async
> > > directio?
> >
> > Hello,
> >
> > I think that the check when returning to userspace with crosslocks held
> > should be an exception. Don't you think so?
>
> Oh yes. We have to keep the pages locked during reads, and we have to
> return to userspace before I/O is complete, therefore we have to return
> to userspace with pages locked. They'll be unlocked by the interrupt
> handler in page_endio().
Agree.
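
(For context, roughly how that unlock happens on the completion side. This
is only a sketch modelled on mpage_end_io() in fs/mpage.c around v4.10, not
code from this patch series:

#include <linux/bio.h>
#include <linux/pagemap.h>	/* page_endio() */

static void sketch_read_end_io(struct bio *bio)
{
	struct bio_vec *bvec;
	int i;

	bio_for_each_segment_all(bvec, bio, i)
		/* For a read, page_endio() marks the page uptodate (or
		 * records the error) and then calls unlock_page(). */
		page_endio(bvec->bv_page, op_is_write(bio_op(bio)),
			   bio->bi_error);

	bio_put(bio);
}

so the lock_page() done at submission time is released from interrupt
context, long after the submitting task has returned to userspace.)
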
> Speaking of which ... this feature is far too heavy for use in production
> on pages. You're almost trebling the size of struct page. Can we
> do something like make all struct pages share the same lockdep_map?
> We'd have to not complain about holding one crosslock and acquiring
> another one of the same type, but with millions of pages in the system,
> it must surely be creating a gargantuan graph right now?
Um.. I will try to make all page locks work with a single lockdep_map. That
is also what Peterz pointed out and what I was worried about when
implementing it. Thanks for your opinion.
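
Something along these lines is what I have in mind: one static lockdep_map
shared by every page lock, so lockdep sees a single "page lock" class
instead of one map per struct page. The wrapper names below are only
illustrative, and this ignores the crossrelease commit machinery itself:

#include <linux/lockdep.h>
#include <linux/mm_types.h>

/* One map shared by all pages; only meaningful with CONFIG_LOCKDEP. */
static struct lock_class_key __page_lock_key;
static struct lockdep_map page_lock_map =
	STATIC_LOCKDEP_MAP_INIT("page_lock", &__page_lock_key);

/* Hypothetical annotation hooks, called from lock_page()/unlock_page(). */
static inline void page_lock_acquire(struct page *page)
{
	lock_map_acquire(&page_lock_map);	/* exclusive acquire */
}

static inline void page_lock_release(struct page *page)
{
	lock_map_release(&page_lock_map);
}

As you said, with a single class lockdep would have to tolerate holding one
page lock while acquiring another of the same type, so that case needs to
be handled (e.g. with a nesting annotation) rather than reported.
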