Message-Id: <1178745690.14928.167.camel@localhost.localdomain>
Date: Thu, 10 May 2007 07:21:30 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Hugh Dickins <hugh@...itas.com>
Cc: Nick Piggin <npiggin@...e.de>, linux-arch@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [rfc] optimise unlock_page
> Not good enough, I'm afraid. It looks like Ben's right and you need
> a count - and counts in the page struct are a lot harder to add than
> page flags.
>
> I've now played around with the hangs on my three 4CPU machines
> (all of them in io_schedule below __lock_page, waiting on pages
> which were neither PG_locked nor PG_waiters when I looked).
>
> Seeing Ben's mail, I thought the answer would be just to remove
> the "_exclusive" from your three prepare_to_wait_exclusive()s.
> That helped, but it didn't eliminate the hangs.
There might be a way ... by having the flags manipulation always
atomically deal with PG_locked and PG_waiters together. This is possible
but we would need even more weirdo bitops abstractions from the arch I'm
afraid... unless we start using atomic_* rather than bitops in order to
manipulate multiple bits at a time.
Ben.