Message-ID: <7bf10ff685a06b28084a61dad0740015.squirrel@webmail-b.css.fujitsu.com>
Date: Mon, 28 Dec 2009 19:08:58 +0900 (JST)
From: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>
To: "Peter Zijlstra" <peterz@...radead.org>
Cc: "KAMEZAWA Hiroyuki" <kamezawa.hiroyu@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"minchan.kim@...il.com" <minchan.kim@...il.com>,
cl@...ux-foundation.org
Subject: Re: [RFC PATCH] asynchronous page fault.
Peter Zijlstra wrote:
> On Mon, 2009-12-28 at 09:36 +0900, KAMEZAWA Hiroyuki wrote:
>> Hmm? For single-threaded apps? This patch's purpose is not lockless
>> lookup; that's just part of the work. My purpose is avoiding false sharing.
>
> False sharing in the sense of the mmap_sem cacheline containing other
> variables? How could that ever be a problem for a single-threaded
> application?
>
It's not a problem at all; I just couldn't catch what you meant.
> For multi-threaded apps the contention on that cacheline is the largest
> issue, and moving it to a vma cacheline doesn't seem like a big
> improvement.
>
I feel mmap_sem's cacheline ping-pong is worse than a simple
atomic_inc(). Even on the uncontended path, __down_read() does three
writes to the same cacheline:
  write (spinlock)
  write (->sem_activity)
  write (unlock)
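
For reference, here is a minimal sketch of the spinlock-based rwsem
reader fast path, roughly in the spirit of lib/rwsem-spinlock.c of that
era (the struct layout and field names below are from memory and purely
illustrative, not copied from any particular tree). It shows the three
writes to the semaphore's cacheline that I mean:

#include <linux/spinlock.h>
#include <linux/list.h>

struct rw_semaphore {
	__s32			activity;	/* > 0: readers, -1: writer */
	spinlock_t		wait_lock;
	struct list_head	wait_list;
};

/* Sketch: even the uncontended reader path dirties the cacheline
 * holding the semaphore three times. */
static void __down_read(struct rw_semaphore *sem)
{
	spin_lock(&sem->wait_lock);		/* write 1: lock word */

	if (sem->activity >= 0 && list_empty(&sem->wait_list)) {
		/* fast path: grant the read lock */
		sem->activity++;		/* write 2: reader count */
	} else {
		/* slow path: queue ourselves and sleep (omitted) */
	}

	spin_unlock(&sem->wait_lock);		/* write 3: unlock */
}

Readers faulting on different CPUs keep bouncing that single line
between caches, which is the ping-pong above; a plain atomic_inc() on a
shared counter would still bounce the line, but only once per
acquisition.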
> You want something much finer grained than vmas; there are lots of apps
> working on a single (or very few) vma(s), leaving you with pretty much
> the same cacheline contention. Only now it's a different cacheline.
>
Yes, maybe; I hope I can find some magical scheme.
Using a per-cpu counter here, as Christoph did, may be an idea...
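
To illustrate what I have in mind, a very rough sketch (the helper names
mm_take_read()/mm_drop_read() and the use of percpu_counter here are just
my assumption, not Christoph's actual patch): readers touch only their
own CPU's counter on the fault fast path, and only the rare exclusive
side pays for summing across CPUs.

#include <linux/percpu_counter.h>

/* Hypothetical per-cpu reader count; percpu_counter_init(&mm_readers, 0)
 * would be called once at setup.  Names are illustrative only. */
static struct percpu_counter mm_readers;

static inline void mm_take_read(void)
{
	/* Fast path: bump only this CPU's local delta, so concurrent
	 * readers on different CPUs do not share a cacheline. */
	percpu_counter_inc(&mm_readers);
}

static inline void mm_drop_read(void)
{
	percpu_counter_dec(&mm_readers);
}

static inline bool mm_has_readers(void)
{
	/* Slow path for the exclusive side: sum all per-cpu deltas.
	 * The expensive walk over CPUs happens only here. */
	return percpu_counter_sum(&mm_readers) > 0;
}

Of course the hard part remains how the exclusive side (mmap/munmap and
friends) waits for readers to drain without reintroducing a shared
cacheline; this sketch does not address that.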
Thanks,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/