Message-ID: <1373933551.4622.12.camel@buesod1.americas.hpqcorp.net>
Date: Mon, 15 Jul 2013 17:12:31 -0700
From: Davidlohr Bueso <davidlohr.bueso@...com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Gibson <david@...son.dropbear.id.au>,
Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...hat.com>,
Michel Lespinasse <walken@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Konstantin Khlebnikov <khlebnikov@...nvz.org>,
Michal Hocko <mhocko@...e.cz>,
"AneeshKumarK.V" <aneesh.kumar@...ux.vnet.ibm.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Hillf Danton <dhillf@...il.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Anton Blanchard <anton@...ba.org>
Subject: Re: [PATCH] mm/hugetlb: per-vma instantiation mutexes
On Mon, 2013-07-15 at 16:08 -0700, Andrew Morton wrote:
> On Mon, 15 Jul 2013 17:24:32 +1000 David Gibson <david@...son.dropbear.id.au> wrote:
>
> > I have previously proposed a correct method of improving scalability,
> > although it doesn't eliminate the lock. That's to use a set of hashed
> > mutexes.
>
> Yep - hashing the mutexes is an obvious and nicely localized way of
> improving this. It's a tweak, not a design change.
>
> The changelog should describe the choice of the hash key with great
> precision, please. It's important and is the first thing which
> reviewers and readers will zoom in on.
>
> Should the individual mutexes be cacheline aligned? Depends on the
> acquisition frequency, I guess. Please let's work through that.
In my test cases, involving different RDBMS workloads, I'm seeing around
114k acquisitions of the mutex.
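If alignment matters at that rate, padding each slot to a cacheline is
cheap. A rough sketch of what the table could look like (the names
fault_mutex_table/num_fault_mutexes and the sizing here are illustrative,
not the actual patch):

#include <linux/cache.h>
#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/slab.h>

/* One mutex per hash slot, padded to avoid false sharing between slots. */
struct fault_mutex {
	struct mutex lock;
} ____cacheline_aligned_in_smp;

static struct fault_mutex *fault_mutex_table;
static unsigned int num_fault_mutexes;

static int __init fault_mutex_table_init(void)
{
	int i;

	/* Scale with CPU count; the exact sizing is open for discussion. */
	num_fault_mutexes = 8 * num_possible_cpus();
	fault_mutex_table = kmalloc(num_fault_mutexes *
				    sizeof(*fault_mutex_table), GFP_KERNEL);
	if (!fault_mutex_table)
		return -ENOMEM;

	for (i = 0; i < num_fault_mutexes; i++)
		mutex_init(&fault_mutex_table[i].lock);
	return 0;
}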
>
> Let's not damage uniprocessor kernels too much. AFAICT the main offender
> here is fault_mutex_hash(), which is the world's most obfuscated "return
> 0;".
I guess we could add an ifndef CONFIG_SMP check to the function and
return 0 right away. That would eliminate any overhead in
fault_mutex_hash().
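Something along these lines, perhaps (only a sketch: the signature, the
hash inputs and num_fault_mutexes are illustrative, not the exact code
from the forward-ported patch):

#include <linux/fs.h>
#include <linux/jhash.h>
#include <linux/kernel.h>

static u32 fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
{
#ifdef CONFIG_SMP
	u32 key[2];

	key[0] = (u32)(unsigned long)mapping;
	key[1] = (u32)idx;

	/* Spread faults on different (mapping, index) pairs across slots. */
	return jhash2(key, ARRAY_SIZE(key), 0) % num_fault_mutexes;
#else
	/* UP has a single mutex, so always slot 0, with zero hashing cost. */
	return 0;
#endif
}

In the fault path that would just become
mutex_lock(&fault_mutex_table[fault_mutex_hash(mapping, idx)].lock)
instead of taking the single instantiation mutex.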
>
> > It wasn't merged before, but I don't recall the reasons
> > why.
So I've forward-ported the patch (will send it once everyone agrees that
the matter is settled), including the changes Anton Blanchard added
exactly two years ago:
https://lkml.org/lkml/2011/7/15/31
My tests show that the number of lock contentions drops from ~11k to
around 500, so this approach alleviates a lot of the bottleneck. I've
also run it against libhugetlbfs without any regressions.
Thanks,
Davidlohr