Message-ID: <1381186674.11046.105.camel@schen9-DESK>
Date: Mon, 07 Oct 2013 15:57:54 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Alex Shi <alex.shi@...aro.org>,
Andi Kleen <andi@...stfloor.org>,
Michel Lespinasse <walken@...gle.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Matthew R Wilcox <matthew.r.wilcox@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Rik van Riel <riel@...hat.com>,
Peter Hurley <peter@...leysoftware.com>,
"Paul E.McKenney" <paulmck@...ux.vnet.ibm.com>,
Jason Low <jason.low2@...com>,
Waiman Long <Waiman.Long@...com>, linux-kernel@...r.kernel.org,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v8 0/9] rwsem performance optimizations
On Thu, 2013-10-03 at 09:32 +0200, Ingo Molnar wrote:
> * Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
> > For version 8 of the patchset, we included the patch from Waiman to
> > streamline wakeup operations and also optimize the MCS lock used in
> > rwsem and mutex.
>
> I'd be feeling a lot easier about this patch series if you also had
> performance figures that show how mmap_sem is affected.
>
> These:
>
> > Tim got the following improvement for exim mail server
> > workload on 40 core system:
> >
> > Alex+Tim's patchset: +4.8%
> > Alex+Tim+Waiman's patchset: +5.3%
>
> appear to be mostly related to the anon_vma->rwsem. But once that lock is
> changed to an rwlock_t, this measurement falls away.
>
> Peter Zijlstra suggested the following testcase:
>
> ===============================>
> In fact, try something like this from userspace:
>
> n-threads:
>
> 	pthread_mutex_lock(&mutex);
> 	foo = mmap();
> 	pthread_mutex_unlock(&mutex);
>
> 	/* work */
>
> 	pthread_mutex_lock(&mutex);
> 	munmap(foo);
> 	pthread_mutex_unlock(&mutex);
>
> vs
>
> n-threads:
>
> foo = mmap();
> /* work */
> munmap(foo);
Ingo,
I ran the vanilla kernel, the kernel with all rwsem patches, and the
kernel with all patches except the optimistic spin one.
I am listing two presentations of the data. Please note that
there is about a 5% run-to-run variation.
% change in performance vs vanilla kernel

#threads        all             without optspin

mmap only
1               1.9%            1.6%
5              43.8%            2.6%
10             22.7%           -3.0%
20            -12.0%           -4.5%
40            -26.9%           -2.0%

mmap with mutex acquisition
1              -2.1%           -3.0%
5              -1.9%            1.0%
10              4.2%           12.5%
20             -4.1%            0.6%
40             -2.8%           -1.9%
The optimistic spin case does very well at low to moderate contention,
but worse under very heavy contention in the pure mmap case.
For the case with the pthread mutex, there is not much change from the
vanilla kernel.
% change in performance of the mmap with pthread-mutex vs pure mmap

#threads      vanilla         all             without optspin
1               3.0%         -1.0%           -1.7%
5               7.2%        -26.8%            5.5%
10              5.2%        -10.6%           22.1%
20              6.8%         16.4%           12.5%
40             -0.2%         32.7%            0.0%
In general, the vanilla and no-optspin cases perform better with the
pthread mutex. For the optspin case, mmap with the pthread mutex is
worse at low to moderate contention and better at high contention.
A sketch of the kind of testcase used is included below.
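
For reference, here is a minimal sketch of the kind of userspace
testcase Peter described above. It is illustrative only -- the thread
count, iteration count, and mapping size are arbitrary, and this is not
the exact harness behind the numbers above:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define ITERS    100000
#define MAP_SIZE (64 * 1024)

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int use_mutex;

static void *worker(void *arg)
{
	int i;

	for (i = 0; i < ITERS; i++) {
		char *foo;

		/* optionally serialize mmap/munmap with a pthread mutex */
		if (use_mutex)
			pthread_mutex_lock(&mutex);
		foo = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (use_mutex)
			pthread_mutex_unlock(&mutex);
		if (foo == MAP_FAILED)
			continue;

		/* work: touch the mapping so the pages are faulted in */
		memset(foo, 0, MAP_SIZE);

		if (use_mutex)
			pthread_mutex_lock(&mutex);
		munmap(foo, MAP_SIZE);
		if (use_mutex)
			pthread_mutex_unlock(&mutex);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	int i, nthreads = argc > 1 ? atoi(argv[1]) : 1;
	pthread_t *threads;

	if (nthreads < 1)
		nthreads = 1;
	use_mutex = argc > 2 && !strcmp(argv[2], "mutex");

	threads = calloc(nthreads, sizeof(*threads));
	for (i = 0; i < nthreads; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_join(threads[i], NULL);

	free(threads);
	return 0;
}

Build with "gcc -O2 -pthread" and run as "./a.out <nthreads>" for the
pure-mmap variant or "./a.out <nthreads> mutex" for the serialized one;
the two variants roughly correspond to the "mmap only" and "mmap with
mutex acquisition" cases in the tables above.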
Tim
>
> I've had reports that the former was significantly faster than the
> latter.
> <===============================
>
> This could be put into a standalone testcase, or you could add it as a new
> subcommand of 'perf bench', which already has some pthread code, see for
> example in tools/perf/bench/sched-messaging.c. Adding:
>
> perf bench mm threads
>
> or so would be a natural thing to have.
>
> Thanks,
>
> Ingo