Message-ID: <20131009061551.GD7664@gmail.com>
Date: Wed, 9 Oct 2013 08:15:51 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Alex Shi <alex.shi@...aro.org>,
Andi Kleen <andi@...stfloor.org>,
Michel Lespinasse <walken@...gle.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Matthew R Wilcox <matthew.r.wilcox@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Rik van Riel <riel@...hat.com>,
Peter Hurley <peter@...leysoftware.com>,
"Paul E.McKenney" <paulmck@...ux.vnet.ibm.com>,
Jason Low <jason.low2@...com>,
Waiman Long <Waiman.Long@...com>, linux-kernel@...r.kernel.org,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v8 0/9] rwsem performance optimizations
* Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> Ingo,
>
> I ran the vanilla kernel, the kernel with all rwsem patches and the
> kernel with all patches except the optimistic spin one. I am listing
> two presentations of the data. Please note that there is about 5%
> run-run variation.
>
> % change in performance vs vanilla kernel
> #threads        all     without optspin
> mmap only
>        1       1.9%        1.6%
>        5      43.8%        2.6%
>       10      22.7%       -3.0%
>       20     -12.0%       -4.5%
>       40     -26.9%       -2.0%
> mmap with mutex acquisition
>        1      -2.1%       -3.0%
>        5      -1.9%        1.0%
>       10       4.2%       12.5%
>       20      -4.1%        0.6%
>       40      -2.8%       -1.9%
Silly question: how do the two methods of starting N threads compare to
each other? Do they have identical runtimes? I think PeterZ's point was
that the pthread_mutex case, despite adding extra serialization, actually
runs faster in some circumstances.
Also, mind posting the testcase? What 'work' do the threads do - clear
some memory area? How big is the memory area?
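(For concreteness: the sort of per-thread loop I have in mind is roughly the
sketch below - the area size, iteration count and touch pattern are purely my
assumptions, not your actual testcase.)

    #include <string.h>
    #include <sys/mman.h>

    #define MAP_SIZE   (64UL * 1024 * 1024)  /* guessed area size */
    #define ITERATIONS 1000                  /* guessed repeat count */

    static void *worker(void *arg)
    {
            int i;

            for (i = 0; i < ITERATIONS; i++) {
                    /* mmap()/munmap() take mmap_sem for write ... */
                    char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                    if (p == MAP_FAILED)
                            return NULL;
                    /*
                     * ... and every page fault while clearing the area
                     * takes it for read:
                     */
                    memset(p, 0, MAP_SIZE);
                    munmap(p, MAP_SIZE);
            }
            return NULL;
    }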
I'd expect this to be about mmap()s large enough that page fault processing
becomes mmap_sem bound: the serialization via pthread_mutex sets up a 'train'
of threads in one case, while in the regular case the threads start in
parallel and run straight into mmap_sem contention.
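I.e. the difference between your two columns, as I picture it (again just a
sketch building on the one above - where exactly the mutex sits in your
testcase is my guess):

    #include <pthread.h>

    /*
     * "mmap only": each of the N threads runs worker() above concurrently,
     * so they all pile into mmap_sem at the same time.
     *
     * "mmap with mutex acquisition": a global pthread mutex around the same
     * work turns the threads into a 'train' - only one of them is on the
     * mmap_sem path at any time:
     */
    static pthread_mutex_t testcase_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker_serialized(void *arg)
    {
            int i;

            for (i = 0; i < ITERATIONS; i++) {
                    pthread_mutex_lock(&testcase_lock);

                    char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    if (p != MAP_FAILED) {
                            memset(p, 0, MAP_SIZE);  /* same faulting as above */
                            munmap(p, MAP_SIZE);
                    }

                    pthread_mutex_unlock(&testcase_lock);
            }
            return NULL;
    }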
So I'd expect this to be a rather sensitive workload, and you'd have to
actively engineer it to hit the effect PeterZ mentioned. I could imagine MPI
workloads running into such patterns - but not deterministically.
Only once you've convinced yourself that you are reliably hitting that kind
of effect on the vanilla kernel should the effects of an improved rwsem
implementation be measured.
Thanks,
Ingo