Message-Id: <20140602130832.9328cfef977b7ed837d59321@linux-foundation.org>
Date: Mon, 2 Jun 2014 13:08:32 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Davidlohr Bueso <davidlohr@...com>
Cc: mingo@...nel.org, peterz@...radead.org, riel@...hat.com,
mgorman@...e.de, aswin@...com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/5] mm: i_mmap_mutex to rwsem
On Thu, 29 May 2014 19:20:15 -0700 Davidlohr Bueso <davidlohr@...com> wrote:
> On Thu, 2014-05-22 at 20:33 -0700, Davidlohr Bueso wrote:
> > This patchset extends the work started by Ingo Molnar in late 2012,
> > optimizing the anon-vma mutex lock, converting it from an exclusive mutex
> > to an rwsem, and sharing the lock for read-only paths when walking the
> > vma-interval tree. More specifically commits 5a505085 and 4fc3f1d6.
> >
> > The i_mmap_mutex has similar responsibilities to the anon-vma lock,
> > protecting file-backed pages. We can therefore apply the same locking
> > technique: convert the mutex to an rwsem and share the lock when possible.
> >
> > With the new optimistic spinning property we have in rwsems, we no longer
> > take a hit in performance when using this lock, and we can therefore
> > safely do the conversion. Tests show no throughput regressions in aim7 or
> > pgbench runs, and we see gains from sharing the lock: roughly +15% in
> > disk workloads for over 1000 users on an 8-socket Westmere system.
> >
> > This patchset applies on linux-next-20140522.
>
> ping? Andrew any chance of getting this in -next?
(top-posting repaired)
It was a bit late for 3.16 back on May 26, when you said "I will dig
deeper (probably for 3.17 now)". So, please take another look at the
patch factoring and let's get this underway for -rc1.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/