Message-ID: <AANLkTinxkt_ttN68cLFqS8ubqpyQATIbtRMlILK10g69@mail.gmail.com>
Date: Fri, 18 Jun 2010 10:40:02 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>, yanmin.zhang@...el.com
Subject: Re: [PATCH v3 0/2] tmpfs: Improve tmpfs scalability
On Fri, Jun 18, 2010 at 8:56 AM, Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> This patch series helps to resolve a scalability problem
> in tmpfs. With these patches, Aim7 fserver throughput on tmpfs
> improved by 270% on a 4-socket, 32-core NHM-EX system.
>
> In the current implementation of tmpfs, whenever we
> get a new page, stat_lock in shmem_sb_info needs to be acquired.
> This causes a lot of lock contention when multiple
> threads use tmpfs simultaneously, which makes
> systems with a large number of cpus scale poorly.
> Almost 75% of cpu time was spent contending on
> stat_lock when we ran the Aim7 fserver load with 128 threads
> on a 4-socket, 32-core NHM-EX system.
>
> We made use of the percpu_counter library for used-blocks accounting,
> so blocks can be charged to and returned from local per-cpu counters
> without lock acquisition.
>
> The first patch in the series adds a function that provides
> fast but accurate comparison for the percpu_counter library.
>
> The second patch updates the shmem code of tmpfs to use the
> percpu_counter library, improving tmpfs performance.
>
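As a rough illustration of the approach described above (the names below
are mine, not necessarily those used in the patches): keep the used-block
count in a percpu_counter, charge blocks with a lockless per-cpu add, and
fall back to the precise, locked sum only when the approximate value is
too close to the limit to decide.

/*
 * Sketch only: illustrates the percpu_counter pattern described above.
 * Function and parameter names here are hypothetical, not taken
 * verbatim from the patches.
 */
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/cpumask.h>
#include <linux/percpu_counter.h>

/*
 * Fast but accurate comparison: use the approximate counter value when
 * the per-cpu error margin cannot change the outcome, and take the
 * precise (locked) sum only when the counter is close to the limit.
 */
static int fast_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
	s64 count = percpu_counter_read(fbc);

	/* Each cpu may hold up to percpu_counter_batch in its local delta. */
	if (abs(count - rhs) > percpu_counter_batch * num_online_cpus())
		return count > rhs ? 1 : -1;

	count = percpu_counter_sum(fbc);	/* precise, takes fbc->lock */
	if (count > rhs)
		return 1;
	return count < rhs ? -1 : 0;
}

/* Charging blocks no longer needs a superblock spinlock on the fast path. */
static int charge_blocks(struct percpu_counter *used_blocks,
			 s64 max_blocks, long pages)
{
	if (fast_counter_compare(used_blocks, max_blocks - pages) > 0)
		return -ENOSPC;
	percpu_counter_add(used_blocks, pages);
	return 0;
}

On the common path this is just a per-cpu add plus an approximate read,
so stat_lock-style contention only shows up when the filesystem is
nearly full and the precise sum actually matters.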
The patch series seems good to me.
When you send a patch, please make sure the Cc list is complete.
tmpfs is related to mm, so next time you send a patch, please Cc linux-mm.
You can use ./scripts/get_maintainer.pl -f mm/shmem.c to find the right recipients.
Thanks.
--
Kind regards,
Minchan Kim