Message-Id: <20100621131802.c2f45c82.akpm@linux-foundation.org>
Date: Mon, 21 Jun 2010 13:18:02 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>, yanmin.zhang@...el.com
Subject: Re: [PATCH v3 2/2] tmpfs: Make tmpfs scalable with percpu_counter for used blocks
On Thu, 17 Jun 2010 16:56:33 -0700
Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> The current implementation of tmpfs does not scale.
> We found that stat_lock is contended by multiple threads
> whenever a new page is needed, leading to useless spinning
> inside this spinlock.
>
> This patch uses the percpu_counter library to maintain a local
> count of used blocks, speeding up the getting and returning
> of pages. Acquiring stat_lock then becomes unnecessary
> when getting and returning blocks, which improves the performance
> of tmpfs on systems with many CPUs. On a 4-socket,
> 32-core NHM-EX system, we saw a 270% improvement.
So it had exactly the same performance as the token-jar approach?
It'd be good if the changelog were to mention the inaccuracy issues.
Describe their impact, if any.
Are you actually happy with this overall approach?
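For anyone following along, the percpu_counter pattern being adopted is
roughly this (an illustrative sketch, not the patch itself):

	#include <linux/percpu_counter.h>

	struct percpu_counter used_blocks;

	/* at mount time: counter starts at zero */
	percpu_counter_init(&used_blocks, 0);

	/*
	 * fast path: accumulates in a per-cpu delta; the counter's
	 * internal lock is only taken when a delta exceeds the batch
	 */
	percpu_counter_add(&used_blocks, pages);

	/*
	 * approximate read: lockless, may be off by up to roughly
	 * batch * num_online_cpus()
	 */
	s64 approx = percpu_counter_read(&used_blocks);

	/* exact read: folds all per-cpu deltas under the counter's lock */
	s64 exact = percpu_counter_sum(&used_blocks);

	/* at unmount time */
	percpu_counter_destroy(&used_blocks);

The inaccuracy in question comes from that batch window:
percpu_counter_read() can over- or under-report while deltas sit
unfolded on other cpus.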
>
> ...
>
> @@ -2258,9 +2254,8 @@ static int shmem_remount_fs(struct super_block *sb, int *flags, char *data)
> return error;
>
> spin_lock(&sbinfo->stat_lock);
> - blocks = sbinfo->max_blocks - sbinfo->free_blocks;
> inodes = sbinfo->max_inodes - sbinfo->free_inodes;
> - if (config.max_blocks < blocks)
> + if (config.max_blocks < percpu_counter_sum(&sbinfo->used_blocks))
This could actually use percpu_counter_compare()? (sketch below)
> goto out;
> if (config.max_inodes < inodes)
> goto out;
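That is, the used_blocks check above could become something like
(untested):

	if (percpu_counter_compare(&sbinfo->used_blocks,
				   config.max_blocks) > 0)
		goto out;

percpu_counter_compare() does the cheap percpu_counter_read() first and
only falls back to the expensive percpu_counter_sum() when the
approximate value is too close to the limit to call.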