Message-ID: <1276298999.2385.71.camel@mudge.jf.intel.com>
Date: Fri, 11 Jun 2010 16:29:59 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
Andi Kleen <ak@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH v2 1/2] tmpfs: Quick token library to allow scalable
retrieval of tokens from token jar
On Fri, 2010-06-11 at 15:26 -0700, Andrew Morton wrote:
> }
> @@ -422,11 +423,11 @@ static swp_entry_t *shmem_swp_alloc(stru
> */
> if (sbinfo->max_blocks) {
> spin_lock(&sbinfo->stat_lock);
> - if (sbinfo->free_blocks <= 1) {
> + if (percpu_counter_read(&sbinfo->free_blocks) <= 1) {
> spin_unlock(&sbinfo->stat_lock);
Thanks for pointing me to this alternative implementation.
However, looking at the percpu counter code, it appears that
percpu_counter_read is imprecise: the deltas still sitting in the
per-CPU counters are not included, so the value read may be well below
the true number of free blocks. With the patch above, we could fail
the test and refuse to allocate pages even though additional pages are
actually available. Using percpu_counter_sum would give the precise
count, but it acquires the percpu_counter's spin lock and would slow
down this performance-critical path. If we feel we can tolerate
fuzziness around the size configured for tmpfs, then this could be the
way to go.
The qtoken library implementation, however, enforces a precise limit
while keeping the per-CPU counter's speed advantage.
Tim