Date:	Mon, 21 Jun 2010 19:52:11 -0700
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>, yanmin.zhang@...el.com
Subject: Re: [PATCH v3 2/2] tmpfs: Make tmpfs scalable with percpu_counter
 for used blocks

On Mon, 2010-06-21 at 13:18 -0700, Andrew Morton wrote:
> On Thu, 17 Jun 2010 16:56:33 -0700
> Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> 
> > The current implementation of tmpfs is not scalable.
> > We found that stat_lock is contended by multiple threads
> > when we need to get a new page, leading to useless spinning
> > inside this spin lock.  
> > 
> > This patch makes use of the percpu_counter library to maintain local
> > count of used blocks to speed up getting and returning
> > of pages.  So the acquisition of stat_lock is unnecessary
> > for getting and returning blocks, improving the performance 
> > of tmpfs on system with large number of cpus.  On a 4 socket
> > 32 core NHM-EX system, we saw improvement of 270%.
> 
> So it had exactly the same performance as the token-jar approach?
> 

The performance numbers are almost identical; the difference is quite
small (within 1%).

> It'd be good if the changelog were to mention the inaccuracy issues. 
> Describe their impact, if any.

You are talking about the small chance that we may overshoot the limit
a bit? There's a slight chance of a race between threads: another
thread can allocate the last block after we have read the block count,
while we still think the used blocks are below the limit. The same
race can happen when we remount.
> 
> Are you actually happy with this overall approach?
> 
I think the qtoken approach can eliminate the small inaccuracy mentioned
above.  However, the inaccuracy is really quite small and transient
(it goes away once the used blocks are returned) and will not cause any
problem for tmpfs.  So I'm fine with this approach.

> >
> > ...
> >
> > @@ -2258,9 +2254,8 @@ static int shmem_remount_fs(struct super_block *sb, int *flags, char *data)
> >  		return error;
> >  
> >  	spin_lock(&sbinfo->stat_lock);
> > -	blocks = sbinfo->max_blocks - sbinfo->free_blocks;
> >  	inodes = sbinfo->max_inodes - sbinfo->free_inodes;
> > -	if (config.max_blocks < blocks)
> > +	if (config.max_blocks < percpu_counter_sum(&sbinfo->used_blocks))
> 
> This could actually use percpu_counter_compare()?
> 

Yeah, using percpu_counter_compare is probably cleaner.

Tim



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
