Date:	Wed, 26 May 2010 12:33:38 -0700
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH 2/2] tmpfs: Make tmpfs scalable with caches for free
 blocks

On Thu, 2010-05-20 at 16:13 -0700, Andrew Morton wrote:

> >
> > -		spin_lock(&sbinfo->stat_lock);
> > -		sbinfo->free_blocks += pages;
> > +		spin_lock(&inode->i_lock);
> > +		qtoken_return(&sbinfo->token_jar, pages);
> >  		inode->i_blocks -= pages*BLOCKS_PER_PAGE;
> > -		spin_unlock(&sbinfo->stat_lock);
> > +		spin_unlock(&inode->i_lock);
> 
> Well most of the calls into the qtoken layer occur under inode->i_lock.
> So did we really need that spinlock inside the qtoken library code?
> 
> It is a problem when library code such as qtoken performs its own
> internal locking.  We have learned that such code is much more useful
> and flexible if it performs no locking at all, and requires that
> callers provide the locking (lib/rbtree.c, lib/radix-tree.c,
> lib/prio_heap.c, lib/flex_array.c, etcetera).  Can we follow this
> approach with qtoken?
> 

Andrew,

The inode->i_lock protects only a single inode. The token jar is shared
by all inodes in the tmpfs mount, so we do not want to use
inode->i_lock to lock the entire token jar, for performance reasons.
In the qtoken scheme, the spinlock inside the qtoken library protects
only the free tokens in the token jar's common pool. Most of the time
this lock need not be taken, because we can operate on the tokens in
the token jar's per-cpu cache. We only need to take the lock when we
run out of tokens in the cache. The intelligence for managing the
cache lives in the library, which decides when it is necessary to lock
and access the free tokens in the common pool. It is better to keep
the locking decision in the library code than to expose it to the
caller; otherwise every caller would need to check whether tokens
should come from the cache or the common pool, duplicating the qtoken
library's logic.
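
For illustration, a user-space sketch of the idea (not the actual
qtoken implementation from the patch): a single cache stands in for
the per-cpu caches, a pthread mutex stands in for the kernel
spinlock, and the names (CACHE_BATCH, the struct layout) are
placeholders of my own. The point it shows is that the lock guards
only the common pool, and the fast paths never touch it.

```c
#include <pthread.h>
#include <stdbool.h>

#define CACHE_BATCH 16  /* arbitrary refill/spill batch size */

struct qtoken {
	pthread_mutex_t lock;  /* protects ONLY the common pool */
	long pool;             /* free tokens in the common pool */
	long cache;            /* stand-in for one per-cpu cache; accessed locklessly */
};

void qtoken_init(struct qtoken *qt, long total)
{
	pthread_mutex_init(&qt->lock, NULL);
	qt->pool = total;
	qt->cache = 0;
}

/* Fast path: take tokens from the local cache without locking.
 * Slow path: refill the cache from the common pool under the lock. */
bool qtoken_charge(struct qtoken *qt, long tokens)
{
	long want;

	if (qt->cache >= tokens) {     /* lock-free fast path */
		qt->cache -= tokens;
		return true;
	}
	pthread_mutex_lock(&qt->lock);
	if (qt->pool + qt->cache < tokens) {
		pthread_mutex_unlock(&qt->lock);
		return false;          /* jar exhausted */
	}
	want = tokens - qt->cache + CACHE_BATCH;
	if (want > qt->pool)
		want = qt->pool;
	qt->pool -= want;
	qt->cache += want;
	pthread_mutex_unlock(&qt->lock);
	qt->cache -= tokens;
	return true;
}

/* Returned tokens go to the cache first; only an overfull cache
 * spills back to the common pool, and only that step locks. */
void qtoken_return(struct qtoken *qt, long tokens)
{
	qt->cache += tokens;
	if (qt->cache > 2 * CACHE_BATCH) {
		long excess = qt->cache - CACHE_BATCH;

		pthread_mutex_lock(&qt->lock);
		qt->pool += excess;
		pthread_mutex_unlock(&qt->lock);
		qt->cache -= excess;
	}
}
```

A caller sees only qtoken_charge()/qtoken_return(); whether tokens
came from the cache or the pool, and whether the lock was taken, is
entirely the library's decision, which is the point of the paragraph
above.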

Regards,
Tim Chen



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
