Message-ID: <20100612073639.GA15974@basil.fritz.box>
Date:	Sat, 12 Jun 2010 09:36:39 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Tim Chen <tim.c.chen@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
	Andi Kleen <ak@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH v2 1/2] tmpfs: Quick token library to allow scalable
	retrieval of tokens from token jar

On Fri, Jun 11, 2010 at 04:54:25PM -0700, Andrew Morton wrote:
> > >  			spin_lock(&sbinfo->stat_lock);
> > > -			if (sbinfo->free_blocks <= 1) {
> > > +			if (percpu_counter_read(&sbinfo->free_blocks) <= 1) {
> > >  				spin_unlock(&sbinfo->stat_lock);
> > 
> > Thanks for pointing me to look at this alternative implementation.
> > 
> > However, looking at the percpu counter code, it appears that the
> > percpu_counter_read is imprecise.
> 
> Sure, that's inevitable if we want to avoid one-atomic-op-per-operation.

Only if you use the wrong primitive.
It's not inevitable, as qtoken has proven.

> percpu_counters have a precise limit too!  It's
> percpu_counter_batch*num_online_cpus.  You can implement your own
> tolerance by not using percpu_counter_batch: pass your own batch into
> __percpu_counter_add().

Such a batch could get rather large on a big system.

e.g. on a 32-CPU system with batch 16 the error window is already
512 blocks, i.e. 2MB.
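
For illustration, the mechanism Andrew refers to looks roughly like
this (a sketch using the 2010-era __percpu_counter_add() signature and
the free_blocks counter from the patch; the batch of 4 and the "pages"
argument are arbitrary examples, not tuned or real values):

	/*
	 * A smaller per-CPU batch shrinks the worst-case drift of the
	 * cheap read -- roughly batch * num_online_cpus() blocks -- at
	 * the price of hitting the shared count more often.
	 */
	__percpu_counter_add(&sbinfo->free_blocks, -pages, 4);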

> There's a trick that can be done to improve accuracy.  When checking to
> see if the fs is full, use percpu_counter_read().  If the number that
> percpu_counter_read() returns is "close" to max_blocks, then start
> using the more expensive percpu_counter_sum().  So the kernel will be
> fast, until the disk gets to within (batch*num_online_cpus) blocks of
> being full.

Ok, it would work, but you would get a big dip in performance
once you're near the limit.

And the more CPUs you have, the larger this window of slowness becomes.
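
For reference, the two-level check described in the quoted paragraph
above would look roughly like this (a sketch only, reusing the
free_blocks counter and the exported percpu_counter_batch; "no_space"
stands in for the existing shmem error path):

	long slack = percpu_counter_batch * num_online_cpus();
	s64 free = percpu_counter_read(&sbinfo->free_blocks);

	/*
	 * The cheap, imprecise read is conclusive while we are clearly
	 * not full; only near the limit do we fold the per-CPU deltas
	 * with the expensive exact sum.
	 */
	if (free <= 1 + slack)
		free = percpu_counter_sum(&sbinfo->free_blocks);
	if (free <= 1)
		goto no_space;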

> 
> This is not the first time I've seen that requirement, and it would be
> a good idea to implement the concept within an addition to the
> percpu_counter library.  Say, percpu_counter_compare().

But why not just use qtoken, which solves this without any hacks?

I still think qtoken is the better proposal. Even if it's a bit more
code, at least it solves all of this cleanly, without hacks and
arbitrary limits.

Granted, it perhaps needs other users to really pay off its code
weight. I'll see if I can find any.
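
For reference, a percpu_counter_compare() along those lines could be
sketched as follows (illustration only; the name and placement are
hypothetical, and it is just the same read-then-sum idea as above
wrapped into a helper):

	static int percpu_counter_compare_sketch(struct percpu_counter *fbc,
						 s64 rhs)
	{
		s64 count = percpu_counter_read(fbc);
		s64 error = (s64)percpu_counter_batch * num_online_cpus();

		/* Far enough from rhs that the cheap read is conclusive. */
		if (count - rhs > error)
			return 1;
		if (rhs - count > error)
			return -1;

		/* Too close to call: pay for the exact sum. */
		count = percpu_counter_sum(fbc);
		if (count > rhs)
			return 1;
		if (count < rhs)
			return -1;
		return 0;
	}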

-Andi
-- 
ak@...ux.intel.com -- Speaking for myself only.
