Message-Id: <20100929215312.5fcb6976.akpm@linux-foundation.org>
Date:	Wed, 29 Sep 2010 21:53:12 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Dave Chinner <david@...morbit.com>
Cc:	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 15/17] fs: inode per-cpu last_ino allocator

On Wed, 29 Sep 2010 22:18:47 +1000 Dave Chinner <david@...morbit.com> wrote:

> From: Eric Dumazet <dada1@...mosbay.com>
> 
> last_ino was converted to an atomic variable to allow the inode_lock
> to go away. However, contended atomics do not scale on large
> machines, and new_inode() triggers excessive contention in such
> situations.
> 
> Solve this problem by giving each cpu a per_cpu variable, refilled
> from the shared last_ino, but only once every 1024 allocations.
> This reduces contention on the shared last_ino, and gives the same
> spread of inode numbers as before (i.e. the same wraparound after
> 2^32 allocations).
> 
> [npiggin: some extra commenting and use of defines]
> 
> ...
>  
> +#ifdef CONFIG_SMP
> +#define LAST_INO_BATCH 1024
> +/*
> + * Each cpu owns a range of LAST_INO_BATCH numbers.
> + * 'shared_last_ino' is dirtied only once out of LAST_INO_BATCH allocations,
> + * to renew the exhausted range.
> + *
> + * This does not significantly increase overflow rate because every CPU can
> + * consume at most LAST_INO_BATCH-1 unused inode numbers. So there is
> + * NR_CPUS*(LAST_INO_BATCH-1) wastage. At 4096 and 1024, this is ~0.1% of the
> + * 2^32 range, and is a worst-case. Even a 50% wastage would only increase
> + * overflow rate by 2x, which does not seem too significant.
> + *
> + * On a 32bit, non LFS stat() call, glibc will generate an EOVERFLOW
> + * error if st_ino won't fit in target struct field. Use 32bit counter
> + * here to attempt to avoid that.
> + */
> +static DEFINE_PER_CPU(unsigned int, last_ino);
> +static atomic_t shared_last_ino;
> +
> +static unsigned int last_ino_get(void)
> +{
> +	unsigned int *p = &get_cpu_var(last_ino);
> +	unsigned int res = *p;
> +
> +	if (unlikely((res & (LAST_INO_BATCH-1)) == 0))
> +		res = (unsigned int)atomic_add_return(LAST_INO_BATCH,
> +				&shared_last_ino) - LAST_INO_BATCH;

May as well remove the "- LAST_INO_BATCH" there, I think.  It'll skew
the results a tad at startup, but why does that matter?
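
Something like this, i.e. just drop the subtraction (untested sketch,
same identifiers as the quoted hunk):

	if (unlikely((res & (LAST_INO_BATCH-1)) == 0))
		res = (unsigned int)atomic_add_return(LAST_INO_BATCH,
				&shared_last_ino);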

> +	*p = ++res;
> +	put_cpu_var(last_ino);
> +	return res;
> +}
> +#else
> +static unsigned int last_ino_get(void)
> +{
> +	static unsigned int last_ino;
> +
> +	return ++last_ino;
> +}

This is racy with CONFIG_PREEMPT on some architectures, I suspect.  I'd
suggest conversion to atomic_t with, of course, an explanatory comment ;)
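
I.e. something along these lines for the UP variant (untested sketch):

	static unsigned int last_ino_get(void)
	{
		static atomic_t last_ino = ATOMIC_INIT(0);

		/*
		 * The atomic increment keeps this safe if we get preempted
		 * between the load and the store that a plain ++ would do.
		 */
		return (unsigned int)atomic_inc_return(&last_ino);
	}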


> +#endif
