Date:	Tue, 25 Aug 2015 14:59:51 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	George Spelvin <linux@...izon.com>, dave@...1.net,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux@...musvillemoes.dk, riel@...hat.com, rientjes@...gle.com,
	torvalds@...ux-foundation.org
Subject: Re: [PATCH 3/3 v6] mm/vmalloc: Cache the vmalloc memory info

On Tue, Aug 25, 2015 at 11:56:38AM +0200, Ingo Molnar wrote:
> +void get_vmalloc_info(struct vmalloc_info *vmi)
> +{
> +	unsigned int cache_gen, gen;

I see you dropped the u64 thing, good, ignore that previous email.

> +
> +	/*
> +	 * The common case is that the cache is valid, so first
> +	 * read it, then check its validity.
> +	 *
> +	 * The two read barriers make sure that we read
> +	 * 'cache_gen', 'vmap_info_cache' and 'gen' in
> +	 * precisely that order:
> +	 */
> +	cache_gen = vmap_info_cache_gen;
> +	smp_rmb();
> +	*vmi = vmap_info_cache;
> +	smp_rmb();
> +	gen = vmap_info_gen;
> +
> +	/*
> +	 * If the generation counter of the cache matches that of
> +	 * the vmalloc generation counter then return the cache:
> +	 */
> +	if (cache_gen == gen)
> +		return;

There is one aspect of READ_ONCE() that smp_rmb() does not replace,
and that is that READ_ONCE() avoids split loads.

Without READ_ONCE() the compiler is free to turn the loads into
separate, out-of-order byte loads just because it's insane, thereby
also making the WRITE_ONCE() pointless.
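
Illustratively, the counter loads could be wrapped like so (a sketch
against the quoted code, not a tested patch; the struct copy can't be
a single READ_ONCE() in any case):

	cache_gen = READ_ONCE(vmap_info_cache_gen);
	smp_rmb();
	*vmi = vmap_info_cache;	/* struct copy; can still tear */
	smp_rmb();
	gen = READ_ONCE(vmap_info_gen);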

Now I'm fairly sure none of it matters much: the info can change the
moment we've copied it, so the read is inherently racy.

And by that same argument I feel this is all somewhat over-engineered.
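
For comparison, a dumber variant (hypothetical, not what the patch
does) would take the lock on both sides and skip the barrier games
entirely, at the cost of serializing readers:

	/* Hypothetical simpler scheme: everything under the lock. */
	void get_vmalloc_info(struct vmalloc_info *vmi)
	{
		unsigned int gen;

		spin_lock(&vmap_info_lock);
		gen = vmap_info_gen;	/* snapshot before computing */
		if (vmap_info_cache_gen == gen) {
			*vmi = vmap_info_cache;
		} else {
			calc_vmalloc_info(vmi);
			vmap_info_cache = *vmi;
			vmap_info_cache_gen = gen;
		}
		spin_unlock(&vmap_info_lock);
	}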

> +
> +	/* Make sure 'gen' is read before the vmalloc info: */
> +	smp_rmb();
> +	calc_vmalloc_info(vmi);
> +
> +	/*
> +	 * All updates to vmap_info_cache_gen go through this spinlock,
> +	 * so when the cache got invalidated, we'll only mark it valid
> +	 * again if we first fully write the new vmap_info_cache.
> +	 *
> +	 * This ensures that partial results won't be used and that the
> +	 * vmalloc info belonging to the freshest update is used:
> +	 */
> +	spin_lock(&vmap_info_lock);
> +	if ((int)(gen-vmap_info_cache_gen) > 0) {
> +		vmap_info_cache = *vmi;
> +		/*
> +		 * Make sure the new cached data is visible before
> +		 * the generation counter update:
> +		 */
> +		smp_wmb();
> +		vmap_info_cache_gen = gen;
> +	}
> +	spin_unlock(&vmap_info_lock);
> +}
> +
> +#endif /* CONFIG_PROC_FS */
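
For reference, the invalidation side isn't quoted above; presumably
it's a bare counter increment along these lines (hypothetical name,
reconstructed from context):

	/*
	 * Hypothetical invalidation side (not quoted in this mail):
	 * bump the generation counter whenever vmalloc state changes,
	 * so readers know their cached copy is stale.
	 */
	static void vmap_info_invalidate(void)
	{
		WRITE_ONCE(vmap_info_gen, vmap_info_gen + 1);
	}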