Date:	Fri, 30 Dec 2011 09:02:35 -0800 (PST)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Greg Kroah-Hartman <gregkh@...e.de>
Cc:	Brian King <brking@...ux.vnet.ibm.com>, devel@...verdev.osuosl.org,
	linux-kernel@...r.kernel.org, Konrad Wilk <konrad.wilk@...cle.com>,
	Nitin Gupta <ngupta@...are.org>
Subject: RE: [PATCH] staging: zcache: fix serialization bug in zv stats

> From: Seth Jennings [mailto:sjenning@...ux.vnet.ibm.com]
> Sent: Friday, December 30, 2011 9:42 AM
> To: Greg Kroah-Hartman
> Cc: Seth Jennings; Dan Magenheimer; Brian King; devel@...verdev.osuosl.org; linux-
> kernel@...r.kernel.org
> Subject: [PATCH] staging: zcache: fix serialization bug in zv stats
> 
> In a multithreaded workload, the zv_curr_dist_counts
> and zv_cumul_dist_counts statistics are being corrupted
> because the increments and decrements in zv_create
> and zv_free are not atomic.
> 
> This patch converts these statistics and their corresponding
> increments/decrements/reads to atomic operations.
> 
> Based on v3.2-rc7
> 
> Signed-off-by: Seth Jennings <sjenning@...ux.vnet.ibm.com>

I'm inclined to nack this change, at least unless the counters are wrapped
inside an #ifdef DEBUG, as these counts are interesting to a developer but
not useful to a normal end user, whereas the incremental cost of atomic_inc
and atomic_dec is non-trivial.  I don't think any off-by-one in these
counters could result in a bug and, before promotion from staging, they
probably should just go away.  (They are fun to "watch -d" though ;-)
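
Roughly what I have in mind, as an untested sketch only (ZCACHE_DEBUG and
the zv_stat_* helpers are made-up names, and the *_show() routines would
need the same gating):

	#ifdef ZCACHE_DEBUG
	static atomic_t zv_curr_dist_counts[NCHUNKS];
	static atomic_t zv_cumul_dist_counts[NCHUNKS];
	/* debug builds keep the distribution counters */
	#define zv_stat_inc(p)	atomic_inc(p)
	#define zv_stat_dec(p)	atomic_dec(p)
	#else
	/* production builds pay nothing; argument is not evaluated */
	#define zv_stat_inc(p)	do { } while (0)
	#define zv_stat_dec(p)	do { } while (0)
	#endif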

That said, as a developer, I too am annoyed by the occasional 64-bit
"negative unsigned" that shows up in the output, but IMHO a good fix for
that might simply be for the "show" routine to convert negative values
to zero before printing.
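
In zv_curr_dist_counts_show() that would amount to something like this
(untested, just to illustrate the clamping):

	for (i = 0; i < NCHUNKS; i++) {
		n = zv_curr_dist_counts[i];
		if ((long)n < 0)
			n = 0;	/* hide transient "negative unsigned" values */
		p += sprintf(p, "%lu ", n);
		chunks += n;
		sum_total_chunks += i * n;
	}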

> ---
>  drivers/staging/zcache/zcache-main.c |   14 +++++++-------
>  1 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index 56c1f9c..d39bb51 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -655,8 +655,8 @@ static unsigned int zv_max_zsize = (PAGE_SIZE / 8) * 7;
>   */
>  static unsigned int zv_max_mean_zsize = (PAGE_SIZE / 8) * 5;
> 
> -static unsigned long zv_curr_dist_counts[NCHUNKS];
> -static unsigned long zv_cumul_dist_counts[NCHUNKS];
> +static atomic_t zv_curr_dist_counts[NCHUNKS];
> +static atomic_t zv_cumul_dist_counts[NCHUNKS];
> 
>  static struct zv_hdr *zv_create(struct xv_pool *xvpool, uint32_t pool_id,
>  				struct tmem_oid *oid, uint32_t index,
> @@ -675,8 +675,8 @@ static struct zv_hdr *zv_create(struct xv_pool *xvpool, uint32_t pool_id,
>  			&page, &offset, ZCACHE_GFP_MASK);
>  	if (unlikely(ret))
>  		goto out;
> -	zv_curr_dist_counts[chunks]++;
> -	zv_cumul_dist_counts[chunks]++;
> +	atomic_inc(&zv_curr_dist_counts[chunks]);
> +	atomic_inc(&zv_cumul_dist_counts[chunks]);
>  	zv = kmap_atomic(page, KM_USER0) + offset;
>  	zv->index = index;
>  	zv->oid = *oid;
> @@ -698,7 +698,7 @@ static void zv_free(struct xv_pool *xvpool, struct zv_hdr *zv)
> 
>  	ASSERT_SENTINEL(zv, ZVH);
>  	BUG_ON(chunks >= NCHUNKS);
> -	zv_curr_dist_counts[chunks]--;
> +	atomic_dec(&zv_curr_dist_counts[chunks]);
>  	size -= sizeof(*zv);
>  	BUG_ON(size == 0);
>  	INVERT_SENTINEL(zv, ZVH);
> @@ -738,7 +738,7 @@ static int zv_curr_dist_counts_show(char *buf)
>  	char *p = buf;
> 
>  	for (i = 0; i < NCHUNKS; i++) {
> -		n = zv_curr_dist_counts[i];
> +		n = atomic_read(&zv_curr_dist_counts[i]);
>  		p += sprintf(p, "%lu ", n);
>  		chunks += n;
>  		sum_total_chunks += i * n;
> @@ -754,7 +754,7 @@ static int zv_cumul_dist_counts_show(char *buf)
>  	char *p = buf;
> 
>  	for (i = 0; i < NCHUNKS; i++) {
> -		n = zv_cumul_dist_counts[i];
> +		n = atomic_read(&zv_cumul_dist_counts[i]);
>  		p += sprintf(p, "%lu ", n);
>  		chunks += n;
>  		sum_total_chunks += i * n;
> --
> 1.7.5.4
