Date:	Mon, 05 May 2014 11:00:44 -0700
From:	Davidlohr Bueso <davidlohr@...com>
To:	Seth Jennings <sjennings@...iantweb.net>
Cc:	Weijie Yang <weijie.yang@...sung.com>,
	'Minchan Kim' <minchan@...nel.org>,
	'Andrew Morton' <akpm@...ux-foundation.org>,
	'Nitin Gupta' <ngupta@...are.org>,
	'Sergey Senozhatsky' <sergey.senozhatsky@...il.com>,
	'Bob Liu' <bob.liu@...cle.com>,
	'Dan Streetman' <ddstreet@...e.org>, weijie.yang.kh@...il.com,
	heesub.shin@...sung.com,
	'linux-kernel' <linux-kernel@...r.kernel.org>,
	'Linux-MM' <linux-mm@...ck.org>
Subject: Re: [PATCH] zram: remove global tb_lock by using lock-free CAS

On Mon, 2014-05-05 at 10:20 -0500, Seth Jennings wrote:
> On Mon, May 05, 2014 at 12:01:21PM +0800, Weijie Yang wrote:
> > Currently, we use an rwlock, tb_lock, to protect concurrent access to
> > the whole zram meta table. However, in the actual access pattern there
> > is only a small chance that upper-layer users access the same
> > table[index], so the current lock granularity is too coarse.
> > 
> > This patch adds an atomic state to every table[index] to record its
> > access. A CAS operation protects concurrent access to the same
> > table[index] while allowing maximum concurrency across entries.
> > 
> > On a 64-bit system this does not increase the meta table memory overhead,
> > and on a 32-bit system with a 4K page size it adds about 1MB of memory
> > overhead per 1GB of zram (1GB / 4KB = 262,144 entries, times 4 bytes of
> > atomic state per entry). So it is cost-efficient.
> > 
> > Test result:
> > (x86-64 Intel Core2 Q8400, system memory 4GB, Ubuntu 12.04,
> > kernel v3.15.0-rc3, zram 1GB with 4 max_comp_streams LZO,
> > take the average of 5 tests; throughput in KB/s)
> > 
> > iozone -t 4 -R -r 16K -s 200M -I +Z
> > 
> >       Test          base	   lock-free	ratio
> > ------------------------------------------------------
> >  Initial write   1348017.60    1424141.62   +5.6%
> >        Rewrite   1520189.16    1652504.81   +8.7%
> >           Read   8294445.45   11404668.35   +37.5%
> >        Re-read   8134448.83   11555483.75   +42.1%
> >   Reverse Read   6748717.97    8394478.17   +24.4%
> >    Stride read   7220276.66    9372229.95   +29.8%
> >    Random read   7133010.06    9187221.90   +28.8%
> > Mixed workload   4056980.71    5843370.85   +44.0%
> >   Random write   1470106.17    1608947.04   +9.4%
> >         Pwrite   1259493.72    1311055.32   +4.1%
> >          Pread   4247583.17    4652056.11   +9.5%
> > 
> > Signed-off-by: Weijie Yang <weijie.yang@...sung.com>
> > ---
> > 
> > This patch is based on the linux-next tree, commit b5c8d48bf8f42
> > 
> >  drivers/block/zram/zram_drv.c |   41 ++++++++++++++++++++++++++---------------
> >  drivers/block/zram/zram_drv.h |    5 ++++-
> >  2 files changed, 30 insertions(+), 16 deletions(-)
> > 
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index 48eccb3..8b70945
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -255,7 +255,6 @@ static struct zram_meta *zram_meta_alloc(u64 disksize)
> >  		goto free_table;
> >  	}
> >  
> > -	rwlock_init(&meta->tb_lock);
> >  	return meta;
> >  
> >  free_table:
> > @@ -339,12 +338,14 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
> >  	unsigned long handle;
> >  	u16 size;
> >  
> > -	read_lock(&meta->tb_lock);
> > +	while (atomic_cmpxchg(&meta->table[index].state, IDLE, ACCESS) != IDLE)
> > +		cpu_relax();
> > +
> 
> So... this might be a dumb question, but this looks like a spinlock
> implementation.
> 
> What advantage does this have over a standard spinlock?

I was wondering the same thing. Furthermore, by doing this you'll lose
the benefits of sharing the lock among readers... though your numbers do
indicate that the change is for the better. Also, note that rwlock_t will
hopefully soon be updated to be fair and to perform on par with spinlocks,
something which is long overdue. So you could still shrink the critical
region by keeping this same per-entry granularity; just don't implement
your own locking scheme like this.
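
For instance, a rough and untested sketch of the same per-entry
granularity using a standard spinlock (struct and field names are
assumed from the zram driver; the per-entry size cost depends on the
kernel config, e.g. lock debugging):

struct zram_table_entry {		/* name assumed; zram's per-slot entry */
	unsigned long handle;
	u16 size;
	u8 flags;
	spinlock_t lock;		/* protects this entry only */
};

/* in zram_meta_alloc(), once the table has been allocated: */
for (index = 0; index < num_pages; index++)
	spin_lock_init(&meta->table[index].lock);

/* readers and writers then serialize per entry: */
spin_lock(&meta->table[index].lock);
handle = meta->table[index].handle;
size = meta->table[index].size;
spin_unlock(&meta->table[index].lock);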

> Seth
> 
> >  	handle = meta->table[index].handle;
> >  	size = meta->table[index].size;
> >  
> >  	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
> > -		read_unlock(&meta->tb_lock);
> > +		atomic_set(&meta->table[index].state, IDLE);
> >  		clear_page(mem);
> >  		return 0;
> >  	}
> > @@ -355,7 +356,7 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
> >  	else
> >  		ret = zcomp_decompress(zram->comp, cmem, size, mem);
> >  	zs_unmap_object(meta->mem_pool, handle);
> > -	read_unlock(&meta->tb_lock);
> > +	atomic_set(&meta->table[index].state, IDLE);
> >  
> >  	/* Should NEVER happen. Return bio error if it does. */
> >  	if (unlikely(ret)) {
> > @@ -376,14 +377,16 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
> >  	struct zram_meta *meta = zram->meta;
> >  	page = bvec->bv_page;
> >  
> > -	read_lock(&meta->tb_lock);
> > +	while (atomic_cmpxchg(&meta->table[index].state, IDLE, ACCESS) != IDLE)
> > +		cpu_relax();
> > +

So here you could reduce the number of atomic ops and the cacheline
bouncing by doing a plain read before the CAS; it works well for our
mutexes and rwsems. Something like:

while (true) {
	if (atomic_read(&meta->table[index].state) == IDLE &&
	    atomic_cmpxchg(&meta->table[index].state, IDLE, ACCESS) == IDLE)
		break;	/* yay! lock acquired */
	cpu_relax();
}
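
The plain read lets waiters spin on the cacheline in shared state and
only attempt the exclusive CAS once the lock looks free, rather than
hammering the line with failed atomic operations.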

But then again, that's kind of implementing your own locking scheme...
use a standard one instead ;)

Also, instead of cpu_relax() you probably want arch_mutex_cpu_relax()
for the sake of s390 (z Systems).

Thanks,
Davidlohr
