Date:	Fri, 31 Aug 2012 10:23:21 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Konrad Wilk <konrad.wilk@...cle.com>
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Hugh Dickins <hughd@...gle.com>
Subject: RE: [PATCH] frontswap: support exclusive gets if tmem backend is
 capable

> From: Konrad Rzeszutek Wilk

Hi Konrad --

Thanks for the fast feedback!

> > +#define FRONTSWAP_HAS_EXCLUSIVE_GETS
> > +extern void frontswap_tmem_exclusive_gets(bool);
> 
> I don't think you need the #define here..

The #define is used by an #ifdef in the backend to ensure
that it is building against a version of frontswap that has this
feature, which avoids the need for the frontend (frontswap) and
the backend (e.g. zcache2) to be merged in lockstep.
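
To illustrate, a backend could guard its use of the new call
roughly like this (a hypothetical init snippet, just to show the
idea; only the #define and frontswap_tmem_exclusive_gets() come
from this patch):

	/* In the backend (e.g. zcache2): only request exclusive
	 * gets when building against a frontswap that advertises
	 * the feature, so older trees still build and run. */
	#ifdef FRONTSWAP_HAS_EXCLUSIVE_GETS
		frontswap_tmem_exclusive_gets(true);
	#endif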

> > +EXPORT_SYMBOL(frontswap_tmem_exclusive_gets);
> 
> We got two of these now - the writethrough and this one. Merging
> them in one function and one flag might be better. So something like:
> static int frontswap_mode = 0;
>
> void frontswap_set_mode(int set_mode)
> {
> 	if (mode & (FRONTSWAP_WRITETH | FRONTSWAP_EXCLUS..)
> 		mode |= set_mode;
> }

IMHO, it's too soon to try to optimize this.  One or
both of these may go away.   Or the mode may become
more fine-grained in the future (e.g. to allow individual
gets to be exclusive).

So unless you object strongly, let's just leave this
as is for now and revisit in the future if more "modes"
are needed.
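
(For reference, the two knobs are currently just independent
booleans with trivial setters, roughly as below; the exact code is
in frontswap.c and this patch:)

	static bool frontswap_writethrough_enabled;
	static bool frontswap_tmem_exclusive_gets_enabled;

	void frontswap_writethrough(bool enable)
	{
		frontswap_writethrough_enabled = enable;
	}
	EXPORT_SYMBOL(frontswap_writethrough);

	/* Added by this patch: same pattern, separate flag. */
	void frontswap_tmem_exclusive_gets(bool enable)
	{
		frontswap_tmem_exclusive_gets_enabled = enable;
	}
	EXPORT_SYMBOL(frontswap_tmem_exclusive_gets);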
 
> ... and
> > +
> > +/*
> >   * Called when a swap device is swapon'd.
> >   */
> >  void __frontswap_init(unsigned type)
> > @@ -174,8 +190,13 @@ int __frontswap_load(struct page *page)
> >  	BUG_ON(sis == NULL);
> >  	if (frontswap_test(sis, offset))
> >  		ret = (*frontswap_ops.load)(type, offset, page);
> > -	if (ret == 0)
> > +	if (ret == 0) {
> >  		inc_frontswap_loads();
> > +		if (frontswap_tmem_exclusive_gets_enabled) {
> 
> For these perhaps use asm goto for optimization? Is this showing up in
> perf as a hotspot? The asm goto might be a bit too much.

This is definitely not a performance hotspot.  Frontswap code
is only ever executed in situations where a swap-to-disk would
otherwise have occurred.  And in this case, this code only
gets executed after frontswap_test has confirmed that
tmem already contains the page of data, in which case
thousands of cycles are spent copying and/or decompressing anyway.
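
(For context, the branch amounts to a couple of bookkeeping
statements once the load has succeeded; this is a rough sketch of
the intent, not a verbatim excerpt from the patch:)

	if (ret == 0) {
		inc_frontswap_loads();
		if (frontswap_tmem_exclusive_gets_enabled) {
			/* The backend has dropped its copy, so mark
			 * the page dirty and forget that tmem holds it. */
			SetPageDirty(page);
			frontswap_clear(sis, offset);
		}
	}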

Dan
