Date:	Mon, 31 Oct 2011 19:44:43 +0100
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	John Stoffel <john@...ffel.org>
Cc:	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Johannes Weiner <jweiner@...hat.com>,
	Pekka Enberg <penberg@...nel.org>,
	Cyclonus J <cyclonusj@...il.com>,
	Sasha Levin <levinsasha928@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	David Rientjes <rientjes@...gle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Konrad Wilk <konrad.wilk@...cle.com>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>, ngupta@...are.org,
	Chris Mason <chris.mason@...cle.com>, JBeulich@...ell.com,
	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Jonathan Corbet <corbet@....net>
Subject: Re: [GIT PULL] mm: frontswap (for 3.2 window)

On Fri, Oct 28, 2011 at 02:28:20PM -0400, John Stoffel wrote:
> and service.  How would TM benefit me?  I don't use Xen, don't want to
> play with it honestly because I'm busy enough as it is, and I just
> don't see the hard benefits.

If you used Xen, tmem would be more or less the equivalent of
cache=writethrough/writeback. For us, in short, tmem is the Linux host
pagecache running on bare metal. But at least when we vmexit for a
read, we read 128-512k of it (depending on if=virtio or other disk
interfaces, and on the guest kernel's readahead decisions), not just a
fixed, absolute-worst-case 4k unit like tmem would...
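
To put rough numbers on that, here is a toy userspace model (not
kernel code; the 256k request size is an assumption picked from the
128-512k range above):

/* Toy model: how many vmexits it takes to move the same amount of
 * pagecache data when each exit carries a whole virtio request vs. a
 * single fixed 4k page, the way a per-page tmem interface would. */
#include <stdio.h>

int main(void)
{
	const unsigned long total = 64UL << 20;       /* 64M of reads */
	const unsigned long virtio_req = 256UL << 10; /* one exit per 256k request */
	const unsigned long tmem_page = 4UL << 10;    /* one exit per 4k page */

	printf("virtio-style exits: %lu\n", total / virtio_req);
	printf("tmem-style exits:   %lu\n", total / tmem_page);
	return 0;
}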

Without tmem, Xen can only work like KVM with cache=off.

If it at least saved us a copy, fine, but no: it still goes through a
bounce buffer. So I'd rather bounce in the host kernel function
file_read_actor than in some (as far as KVM is concerned) superfluous
tmem code. Plus, we normally read orders of magnitude more than 4k in
each vmexit, so our default cache=writeback/writethrough may already
be more efficient than if we used tmem for that.
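
As a minimal sketch of the difference (hypothetical stand-in
functions, not the real kernel paths): either way the data is copied
("bounced") once, but the granularity of that copy differs:

#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* tmem-style: one hypercall/exit (and one copy) per fixed 4k page;
 * assumes len is a multiple of PAGE_SIZE */
static void copy_tmem_style(char *dst, const char *src, size_t len)
{
	size_t off;

	for (off = 0; off < len; off += PAGE_SIZE)
		memcpy(dst + off, src + off, PAGE_SIZE); /* one exit each */
}

/* file_read_actor-style: one copy covering the whole request */
static void copy_read_actor_style(char *dst, const char *src, size_t len)
{
	memcpy(dst, src, len); /* single bounce in the host kernel */
}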

The only case we could consider it for is swap compression, but even
for swap compression I have no idea why we would still need to do a
copy, instead of just compressing from the userland page in zero-copy
fashion (worst case using whatever mechanism gets introduced to
provide stable pages).
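
A sketch of that zero-copy idea, using zlib's compress() purely as a
stand-in compressor (the mail names none), and assuming stable pages
keep the source page from changing mid-compression:

#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096UL

/* Extra copy: page -> staging buffer -> compressor */
static int swap_compress_copy(Bytef *out, uLongf *outlen, const Bytef *page)
{
	Bytef staging[PAGE_SIZE];

	memcpy(staging, page, PAGE_SIZE);
	return compress(out, outlen, staging, PAGE_SIZE);
}

/* Zero-copy: compress directly from the (stable) source page */
static int swap_compress_zerocopy(Bytef *out, uLongf *outlen,
				  const Bytef *page)
{
	return compress(out, outlen, page, PAGE_SIZE);
}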

And when the host Linux pagecache goes hugepage, we'll get a >4k copy
in one go, while the tmem bounce will still be stuck at 4k...
