Date:	Wed, 2 Jun 2010 08:27:48 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Minchan Kim <minchan.kim@...il.com>
Cc:	chris.mason@...cle.com, viro@...iv.linux.org.uk,
	akpm@...ux-foundation.org, adilger@....com, tytso@....edu,
	mfasheh@...e.com, joel.becker@...cle.com, matthew@....cx,
	linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
	ocfs2-devel@....oracle.com, linux-mm@...ck.org, ngupta@...are.org,
	jeremy@...p.org, JBeulich@...ell.com, kurt.hackel@...cle.com,
	npiggin@...e.de, dave.mccracken@...cle.com, riel@...hat.com,
	avi@...hat.com, konrad.wilk@...cle.com
Subject: RE: [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview

Hi Minchan --

> I think the cleancache approach is cool. :)
> I have some suggestions and questions.

Thanks for your interest!

> > If a get_page is successful on a non-shared pool, the page is flushed
> > (thus making cleancache an "exclusive" cache).  On a shared pool, the page
> 
> Do you have a reason for forcing "exclusive" on a non-shared pool?
> To free memory in pseudo-RAM?
> I want to make it "inclusive" for a reason, but unfortunately I can't
> say why I want that yet.

The main reason is to free up memory in pseudo-RAM and to
avoid unnecessary cleancache_flush calls.  If you want
inclusive semantics, the page can be put back immediately
following the get.  If put-after-get for inclusive becomes
common, the interface could easily be extended to add a
"get_no_flush" call.
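
To make that concrete, here is a minimal sketch of put-after-get,
assuming (as in this series) that a successful cleancache_get_page
returns 0:

/* Emulate "inclusive" behavior on top of the exclusive get
 * semantics by putting the page straight back after a hit. */
static int cleancache_get_page_inclusive(struct page *page)
{
	int ret = cleancache_get_page(page);

	if (ret == 0)	/* hit: the get flushed it; put it back */
		cleancache_put_page(page);
	return ret;
}

A native "get_no_flush" would of course avoid copying the page
data twice.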
 
> While you mentioned it's "exclusive", cleancache_get_page doesn't
> flush the page in the code below.
> Is that the role of whoever implements cleancache_ops->get_page?

Yes, the flush is done by the cleancache implementation.
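
For example, a RAM-backed implementation's get_page might look
roughly like the sketch below; struct stored_page, lookup_stored(),
and free_stored() are hypothetical stand-ins for the backend's own
data structures:

/* Exclusive get: copy the data into the kernel's page, then free
 * the backend's copy so the data lives in only one place. */
static int ram_backend_get_page(int pool_id, ino_t ino,
				pgoff_t index, struct page *page)
{
	struct stored_page *sp = lookup_stored(pool_id, ino, index);

	if (sp == NULL)
		return -1;		/* miss */
	copy_highpage(page, sp->page);	/* hand the data back */
	free_stored(sp);		/* the "flush" half of exclusive */
	return 0;			/* hit */
}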

> If the backing device is RAM, could we _move_ the pages from the
> page cache to cleancache?
> I mean, I don't want to copy pages on get/put operations; we could
> just move the page when the backing device is RAM.  Is that possible?

By "move", do you mean changing the virtual mappings?  Yes,
this could be done as long as the source and destination are
both directly addressable (that is, true physical RAM), but
it requires TLB manipulation and has some complicated corner
cases.  The copy semantics simplify the implementation on
both the "frontend" and the "backend" and also allow the
backend to do fancy things on-the-fly, like page compression
and page deduplication.
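
As a rough illustration of what the copy semantics make easy, a
backend could compress each page on put.  lzo1x_1_compress() is the
kernel's LZO helper; scratch_buf, lzo_wrkmem, and store_compressed()
are hypothetical backend state:

/* Compress on put; possible because put_page receives a copy of
 * the data rather than ownership of the page mapping. */
static void ram_backend_put_page(int pool_id, ino_t ino,
				 pgoff_t index, struct page *page)
{
	size_t clen = 2 * PAGE_SIZE;	/* generous worst-case bound */
	void *src = kmap_atomic(page, KM_USER0);

	lzo1x_1_compress(src, PAGE_SIZE, scratch_buf, &clen, lzo_wrkmem);
	kunmap_atomic(src, KM_USER0);
	store_compressed(pool_id, ino, index, scratch_buf, clen);
}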

> You sent the patches that are the core of cleancache, but I don't
> see any use case.
> Could you send use-case patches with this series?
> That would help in understanding cleancache's benefit.

Do you mean the Xen Transcendent Memory ("tmem") implementation?
If so, this is four files in the Xen source tree (common/tmem.c,
common/tmem_xen.c, include/xen/tmem.h, include/xen/tmem_xen.h).
There is also an html document in the Xen source tree, which can
be viewed here:
http://oss.oracle.com/projects/tmem/dist/documentation/internals/xen4-internals-v01.html 

Or did you mean a cleancache_ops "backend"?  For tmem, there
is one file, linux/drivers/xen/tmem.c, which interfaces between
the cleancache_ops calls and Xen hypercalls.  It should be in
a Xenlinux pv_ops tree soon, or I can email it sooner.
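
In outline, such a backend fills in a cleancache_ops structure and
registers it.  The sketch below assumes a cleancache_register_ops()
entry point and the ops fields described in this series, so treat
the exact names as illustrative:

/* Illustrative backend registration; the my_* functions are the
 * backend's own handlers (for tmem, each would issue the matching
 * hypercall). */
static struct cleancache_ops my_backend_ops = {
	.init_fs	= my_init_fs,
	.init_shared_fs	= my_init_shared_fs,
	.get_page	= my_get_page,
	.put_page	= my_put_page,
	.flush_page	= my_flush_page,
	.flush_inode	= my_flush_inode,
	.flush_fs	= my_flush_fs,
};

static int __init my_backend_init(void)
{
	cleancache_register_ops(&my_backend_ops);
	return 0;
}
module_init(my_backend_init);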

I am also eagerly awaiting Nitin Gupta's cleancache backend
implementation, which will do in-kernel page cache compression.

Thanks,
Dan