Message-ID: <20141209151253.GA17660@debian>
Date:	Tue, 9 Dec 2014 15:12:53 +0000
From:	Joe Thornber <thornber@...hat.com>
To:	device-mapper development <dm-devel@...hat.com>
Cc:	gregkh@...uxfoundation.org, snitzer@...hat.com, agk@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [dm-devel] [PATCH] staging: writeboost: Add dm-writeboost

On Mon, Dec 08, 2014 at 06:04:41AM +0900, Akira Hayakawa wrote:
> Mike and Alasdair,
> I need your ack

Hi Akira,

I just spent some time playing with your latest code.  On the positive
side I am seeing some good performance with the fio tests, which is
great; we know your design should outperform dm-cache with small
random io.
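
Roughly, the sort of fio job I mean is the following (the device
path, queue depth and runtime here are illustrative, not the exact
job I ran):

    # 4k random writes against the cached device; values illustrative.
    fio --name=randwrite --filename=/dev/mapper/wbdev \
        --rw=randwrite --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting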

However I'm still getting v. poor results with the git-extract test,
which clones a linux kernel repo, and then checks out 5 revisions, all
with drop_caches in between.
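
Roughly, the test does something like this (the repo path, mount
point and tag list are illustrative placeholders):

    cd /mnt/cached-fs                            # fs on the device under test
    time git clone /path/to/linux-mirror linux   # first number: clone time
    cd linux
    time for tag in v3.12 v3.13 v3.14 v3.15 v3.16; do
        echo 3 > /proc/sys/vm/drop_caches        # needs root
        git checkout "$tag"
    done                                         # second number: extract time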

I'll summarise the results I get here:


                        clone  extract
    raw SSD:               69      107
    raw spindle:           73      184

    dm-cache:              74      118

    writeboost type 0:    115      247
    writeboost type 1:    193      275


Each result consists of two numbers, the time to do the clone and the
time to do the extract.

Writeboost is significantly slower than the spindle alone for this
very simple test.  I do not understand what is causing the issue.  At
first I thought it was because the working set is larger than the SSD
space, but I get the same results even if there's more SSD space than
spindle.

Running the same test using SSD on SSD also yields v. poor results:
115, 177 and 198, 218 for type 0 and type 1 respectively.  Obviously
this is a pointless configuration, but it does allow us to see the
overhead of the caching layer.
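
For anyone wanting to reproduce this: the SSD-on-SSD device is just
the usual table with both devices on SSD partitions, something like
the sketch below.  The writeboost-specific table arguments are from
memory, so treat them as illustrative and check the constructor docs.

    BACKING=/dev/sdb1                  # backing-device role, on SSD
    CACHE=/dev/sdb2                    # caching device, also on SSD
    SZ=$(blockdev --getsz "$BACKING")  # backing size in 512b sectors
    dmsetup create wbdev --table "0 $SZ writeboost $BACKING $CACHE"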

It's fine for the benefits of the caching software to differ
depending on the load.  But I think the worst case should always be
close to the performance of the raw spindle device.

If you get the following work items done, I will ack it to go upstream:

i) Get this test so its performance is similar to raw spindle.

ii) Write good documentation in Documentation/device-mapper/.  e.g. How
    do I remove a cache (see the sketch after this list)?  When should
    I use dm-writeboost rather than bcache or dm-cache?

iii) Provide an equivalent to the fsck tool to repair a damaged cache.
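
To make (ii) concrete, a removal recipe is exactly the sort of thing
the document should spell out.  A hypothetical sketch; the "flush"
message below is an invented placeholder, not a real writeboost
command, and the docs need to name the actual mechanism:

    dmsetup message wbdev 0 flush   # placeholder: write back dirty data
    dmsetup remove wbdev            # tear down the mapping; the backing
                                    # device must then be self-consistent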

- Joe