Message-ID: <20141210131325.GD21108@debian>
Date:	Wed, 10 Dec 2014 13:13:25 +0000
From:	Joe Thornber <thornber@...hat.com>
To:	Akira Hayakawa <ruby.wktk@...il.com>
Cc:	ejt@...hat.com, dm-devel@...hat.com, gregkh@...uxfoundation.org,
	snitzer@...hat.com, agk@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [dm-devel] [PATCH] staging: writeboost: Add dm-writeboost

On Wed, Dec 10, 2014 at 09:59:12PM +0900, Akira Hayakawa wrote:
> Joe,
> 
> I appreciate your continuous work.
> 
> Is that read or write?
> The difference between Type 0 and Type 1 should only show up in the write path.
> So is it a write test?

Yes, writing across the whole device using 'dd'.

These are the tests:

  dmtest list --suite writeboost -n /wipe_device/

> And what is the unit of each result?

Seconds.

> 
> > So maybe it's just volume of IO that's causing the problem?  What's
> > the difference between Type 0 and Type 1?  In the code I notice you
> > have 'rambuf' structures, are you caching IO in memory?
> "rambuf" is a temporary buffer that all incoming write data lands in.
> 127 * 4KB of data is staged there, a 4KB metadata section is added,
> and the result becomes a log that is flushed to the cache device sequentially (512KB each).

So you copy the bio payload to a different block of ram and then
complete the bio?  Or does the rambuf refer to the bio payload
directly?

> By the way,
> I think the discussion would be clearer if the tests were run on physical machines,
> to isolate VM-related effects. I will also add these tests to dmts later and
> run them on my machine.
> But it would be much better if we had a good server with a RAID-ed backing store
> and the newest SSD (how would it be if it's a PCI-e SSD)...

I generally find it quicker to investigate problems on the machine
that is actually exhibiting the problem ;) Seriously though, you're
asking us to send this upstream; it needs to work on consumer-level
hardware.

I've got a big machine with Fusion IO storage that I can run the same
tests on later.

- Joe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
