Date:	Sat, 25 May 2013 03:57:08 +0000
From:	Amit Kale <akale@...c-inc.com>
To:	Jens Axboe <axboe@...nel.dk>,
	OS Engineering <osengineering@...c-inc.com>
CC:	LKML <linux-kernel@...r.kernel.org>,
	Padmini Balasubramaniyan <padminib@...c-inc.com>,
	Amit Phansalkar <aphansalkar@...c-inc.com>
Subject: RE: EnhanceIO(TM) caching driver features [1/3]

Hi Jens,

By mistake I dropped the web link to the Demartek study while composing my email. The study is published here: http://www.demartek.com/Demartek_STEC_S1120_PCIe_Evaluation_2013-02.html. It's an independent study. Here are a few numbers taken from this report, from a database comparison measured in transactions per second:
HDD baseline (40 disks) - 2570 tps
240GB Cache - 9844 tps
480GB cache - 19758 tps
RAID5 pure SSD - 32380 tps
RAID0 pure SSD - 40467 tps
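
For reference, the relative speedups these figures imply over the 40-disk HDD baseline can be computed as follows (a quick sketch using only the numbers quoted above):

```python
# Speedups relative to the 40-disk HDD baseline (2570 tps),
# using the Demartek TPS figures quoted above.
baseline = 2570
results = {
    "240GB cache": 9844,
    "480GB cache": 19758,
    "RAID5 pure SSD": 32380,
    "RAID0 pure SSD": 40467,
}
for name, tps in results.items():
    print(f"{name}: {tps / baseline:.1f}x")  # e.g. 240GB cache: 3.8x
```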

There are two types of performance comparisons: application-based and IO-pattern-based. Application-based tests measure the efficiency of cache replacement algorithms, but they are time-consuming; the tests above were done by Demartek over a period of time. I don't have performance comparisons between the EnhanceIO(TM) driver, bcache, and dm-cache, so I'll try to get them done in-house.

IO-pattern-based tests can be done quickly. However, since the IO pattern is fixed before the test, the output tends to depend on whether the pattern suits the caching algorithm. These tests are relatively easy, and I can definitely post this comparison.
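
As an illustration, an IO-pattern test of this kind could be driven by an fio job file such as the one below (a sketch only; the device path /dev/mapper/eio_cached is a hypothetical cached-device name, and the pattern, block size, and queue depth are arbitrary choices, not our actual test parameters):

```
; random-read job against a hypothetical cached block device
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[randread-cached]
filename=/dev/mapper/eio_cached
rw=randread
bs=4k
iodepth=32
```

Running the same job against the raw HDD volume, the cached volume, and the pure-SSD volume would give the kind of side-by-side numbers being asked for, with the caveat noted above that a fixed pattern may favor one caching algorithm over another.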

Regarding IO error handling - that's really our USP :-). While it won't be possible to run bcache and dm-cache through our internal error test suites, I'll try to come up with a few points based on a code comparison.

Thanks.
-Amit


> -----Original Message-----
> From: linux-kernel-owner@...r.kernel.org [mailto:linux-kernel-
> owner@...r.kernel.org] On Behalf Of Jens Axboe
> Sent: Saturday, May 25, 2013 12:17 AM
> To: OS Engineering
> Cc: LKML; Padmini Balasubramaniyan; Amit Phansalkar
> Subject: Re: EnhanceIO(TM) caching driver features [1/3]
> 
> On Fri, May 24 2013, OS Engineering wrote:
> > Hi Jens and Kernel Gurus,
> 
> [snip]
> 
> Thanks for writing all of this up, but I'm afraid it misses the point
> somewhat. As stated previously, we have (now) two existing competing
> implementations in the kernel. I'm looking for justification on why
> YOUR solution is better. A writeup and documentation on error handling
> details is nice and all, but it doesn't answer the key important
> questions.
> 
> Let's say somebody sends in a patch that he/she claims improves memory
> management performance. To justify such a patch (or any patch, really),
> the maintenance burden vs performance benefit needs to be quantified.
> Such a person had better supply a set of before and after numbers, such
> that the benefit can be quantified.
> 
> It's really the same with your solution. You mention "the solution has
> been proven in independent testing, such as testing by Demartek.". I
> have no idea what this testing is, what they ran, compared with, etc.
> 
> So, to put it bluntly, I need to see some numbers. Run relevant
> workloads on EnhanceIO, bcache, dm-cache. Show why EnhanceIO is better.
> Then we can decide whether it really is the superior solution. Or,
> perhaps, it turns out there are inefficiencies in eg bcache/dm-cache
> that could be fixed up.
> 
> Usually I'm not such a stickler for including new code. But a new
> driver is different than EnhanceIO. If somebody submitted a patch to
> add a newly written driver for hw that we already have a driver for,
> that would be a similar situation.
> 
> The executive summary: your writeup was good, but we need some relevant
> numbers to look at too.
> 
> --
> Jens Axboe
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel"
> in the body of a message to majordomo@...r.kernel.org More majordomo
> info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

