Message-ID: <20130524184727.GQ29680@kernel.dk>
Date:	Fri, 24 May 2013 20:47:27 +0200
From:	Jens Axboe <axboe@...nel.dk>
To:	OS Engineering <osengineering@...c-inc.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Padmini Balasubramaniyan <padminib@...c-inc.com>,
	Amit Phansalkar <aphansalkar@...c-inc.com>
Subject: Re: EnhanceIO(TM) caching driver features [1/3]

On Fri, May 24 2013, OS Engineering wrote:
> Hi Jens and Kernel Gurus,

[snip]

Thanks for writing all of this up, but I'm afraid it misses the point
somewhat. As stated previously, we now have two existing competing
implementations in the kernel. I'm looking for justification for why YOUR
solution is better. A writeup and documentation on error handling
details is nice and all, but it doesn't answer the key questions.

Let's say somebody sends in a patch that he/she claims improves memory
management performance. To justify such a patch (or any patch, really),
the maintenance burden vs. performance benefit needs to be quantified.
Such a person had better supply a set of before and after numbers, so
that the benefit can actually be measured.

It's really the same with your solution. You mention that "the solution
has been proven in independent testing, such as testing by Demartek". I
have no idea what this testing involved, what they ran, what it was
compared against, etc.

So, to put it bluntly, I need to see some numbers. Run relevant
workloads on EnhanceIO, bcache, and dm-cache. Show why EnhanceIO is
better. Then we can decide whether it really is the superior solution.
Or perhaps it turns out there are inefficiencies in, e.g., bcache or
dm-cache that could be fixed up.
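
To make this concrete, here is a rough sketch of the kind of comparison
I have in mind, just as an illustration. It runs the same fio job
against each cached device and prints the numbers side by side. The
device paths and job parameters are placeholders, so substitute whatever
matches your actual cache setups and backing store:

#!/usr/bin/env python
# Sketch: run one fio job per cache setup and compare the results.
# All device paths below are placeholders for the devices under test.
import json
import subprocess

DEVICES = [
    ("enhanceio", "/dev/mapper/eio_cache"),   # placeholder
    ("bcache",    "/dev/bcache0"),            # placeholder
    ("dm-cache",  "/dev/mapper/cached_vol"),  # placeholder
]

FIO_ARGS = [
    "fio", "--name=randread", "--rw=randread", "--bs=4k",
    "--ioengine=libaio", "--iodepth=32", "--direct=1",
    "--runtime=60", "--time_based", "--output-format=json",
]

for name, dev in DEVICES:
    out = subprocess.check_output(FIO_ARGS + ["--filename=" + dev])
    read = json.loads(out.decode())["jobs"][0]["read"]
    # fio reports bandwidth in KiB/s in its JSON output
    print("%-10s %8.1f MiB/s %10.0f IOPS" %
          (name, read["bw"] / 1024.0, read["iops"]))

A 4k random read job is obviously just one data point; mixed read/write
and sequential workloads on the same setups would be interesting to see
as well.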

Usually I'm not such a stickler when it comes to including new code. But
a driver for new hardware is a different case than EnhanceIO. If
somebody submitted a patch to add a newly written driver for hardware
that we already have a driver for, that would be a similar situation.

The executive summary: your writeup was good, but we need some relevant
numbers to look at too.

-- 
Jens Axboe

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
