Message-ID: <20071005064853.GI5711@kernel.dk>
Date: Fri, 5 Oct 2007 08:48:53 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: David Chinner <dgc@....com>
Cc: David Miller <davem@...emloft.net>, cebbert@...hat.com,
	willy@...ux.intel.com, clameter@....com, nickpiggin@...oo.com.au,
	hch@....de, mel@...net.ie, linux-fsdevel@...r.kernel.org,
	linux-kernel@...r.kernel.org, suresh.b.siddha@...el.com
Subject: Re: SLUB performance regression vs SLAB

On Fri, Oct 05 2007, David Chinner wrote:
> On Thu, Oct 04, 2007 at 03:07:18PM -0700, David Miller wrote:
> > From: Chuck Ebbert <cebbert@...hat.com>
> > Date: Thu, 04 Oct 2007 17:47:48 -0400
> >
> > > On 10/04/2007 05:11 PM, David Miller wrote:
> > > > From: Chuck Ebbert <cebbert@...hat.com>
> > > > Date: Thu, 04 Oct 2007 17:02:17 -0400
> > > >
> > > >> How do you simulate reading 100TB of data spread across 3000 disks,
> > > >> selecting 10% of it using some criterion, then sorting and summarizing
> > > >> the result?
> > > >
> > > > You repeatedly read zeros from a smaller disk into the same amount of
> > > > memory, and sort that as if it were real data instead.
> > >
> > > You've just replaced 3000 concurrent streams of data with a single stream.
> > > That won't test the memory allocator's ability to allocate memory to many
> > > concurrent users very well.
> >
> > You've kindly removed my "thinking outside of the box" comment.
> >
> > The point was not that my specific suggestion would be perfect, but that
> > if you used your creativity and thought in similar directions you might
> > find a way to do it.
> >
> > People are too narrow minded when it comes to these things, and that's
> > the problem I want to address.
>
> And it's a good point, too, because often problems to one person are a
> no-brainer to someone else.
>
> Creating lots of "fake" disks is trivial to do, IMO. Use loopback on
> sparse files, use ramdisks containing sparse files, or write a sparse
> dm target for sparse block device mapping, etc. I'm sure there's more
> than the few I just threw out...

Or use scsi_debug to fake drives/controllers; it works wonderfully as
well for some things and involves the full IO stack.

I'd like to second David's emails here, this is a serious problem.
Having a reproducible test case lowers the barrier for getting the
problem fixed by orders of magnitude. It's the difference between the
problem getting fixed in a day or two and it potentially lingering for
months, because email ping-pong takes forever and "the test team has
moved on to other tests, we'll let you know the results of test foo in
3 weeks time when we have a new slot on the box" just removes any
developer motivation to work on the issue.

--
Jens Axboe
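For reference, below is a minimal sketch of the sparse-file loopback and scsi_debug approaches mentioned in the thread. The file path, sizes, and module parameters are illustrative only; it assumes root privileges, util-linux's losetup, and a kernel that provides the scsi_debug module.

```python
# Sketch of the "fake disks" ideas discussed above: a sparse backing file
# attached as a loop device, plus scsi_debug for an emulated drive behind
# the full SCSI/IO stack. Values and paths are examples, not prescriptions.
import subprocess

BACKING_FILE = "/tmp/fake_disk_0.img"   # hypothetical path
APPARENT_SIZE = 100 * 1024 ** 3         # 100 GiB apparent size, ~0 bytes allocated

# Create a sparse file: truncate() extends the file without allocating blocks.
with open(BACKING_FILE, "wb") as f:
    f.truncate(APPARENT_SIZE)

# Attach it to the first free loop device and report the device name.
loopdev = subprocess.run(
    ["losetup", "--find", "--show", BACKING_FILE],
    check=True, capture_output=True, text=True,
).stdout.strip()
print("sparse fake disk at", loopdev)

# Alternatively, load scsi_debug to fake drives/controllers through the
# full IO stack (parameter values here are only examples).
subprocess.run(
    ["modprobe", "scsi_debug", "dev_size_mb=256", "num_tgts=4", "add_host=2"],
    check=True,
)
```

Repeating the sparse-file step for many backing files is one way to stand up the "lots of fake disks" setup David describes without consuming real storage.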