Message-ID: <20070111012404.GW33919298@melbourne.sgi.com>
Date:	Thu, 11 Jan 2007 12:24:04 +1100
From:	David Chinner <dgc@....com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	David Chinner <dgc@....com>, Christoph Lameter <clameter@....com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [REGRESSION] 2.6.19/2.6.20-rc3 buffered write slowdown

On Thu, Jan 11, 2007 at 12:08:10PM +1100, Nick Piggin wrote:
> David Chinner wrote:
> >Sure, but that doesn't really show how erratic the per-filesystem
> >throughput is, because the test I'm running is PCI-X bus limited in
> >its throughput at about 750MB/s. Each dm device is capable of about
> >340MB/s write, so when one slows down, the others will typically
> >speed up.
> 
> But you do also get aggregate throughput drops? (i.e. 2.6.20-rc3-worse)

Yes - you can see that from the vmstat output I sent.

At 500GB into the write of each file (about 60% of the disks filled)
the per-fs write rate should be around 220MB/s, so the aggregate should
be around 650MB/s. That's what I'm seeing with 2.6.18, and with 2.6.20-rc3
with a tweaked dirty_ratio. Without the dirty_ratio tweak, you see
what is in 2.6.20-rc3-worse.
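
(Spelling out the arithmetic - the count of three streams is my inference
from the per-fs vs aggregate figures, not something stated above:)

  # expected aggregate = number of streams * per-fs rate
  echo "$(( 3 * 220 ))MB/s"   # 660MB/s, i.e. roughly the ~650MB/s above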

e.g. I just changed dirty_ratio from 10 to 40 and went from a
consistent 210-215MB/s per filesystem (~630-650MB/s aggregate) to
ranging over 110-200MB/s per filesystem and aggregates of ~450-600MB/s.
I changed dirty_ratio back to 10, and within 15 seconds we were back
to a consistent 210MB/s per filesystem and 630-650MB/s aggregate write.
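
For reference, the knob I'm flipping is just the procfs tunable; an
untested sketch of the toggle (the iostat line mirrors what I ran above):

  echo 40 > /proc/sys/vm/dirty_ratio   # allow up to 40% of memory dirty
  iostat 5 | grep dm-                  # watch per-device write rates sag
  echo 10 > /proc/sys/vm/dirty_ratio   # tighten again; rates recover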

> >So, what I've attached is three files which have both
> >'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
> 
> Ahh, sorry to be unclear, I meant:
> 
>   cat /proc/vmstat > pre
>   run_test
>   cat /proc/vmstat > post

Ok, I'll get back to you on that one - even at 600+MB/s, writing 5TB
of data takes some time...
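
In the meantime, here's the collection I'll run, with a counter diff
bolted on (the paste/awk step is my own addition, untested):

  cat /proc/vmstat > pre
  run_test                               # the dd workload stands in here
  cat /proc/vmstat > post
  # pair up the "name value" lines and print the delta for each counter
  paste pre post | awk '{ print $1, $4 - $2 }'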

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group