Date:	Tue, 03 Feb 2009 11:12:45 -0500
From:	Chris Mason <chris.mason@...cle.com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Jan Kara <jack@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
	npiggin@...e.de
Subject: Re: Commit 31a12666d8f0c22235297e1c1575f82061480029 slows down
 Berkeley DB

On Tue, 2009-02-03 at 13:11 +1100, Nick Piggin wrote:
> On Tuesday 03 February 2009 12:54:26 Zhang, Yanmin wrote:
> > On Tue, 2009-02-03 at 12:24 +1100, Nick Piggin wrote:
> > > On Friday 30 January 2009 12:23:15 Jan Kara wrote:
> > > >   Hi,
> > > >
> > > >   today I found that commit 31a12666d8f0c22235297e1c1575f82061480029
> > > > (mm: write_cache_pages cyclic fix) slows down operations over Berkeley
> > > > DB. Without this "fix", I can add 100k entries in about 5 minutes 30s;
> > > > with that change it takes about 20 minutes.
> > > >   What is IMO happening is that previously we scanned to the end of the
> > > > file, left writeback_index at the end of the file, and went on to write
> > > > the next file. With the fix, we wrap around (seek) and, after writing
> > > > some more, go on to the next file (seek again).
> >
> > We also found that this commit causes about a 40~50% regression with
> > iozone mmap-rand-write: #iozone -B -r 4k -s 64k -s 512m -s 1200m
> >
> > My machine has 8GB memory.
> 
> Ah, thanks. Yes BDB I believe is basically just doing an mmap-rand-write,
> so maybe this is a good test case.
> 
> The interesting thing is why this is causing such a slowdown. If there is
> only a single main file active in the workload, I don't see why this
> patch should make such a big difference. In either case, wouldn't pdflush
> come back and just start writing out from the start of the file anyway?
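
For reference, the behaviour Jan and Nick are describing can be simulated
roughly like this.  This is a toy userspace model of the two
writeback_index policies, not the kernel's write_cache_pages(); NPAGES,
the dirty offsets, and the resume point are all made up:

/*
 * Toy simulation of the two cyclic-writeback policies under discussion.
 * "old": scan from writeback_index to EOF, stop, park the index at EOF.
 * "new": same, but wrap to page 0 and keep going, so a randomly dirtied
 *        file gets written as two separated ranges (an extra seek)
 *        before we move on to the next file.
 */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 16

static void writeback_pass(bool dirty[NPAGES], unsigned long *wb_index,
			   bool wrap_at_eof, const char *label)
{
	printf("%s pass, starting at page %lu:", label, *wb_index);

	/* first leg: from the resume point to the end of the file */
	for (unsigned long i = *wb_index; i < NPAGES; i++) {
		if (dirty[i]) { printf(" %lu", i); dirty[i] = false; }
	}

	if (wrap_at_eof) {
		/* second leg: wrap to the start of the file (extra seek) */
		for (unsigned long i = 0; i < *wb_index; i++) {
			if (dirty[i]) { printf(" %lu", i); dirty[i] = false; }
		}
		*wb_index = 0;		/* simplification of the real resume point */
	} else {
		*wb_index = NPAGES;	/* old behaviour: park at EOF */
	}
	printf("\n");
}

int main(void)
{
	/* pages dirtied at scattered offsets, as mmap-rand-write would do */
	bool dirty[NPAGES] = { false };
	const unsigned long offsets[] = { 3, 9, 14, 1, 6, 12 };
	for (unsigned i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++)
		dirty[offsets[i]] = true;

	bool copy[NPAGES];
	for (int i = 0; i < NPAGES; i++)
		copy[i] = dirty[i];

	unsigned long wb_index = 5;	/* arbitrary resume point */
	writeback_pass(dirty, &wb_index, false, "old");
	wb_index = 5;
	writeback_pass(copy, &wb_index, true, "new");
	return 0;
}

The "new" pass emits pages 6 9 12 14 and then wraps back for 1 and 3,
which is the extra seek Jan points out, before writeback moves on to the
next file.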

Perhaps the difference is that without the patch, pdflush will return
after running congestion_wait()?  This would give bdb and iozone a
chance to fill in more pages and increase the chances that we'll do
sequential IO.
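
For comparison, a rough userspace approximation of the mmap-rand-write
pattern that iozone and BDB generate looks like the sketch below.  It is
illustrative only: the file name "testfile", the 512MB size, the 4k
record length, and the iteration count are placeholders, and nothing here
is taken from iozone or BDB source.

/*
 * Dirty 4k pages at random offsets in a shared mapping and leave all of
 * the writeback to the kernel, which is where the writeback_index policy
 * decides how sequential the resulting IO ends up.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t file_size = 512UL << 20;	/* placeholder size */
	const size_t reclen    = 4096;		/* 4k records, like -r 4k */

	int fd = open("testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0 || ftruncate(fd, file_size) < 0) {
		perror("setup");
		return 1;
	}

	char *map = mmap(NULL, file_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	srandom(1);
	for (long i = 0; i < 100000; i++) {
		size_t off = (random() % (file_size / reclen)) * reclen;
		memset(map + off, 0xab, reclen);	/* dirty one page */
	}

	/* no msync(): dirty pages are pushed out by pdflush/background
	 * writeback, so how many of them have accumulated by the time a
	 * writeback pass runs determines how sequential the IO can be */
	munmap(map, file_size);
	close(fd);
	return 0;
}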

-chris


