Message-ID: <20081112074503.47e175ae@tleilax.poochiereds.net>
Date:	Wed, 12 Nov 2008 07:45:03 -0500
From:	Jeff Layton <jlayton@...hat.com>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>,
	"Vitaly V. Bursov" <vitalyb@...enet.dn.ua>,
	linux-kernel@...r.kernel.org, bfields@...ldses.org
Subject: Re: Slow file transfer speeds with CFQ IO scheduler in some cases

On Wed, 12 Nov 2008 13:20:35 +0100
Jens Axboe <jens.axboe@...cle.com> wrote:

> On Tue, Nov 11 2008, Jeff Layton wrote:
> > On Tue, 11 Nov 2008 16:41:04 -0500
> > Jeff Layton <jlayton@...hat.com> wrote:
> > 
> > > On Tue, 11 Nov 2008 14:36:07 -0500
> > > Jeff Moyer <jmoyer@...hat.com> wrote:
> > > 
> > > > Jens Axboe <jens.axboe@...cle.com> writes:
> > > > 
> > > > > OK, that looks better. Can I talk you into just trying this little
> > > > > patch, just to see what kind of performance it yields? Remove the cfq
> > > > > patch first. I would have patched nfsd only, but this is just a
> > > > > quick'n'dirty test.
> > > > 
> > > > I went ahead and gave it a shot.  The updated CFQ patch with no I/O
> > > > context sharing does about 40MB/s reading a 1GB file.  Backing that
> > > > patch out and then adding the patch to share io_contexts between
> > > > kthreads yields 45MB/s.
> > > > 
> > > 
> > > Here's a quick and dirty patch to make all of the nfsd threads share
> > > the same io_context. Comments appreciated -- I'm not that familiar
> > > with the IO scheduling code. If this looks good, I'll clean it up,
> > > add some comments, and formally send it to Bruce.
> > > 
> > 
> > No sooner do I send it out than I find a bug. We need to eventually
> > put the io_context reference we get. This version should be more correct:
> 
> That sort of thing happens a lot, I can definitely sympathize with you
> there :-)
> 
> > ----------------[snip]-------------------
> > 
> > From d0ee67045a12c677883f77791c6f260588c7b41f Mon Sep 17 00:00:00 2001
> > From: Jeff Layton <jlayton@...hat.com>
> > Date: Tue, 11 Nov 2008 16:54:16 -0500
> > Subject: [PATCH] knfsd: make all nfsd threads share an io_context
> > 
> > This apparently makes the I/O scheduler treat the threads as a group,
> > which helps throughput when sequential I/O is multiplexed over several
> > nfsd threads.
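
The diff itself was trimmed from the quote above. As a rough illustration
of the idea the commit message describes -- not the patch that was actually
posted -- here is a minimal sketch against the 2.6.28-era io_context
interfaces (get_io_context(), put_io_context() and ioc_task_link());
the nfsd_share_io_context() name, the mutex, and the call site are
assumptions:

#include <linux/iocontext.h>
#include <linux/blkdev.h>
#include <linux/mutex.h>
#include <linux/sched.h>

/*
 * Sketch only, not the posted patch: give every nfsd thread the same
 * io_context so CFQ treats their requests as one cooperating stream.
 */
static struct io_context *nfsd_shared_ioc;
static DEFINE_MUTEX(nfsd_ioc_mutex);

/* Hypothetical helper, called by each nfsd thread as it starts up. */
static void nfsd_share_io_context(void)
{
	mutex_lock(&nfsd_ioc_mutex);
	if (!nfsd_shared_ioc) {
		/*
		 * First thread: allocate an io_context for current and
		 * remember it. get_io_context() returns it with a
		 * reference held, which must eventually be put (the
		 * bug noted above).
		 */
		nfsd_shared_ioc = get_io_context(GFP_KERNEL, -1);
	} else {
		/*
		 * Later threads: drop whatever context they have and
		 * attach to the shared one, bumping its refcount and
		 * task count the same way copy_io() does for CLONE_IO.
		 */
		put_io_context(current->io_context);
		ioc_task_link(nfsd_shared_ioc);
		current->io_context = nfsd_shared_ioc;
	}
	mutex_unlock(&nfsd_ioc_mutex);
}

The first thread to start allocates the context; every later thread drops
its own and links to the shared one, so the I/O scheduler sees all nfsd
reads and writes as coming from a single context.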
> 
> That's a lot more nifty than my stupid CLONE_IO flag addition. Both are
> only good for test purposes, though.
> 
> It's a bit difficult to make this really mergeable. I don't know
> anything about how nfsd manages its thread pool, but something more
> appropriate would be an io_context per client mount. That's still not
> perfect, as you could easily have more than one process doing
> simultaneous IO on the client side, but it's a lot better.
> 

Ahh, good point...I wasn't thinking about it the right way. I guess what
we really need is some way to tell that a series of I/O requests
originated from the same client-side thread. NFS isn't really conducive
to this...

We might be able to do something like that with NFSv4, though. Maybe an
io_context per state owner or something. That's probably not going to
be trivial to implement, however...
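
Either variant -- an io_context per client mount or per NFSv4 state
owner -- would boil down to keeping a small pool of io_contexts keyed by
whatever identifies the stream, and attaching the right one to the nfsd
thread before it calls into the VFS. A hand-wavy sketch, where the
nfsd_ioc_entry structure, the key, and the helper name are all hypothetical:

#include <linux/iocontext.h>
#include <linux/blkdev.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sched.h>

/* Hypothetical per-stream io_context pool; nothing here is real nfsd code. */
struct nfsd_ioc_entry {
	struct list_head	list;
	unsigned long		key;	/* e.g. client address or state owner */
	struct io_context	*ioc;
};

static LIST_HEAD(nfsd_ioc_list);
static DEFINE_SPINLOCK(nfsd_ioc_lock);

/* Attach the io_context for this stream to the current nfsd thread. */
static void nfsd_attach_io_context(unsigned long key)
{
	struct nfsd_ioc_entry *e;
	struct io_context *ioc = NULL;

	spin_lock(&nfsd_ioc_lock);
	list_for_each_entry(e, &nfsd_ioc_list, list) {
		if (e->key == key) {
			ioc_task_link(e->ioc);	/* take a ref + task link */
			ioc = e->ioc;
			break;
		}
	}
	spin_unlock(&nfsd_ioc_lock);

	if (ioc) {
		put_io_context(current->io_context);
		current->io_context = ioc;
	}
	/* on a miss, a real version would allocate and insert a new entry */
}

A miss would allocate and insert a new entry, and entries would have to be
reaped when the mount or state owner goes away, which is where most of the
non-trivial work would be.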

-- 
Jeff Layton <jlayton@...hat.com>
