Date:	Wed, 21 Apr 2010 16:52:11 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Mike Snitzer <snitzer@...hat.com>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	"Theodore Ts'o" <tytso@....edu>, linux-ext4@...r.kernel.org,
	linux-kernel@...r.kernel.org, dm-devel@...hat.com
Subject: Re: [patch,rfc v2] ext3/4: enhance fsync performance when using cfq

Mike Snitzer <snitzer@...hat.com> writes:

> On Thu, Apr 8, 2010 at 10:09 AM, Jens Axboe <jens.axboe@...cle.com> wrote:
>> On Thu, Apr 08 2010, Vivek Goyal wrote:
>>> On Thu, Apr 08, 2010 at 01:04:42PM +0200, Jens Axboe wrote:
>>> > On Wed, Apr 07 2010, Vivek Goyal wrote:
>>> > > On Wed, Apr 07, 2010 at 05:18:12PM -0400, Jeff Moyer wrote:
>>> > > > Hi again,
>>> > > >
>>> > > > So, here's another stab at fixing this.  This patch is very much an RFC,
>>> > > > so do not pull it into anything bound for Linus.  ;-)  For those new to
>>> > > > this topic, here is the original posting:  http://lkml.org/lkml/2010/4/1/344
>>> > > >
>>> > > > The basic problem is that, when running iozone on smallish files (up to
>>> > > > 8MB in size) and including fsync in the timings, deadline outperforms
>>> > > > CFQ by a factor of about 5 for 64KB files, and by about 10% for 8MB
>>> > > > files.  From examining the blktrace data, it appears that iozone will
>>> > > > issue an fsync() call, and will have to wait until its CFQ timeslice
>>> > > > has expired before the journal thread can run to actually commit data to
>>> > > > disk.
>>> > > >
>>> > > > The approach below puts an explicit call into the filesystem-specific
>>> > > > fsync code to yield the disk so that the jbd[2] process has a chance to
>>> > > > issue I/O.  This brings the performance of CFQ in line with deadline.
>>> > > >
>>> > > > There is one outstanding issue with the patch that Vivek pointed out.
>>> > > > Basically, this could starve out the sync-noidle workload if there is a
>>> > > > lot of fsync-ing going on.  I'll address that in a follow-on patch.  For
>>> > > > now, I wanted to get the idea out there for others to comment on.
>>> > > >
>>> > > > Thanks a ton to Vivek for spotting the problem with the initial
>>> > > > approach, and for his continued review.
>>> > > >
> ...
>>> > > So we got to take care of two issues now.
>>> > >
>>> > > - Make it work with dm/md devices also. Somehow we will have to
>>> > >   propagate this yield semantic down the stack.
>>> >
>>> > The way that Jeff set it up, it's completely parallel to e.g. congestion
>>> > or unplugging. So that should be easily doable.
>>> >
>>>
>>> Ok, so various dm targets now need to define "yield_fn" and propagate the
>>> yield call to all the component devices.
>>
>> Exactly.
>
> To do so, doesn't DM (and MD) need a blk_queue_yield() setter to
> establish its own yield_fn?  The established dm_yield_fn would call
> blk_yield() for all real devices in a given DM target.  Something like
> how blk_queue_merge_bvec() or blk_queue_make_request() allows DM to
> provide functional extensions.
>
> I'm not seeing such a yield_fn hook for stacking drivers to use.  And
> as it stands, jbd and jbd2 just call blk_yield() directly; there is no
> way for the block layer to call into DM.
>
> What am I missing?

Nothing, it is I who am missing something (extra code).  When I send out
the next version, I'll add the setter function and ensure that
queue->yield_fn is called from blk_yield.  Hopefully that's not viewed
as upside down.  We'll see.
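
Roughly something like the following untested sketch.  The elv_yield()
leaf-device hook is hypothetical here, and the dm side just reuses the
existing iterate_devices callout; none of this is the final patch:

typedef void (yield_fn)(struct request_queue *q);

/* block layer: setter so stacking drivers can supply their own policy */
void blk_queue_yield(struct request_queue *q, yield_fn *fn)
{
	q->yield_fn = fn;
}
EXPORT_SYMBOL_GPL(blk_queue_yield);

void blk_yield(struct request_queue *q)
{
	if (q->yield_fn) {
		/* stacked device: dm/md passes the yield down the stack */
		q->yield_fn(q);
		return;
	}
	elv_yield(q);	/* hypothetical elevator hook; cfq ends its slice */
}
EXPORT_SYMBOL(blk_yield);

/* dm side, registered with blk_queue_yield(md->queue, dm_yield_fn): */
static int device_yield(struct dm_target *ti, struct dm_dev *dev,
			sector_t start, sector_t len, void *data)
{
	blk_yield(bdev_get_queue(dev->bdev));
	return 0;
}

static void dm_yield_fn(struct request_queue *q)
{
	struct mapped_device *md = q->queuedata;
	struct dm_table *map = dm_get_live_table(md);
	unsigned int i;

	if (!map)
		return;
	for (i = 0; i < dm_table_get_num_targets(map); i++) {
		struct dm_target *ti = dm_table_get_target(map, i);

		if (ti->type->iterate_devices)
			ti->type->iterate_devices(ti, device_yield, NULL);
	}
	dm_table_put(map);
}

md would be analogous: each personality walks its member disks and calls
blk_yield() on the underlying queues.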

Thanks for the review, Mike!

-Jeff
