Message-ID: <20100408110045.GJ10103@kernel.dk>
Date: Thu, 8 Apr 2010 13:00:45 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: Theodore Ts'o <tytso@....edu>, Vivek Goyal <vgoyal@...hat.com>,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch,rfc v2] ext3/4: enhance fsync performance when using cfq
On Wed, Apr 07 2010, Jeff Moyer wrote:
> Hi again,
>
> So, here's another stab at fixing this. This patch is very much an RFC,
> so do not pull it into anything bound for Linus. ;-) For those new to
> this topic, here is the original posting: http://lkml.org/lkml/2010/4/1/344
>
> The basic problem is that, when running iozone on smallish files (up to
> 8MB in size) and including fsync in the timings, deadline outperforms
> CFQ by a factor of about 5 for 64KB files, and by about 10% for 8MB
> files. From examining the blktrace data, it appears that iozone will
> issue an fsync() call, and will have to wait until its CFQ timeslice
> has expired before the journal thread can run to actually commit data to
> disk.
>
> The approach below puts an explicit call into the filesystem-specific
> fsync code to yield the disk so that the jbd[2] process has a chance to
> issue I/O. This brings performance of CFQ in line with deadline.
>
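(For readers who haven't followed the original thread: a rough sketch of the
fs-side call path being described, illustrative only and not the patch itself.
After flushing the data and kicking the journal, the fsync path looks up the
device's request queue and asks the elevator to give up the current slice.
example_write_and_commit() is a hypothetical stand-in for the fs-specific
work, and blk_yield() names the generic block-layer hook a patch like this
needs; the actual patch reaches the queue via the bdi.)

#include <linux/fs.h>
#include <linux/blkdev.h>

extern void blk_yield(struct request_queue *q);		/* hook added by the patch */
static int example_write_and_commit(struct inode *inode, int datasync); /* hypothetical */

/*
 * Illustrative sketch: once the data and journal commit have been kicked
 * off, yield the submitting task's I/O scheduler slice so the jbd[2]
 * thread can issue its I/O right away instead of waiting for CFQ's idle
 * window to expire.
 */
static int example_fsync(struct file *file, struct dentry *dentry, int datasync)
{
	struct inode *inode = dentry->d_inode;
	struct block_device *bdev = inode->i_sb->s_bdev;
	int ret;

	/* fs-specific data flush + journal commit */
	ret = example_write_and_commit(inode, datasync);

	/*
	 * Hand the disk over.  bdev_get_queue() is just the shortest way
	 * to show the idea here; the patch routes bdi -> queue instead.
	 */
	if (bdev)
		blk_yield(bdev_get_queue(bdev));

	return ret;
}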
> There is one outstanding issue with the patch that Vivek pointed out.
> Basically, this could starve out the sync-noidle workload if there is a
> lot of fsync-ing going on. I'll address that in a follow-on patch. For
> now, I wanted to get the idea out there for others to comment on.
>
> Thanks a ton to Vivek for spotting the problem with the initial
> approach, and for his continued review.
I like the concept, it's definitely useful (and your results amply
demonstrate that). I was wondering whether there was a way in through the ioc
itself, rather than bdi -> queue like you are doing. But I can't
think of a nice way to do it, so this is probably as good as it gets.
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index dee9d93..a76ccd1 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -2065,7 +2065,7 @@ static void cfq_choose_cfqg(struct cfq_data *cfqd)
> cfqd->serving_group = cfqg;
>
> /* Restore the workload type data */
> - if (cfqg->saved_workload_slice) {
> + if (cfqg && cfqg->saved_workload_slice) {
> cfqd->workload_expires = jiffies + cfqg->saved_workload_slice;
> cfqd->serving_type = cfqg->saved_workload;
> cfqd->serving_prio = cfqg->saved_serving_prio;
Unrelated change?
> +static void cfq_yield(struct request_queue *q)
> +{
> + struct cfq_data *cfqd = q->elevator->elevator_data;
> + struct cfq_io_context *cic;
> + struct cfq_queue *cfqq;
> + unsigned long flags;
> +
> + cic = cfq_cic_lookup(cfqd, current->io_context);
> + if (!cic)
> + return;
> +
> + spin_lock_irqsave(q->queue_lock, flags);
spin_lock_irq() is sufficient here.
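IOW, assuming the only callers are the fsync paths (process context,
interrupts enabled), something like:

static void cfq_yield(struct request_queue *q)
{
	struct cfq_data *cfqd = q->elevator->elevator_data;
	struct cfq_io_context *cic;

	cic = cfq_cic_lookup(cfqd, current->io_context);
	if (!cic)
		return;

	/* callers always have interrupts enabled, no need to save flags */
	spin_lock_irq(q->queue_lock);
	/* ... rest of the yield logic as in the patch ... */
	spin_unlock_irq(q->queue_lock);
}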
--
Jens Axboe