Date:	Mon, 19 Jul 2010 16:31:09 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	Corrado Zoccolo <czoccolo@...il.com>, axboe@...nel.dk,
	Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.

On Mon, Jul 19, 2010 at 12:08:23PM -0400, Jeff Moyer wrote:
> Vivek Goyal <vgoyal@...hat.com> writes:
> 
> > On Tue, Jul 13, 2010 at 04:30:23PM -0400, Jeff Moyer wrote:
> >> Vivek Goyal <vgoyal@...hat.com> writes:
> 
> > I don't mind looking at traces. Do let me know where I can access those.
> 
> Forwarded privately.
> 
> >> Now, to answer your question, the jbd2 thread runs and issues a barrier,
> >> which causes a forced dispatch of requests.  After that a new queue is
> >> selected, and since the fs_mark thread is blocked on the journal commit,
> >> it's always the fio process that gets to run.
> >
> > Ok, that explains it.  So somehow after the barrier, fio always wins,
> > as it issues its next read request before fs_mark is able to issue the
> > next set of writes.
> >
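For anyone following along: as I understand the 2.6.3x code, the barrier
insert path force-drains the elevator, roughly along this chain (names
from memory, so treat it as a sketch):

  elv_drain_elevator(q)                /* barrier insertion drains the queue */
    -> cfq_dispatch_requests(q, 1)     /* force != 0                         */
         -> cfq_forced_dispatch(cfqd)  /* all cfqqs emptied, active cfqq     */
                                       /* reset                              */

After that cfq_select_queue() picks fresh, and with fs_mark blocked in
the journal commit only fio's queue has requests pending.
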
> >> 
> >> This, of course, raises the question of why the blk_yield patches didn't
> >> run into the same problem.  Looking back at some saved traces, I don't
> >> see WBS (write barrier sync) requests, so I wonder if barriers weren't
> >> supported by my last storage system.
> >
> > I think the blk_yield patches will also run into the same issue if
> > barriers are enabled.
> 
> Agreed.
> 
> Here are the results again with barriers disabled for Corrado's patch:
> 
> fs_mark: 348.2 files/sec
> fio: 53324.6 KB/s
> 
> Remember that deadline was seeing 450 files/sec and 78 MB/s.  So, in
> this case, the buffered reader appears to be starved.  Looking into this
> further, I found that the journal thread is running with I/O priority 0,
> while the fio and fs_mark processes are running at the default (4).
> Because the jbd thread has a higher I/O priority, its requests are
> always closer to the front of the sort list, and thus the sync-noidle
> workload is chosen more often than the sync workload.  This essentially
> results in an elevated I/O priority for the fs_mark process as well.
> While troubling, that issue is not directly related to the problem
> we're looking at.
> 
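For reference, CFQ scales slices by ioprio roughly like this (quoting
2.6.3x cfq-iosched.c from memory, so details may be off):

  static inline int
  cfq_prio_slice(struct cfq_data *cfqd, bool sync, unsigned short prio)
  {
          const int base_slice = cfqd->cfq_slice[sync];

          WARN_ON(prio >= IOPRIO_BE_NR);

          /* prio 0 gets the largest slice; 4 is the default */
          return base_slice + (base_slice / CFQ_SLICE_SCALE * (4 - prio));
  }

cfq_slice_offset() subtracts this per-prio slice when computing the
rb_key for cfq_service_tree_add(), so the prio-0 jbd queue sorts ahead
of the prio-4 queues and its service tree keeps winning the workload
selection.
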
> So, I'm still in favor of Corrado's approach.  Are there any remaining
> dissenting opinions on this?

Nope. I am fine with moving all WRITE_SYNC with RQ_NOIDLE to the sync-noidle
tree and also with marking jbd writes as WRITE_SYNC. By bringing the
dependent threads onto a single service tree, we don't have to worry about
slice yielding.
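
In sketch form, the effect we want on every such write is something like
this (illustrative only, not the actual patch):

  /*
   * Keep queues doing WRITE_SYNC + RQ_NOIDLE on the sync-noidle service
   * tree: CFQ idles on that tree as a whole rather than per queue, so
   * the fsync-ing task and the jbd thread share the idle window and
   * neither needs to yield its slice to the other.
   */
  if (cfq_cfqq_sync(cfqq) && RQ_NOIDLE(rq))
          cfq_clear_cfqq_idle_window(cfqq); /* cfqq_type() => SYNC_NOIDLE_WORKLOAD */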

Acked-by: Vivek Goyal <vgoyal@...hat.com>

Thanks
Vivek
