Message-Id: <200808211933.34565.nickpiggin@yahoo.com.au>
Date:	Thu, 21 Aug 2008 19:33:34 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	Dave Chinner <david@...morbit.com>
Cc:	gus3 <musicman529@...oo.com>,
	Szabolcs Szakacsits <szaka@...s-3g.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	xfs@....sgi.com
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous snapshotting file system)

On Thursday 21 August 2008 18:53, Dave Chinner wrote:
> On Thu, Aug 21, 2008 at 05:00:39PM +1000, Nick Piggin wrote:
> > On Thursday 21 August 2008 16:14, Dave Chinner wrote:
> > > I think that we need to issue explicit unplugs to get the log I/O
> > > dispatched the way we want on all elevators and stop trying to
> > > give elevators implicit hints by abusing the bio types and hoping
> > > they do the right thing....
> >
> > FWIW, my explicit plugging idea is still hanging around in one of
> > Jens' block trees (actually he refreshed it a couple of months ago).
> >
> > It provides an API for VM or filesystems to plug and unplug
> > requests coming out of the current process, and it can reduce the
> > need to idle the queue. Needs more performance analysis and tuning
> > though.
>
> We've already got plenty of explicit unplugs in XFS to get stuff
> moving quickly - I'll just have to add another....

That doesn't really help at the elevator, though.
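Concretely, the old-style explicit unplug and the per-process API look
roughly like the sketch below. Note the plug half uses the names the
patches eventually landed under, so the two fragments target different
trees -- this is illustration, not code from either one:

	#include <linux/blkdev.h>

	/*
	 * 2.6.26-era explicit unplug: force dispatch of whatever is
	 * plugged on the queue instead of waiting for the unplug timer.
	 */
	static void kick_bdev(struct block_device *bdev)
	{
		blk_unplug(bdev_get_queue(bdev));
	}

	/*
	 * The per-process plugging API from Jens' tree, in the form it
	 * later merged as: requests accumulate on a per-task plug list
	 * and go to the elevator as one batch at blk_finish_plug().
	 */
	static void batched_submit(void)
	{
		struct blk_plug plug;

		blk_start_plug(&plug);
		/* ... build and submit a batch of bios here ... */
		blk_finish_plug(&plug);
	}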


> > But existing plugging is below the level of the elevators, and should
> > only kick in for at most tens of ms at queue idle events, so it sounds
> > like it may not be your problem. Elevators will need some hint to give
> > priority to specific requests -- either via the current thread's io
> > priority, or information attached to bios.
>
> It's getting too bloody complex, IMO. What is right for one elevator
> is wrong for another, so as a filesystem developer I have to pick
> one to target.

I don't really see it as too complex. If you know how you want the
request to be handled, then it should be possible to implement.
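FWIW, in 2.6.26 terms the bio-side hint looks roughly like this (a
sketch, not actual XFS code; CFQ is the main consumer of these flags).
The thread-side route is the ioprio_set() syscall, e.g. with
IOPRIO_CLASS_RT for the submitting task:

	#include <linux/bio.h>
	#include <linux/fs.h>

	/*
	 * Mark the bio as metadata and submit it synchronous so CFQ
	 * queues it with latency-sensitive I/O.  WRITE_SYNC here is
	 * WRITE | (1 << BIO_RW_SYNC).
	 */
	static void submit_hinted(struct bio *bio)
	{
		bio->bi_rw |= 1 << BIO_RW_META;
		submit_bio(WRITE_SYNC, bio);
	}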


> With the way the elevators have been regressing, 
> improving and changing behaviour,

AFAIK deadline, AS, and noop haven't significantly changed for years.


> I am starting to think that I 
> should be picking the noop scheduler.
> Any 'advanced' scheduler that 
> is slower on the same test than the noop scheduler needs fixing...

I disagree. On devices with no seek penalty, or with their own queueing,
noop is often the best choice. Same for specialized apps that do
their own disk scheduling.
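
For anyone who does want to pin a device to noop, it's one write to the
queue's sysfs attribute at runtime (minimal userspace sketch below;
"sda" is a placeholder device), or elevator=noop on the boot command
line for a global default:

	#include <stdio.h>

	/* Select the noop elevator for one device by writing to its
	 * sysfs scheduler attribute ("sda" is a placeholder). */
	int main(void)
	{
		FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

		if (!f)
			return 1;
		fputs("noop\n", f);
		return fclose(f) ? 1 : 0;
	}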