Message-ID: <20110901041848.GO32358@dastard>
Date:	Thu, 1 Sep 2011 14:18:48 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Andi Kleen <andi@...stfloor.org>
Cc:	Christoph Hellwig <hch@...radead.org>,
	Daniel Ehrenberg <dehrenberg@...gle.com>,
	linux-kernel@...r.kernel.org
Subject: Re: Approaches to making io_submit not block

On Wed, Aug 31, 2011 at 10:08:50AM -0700, Andi Kleen wrote:
> Christoph Hellwig <hch@...radead.org> writes:
> >
> > I'll get it polished up and send it out for RFC once Dave sends out
> > the updated allocation workqueue patch.  With this he moves all
> > allocator calls in XFS into a workqueue.  My direct I/O patch uses that
> > fact to use that workqueue for the allocator call
> 
> Is that really a good direction? The problem when you push operations
> from multiple threads all into a single resource (per cpu workqueue)
> is that the CPU scheduler loses control over that because they
> are all mixed up.

Allocations are already serialised by a single resource - the AGF
lock - so whether they block on the workqueue queue or on the AGF
lock is irrelevant to scheduling. And a single thread can only have
a single allocation outstanding at a time because the caller has to
block waiting for the allocation to complete before moving on. 

> So if one guy submits a lot and another very little the "a lot" guy
> can overwhelm the queue for the very little guy.

If we get lots of allocations queued on the one per-CPU wq, they
will have all had to come from different contexts. In which case,
FIFO processing of the work queued up is *exactly* the fairness we
want, because that is exactly what doing them from process context
would end up with.
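
The fairness argument above can be illustrated with a minimal sketch (Python stands in for a per-CPU kernel workqueue; all names here are illustrative, not from the XFS patch). Because each context blocks until its allocation completes, no context ever has more than one item queued, and strict FIFO draining preserves the round-robin interleaving the submitters produce:

```python
from collections import deque

# Minimal FIFO queue standing in for a per-CPU workqueue. Each
# (ctx, n) item represents one allocation request queued from a
# process context; a single context never has two items in flight,
# mirroring the "one outstanding allocation per thread" point above.
queue = deque()

# Three contexts each submit their next allocation only after the
# previous one completes, so submissions interleave round-robin.
for n in range(3):
    for ctx in ("A", "B", "C"):
        queue.append((ctx, n))

completed = []
while queue:
    completed.append(queue.popleft())  # strict FIFO processing

# FIFO preserves the submission interleaving: no context can starve
# another by batching, because each has at most one item queued.
print(completed[:3])  # → [('A', 0), ('B', 0), ('C', 0)]
```

The point is that FIFO over one-item-per-context submissions is equivalent to the scheduling those contexts would have received running the work themselves.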

If the allocation work blocks (either on locks or metadata reads),
the workqueue is configured with a high concurrency limit per CPU
(I think I set it to the maximum of 512 work items per CPU), so
other allocations pending on the same per-CPU workqueue can run in
the meantime.
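
That behaviour can be approximated with a thread pool as a stand-in for a workqueue whose per-CPU concurrency limit is well above one (the pool size here is illustrative, as is the simulated blocking; the 512 figure is the kernel's workqueue maximum): a work item that blocks does not stall the items queued behind it.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A pool with several workers stands in for a workqueue with a deep
# per-CPU concurrency limit: one blocked work item does not prevent
# other queued items from running.
blocker = threading.Event()
results = []

def blocking_alloc():
    # Simulates an allocation that blocks on a metadata read.
    blocker.wait()
    results.append("blocked-alloc done")

def fast_alloc(i):
    results.append(f"alloc {i} done")

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.submit(blocking_alloc)                     # queued first, then blocks
    futs = [pool.submit(fast_alloc, i) for i in range(3)]
    for f in futs:
        f.result()                                  # fast allocations complete first
    blocker.set()                                   # only now release the blocked item

# All three fast allocations finished while the first item was blocked;
# the blocked item completed last.
print(sorted(results[:3]))
```

With a concurrency limit of 1 the same program would deadlock at `f.result()`, which is exactly why the workqueue needs a deep `max_active` when the work can sleep.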

> We also have similar problems with the IO schedulers, which also
> rely on process context to make fairness decisions. If you remove
> the process context they do badly.

Which, IMO, is a significant failing of the IO scheduler in question
(CFQ) because it'll perform badly the moment your application or
filesystem uses a multithreaded IO architecture.  Filesystem metadata
is a global resource, not a per-process context resource, so IO
schedulers need to treat it that way.

Indeed, taking the allocation IO out of the process context means
the filesystem operations are not subject to process context based
throttling. That avoids priority inversion problems where a low
priority process is throttled on a metadata read IO needed to
complete an allocation that a high priority process is waiting
on...

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
