Message-ID: <20080204103352.GD15220@kernel.dk>
Date:	Mon, 4 Feb 2008 11:33:52 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	linux-kernel@...r.kernel.org, arjan@...ux.intel.com, mingo@...e.hu,
	ak@...e.de, James.Bottomley@...elEye.com, andrea@...e.de,
	clameter@....com, akpm@...ux-foundation.org,
	andrew.vasquez@...gic.com, willy@...ux.intel.com,
	Zach Brown <zach.brown@...cle.com>
Subject: Re: [rfc] direct IO submission and completion scalability issues

On Mon, Feb 04 2008, Nick Piggin wrote:
> On Mon, Feb 04, 2008 at 11:12:44AM +0100, Jens Axboe wrote:
> > On Sun, Feb 03 2008, Nick Piggin wrote:
> > > On Fri, Jul 27, 2007 at 06:21:28PM -0700, Suresh B wrote:
> > > 
> > > Hi guys,
> > > 
> > > Just had another way we might do this. Migrate the completions out to
> > > the submitting CPUs rather than migrate submission into the completing
> > > CPU.
> > > 
> > > I've got a basic patch that passes some stress testing. It seems fairly
> > > simple to do at the block layer, and the bulk of the patch involves
> > > introducing a scalable smp_call_function for it.
> > > 
> > > Now it could be optimised more by looking at batching up IPIs or
> > > optimising the call function path or even migrating the completion event
> > > at a different level...
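
To make the shape of that concrete, here is a minimal sketch (not the posted
patch) of bouncing a completion back to the submitting CPU with the generic
smp_call_function_single(); rq->submit_cpu and __finish_request() are made-up
names, and the sketch glosses over whether the call is safe from completion
context, which is part of what a reworked smp_call_function has to address.

#include <linux/smp.h>
#include <linux/blkdev.h>

/*
 * Sketch only: remember which CPU submitted the request and bounce the
 * completion back to it.  rq->submit_cpu and __finish_request() are
 * hypothetical names for illustration.
 */
static void remote_complete(void *data)
{
	struct request *rq = data;

	__finish_request(rq);		/* now running on the submitting CPU */
}

static void complete_request(struct request *rq)
{
	int cpu = get_cpu();

	if (cpu == rq->submit_cpu)
		__finish_request(rq);	/* already local, no IPI needed */
	else
		smp_call_function_single(rq->submit_cpu, remote_complete,
					 rq, 0);
	put_cpu();
}
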
> > > 
> > > However, this is a first cut. It actually seems like it might be taking
> > > slightly more CPU to process block IO (~0.2%)... however, this is on my
> > > dual-core system that shares an LLC, which means that there are very few
> > > cache benefits to the migration, but non-zero overhead. So on multisocket
> > > systems hopefully it might get to positive territory.
> > 
> > That's pretty funny, I did pretty much the exact same thing last week!
> 
> Oh nice ;)
> 
> 
> > The primary difference between yours and mine is that I used a more
> > private interface to signal a softirq raise on another CPU, instead of
> > allocating call data and exposing a generic interface. That put the
> > locking in blk-core instead, turning blk_cpu_done into a structure with
> > a lock and a list_head instead of just a list_head, and intercepted
> > at blk_complete_request() time instead of waiting for an already raised
> > softirq on that CPU.
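
Sketched out, the structure being described is roughly the following (not the
actual patch); rq->submit_cpu and raise_blk_softirq_on() are stand-in names
for the private remote-raise interface.

#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/percpu.h>

/*
 * Sketch of the per-CPU completion list described above.  The lock and
 * list are assumed to be initialised at boot (not shown).
 */
struct blk_comp_list {
	spinlock_t		lock;
	struct list_head	list;
};

static DEFINE_PER_CPU(struct blk_comp_list, blk_cpu_done);

void blk_complete_request(struct request *rq)
{
	struct blk_comp_list *bcl;
	unsigned long flags;
	int cpu = rq->submit_cpu;			/* hypothetical field */

	local_irq_save(flags);

	bcl = &per_cpu(blk_cpu_done, cpu);
	spin_lock(&bcl->lock);
	list_add_tail(&rq->donelist, &bcl->list);
	spin_unlock(&bcl->lock);

	if (cpu == smp_processor_id())
		raise_softirq_irqoff(BLOCK_SOFTIRQ);	/* local raise, cheap */
	else
		raise_blk_softirq_on(cpu);		/* remote raise via IPI */

	local_irq_restore(flags);
}

Note that the lock is taken even when the completion is already local, which
is exactly the overhead the reply below pushes back on.
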
> 
> Yeah I was looking at that... didn't really want to add the spinlock
> overhead to the non-migration case. Anyway, I guess that sort of
> fine implementation detail is going to have to be sorted out with
> results.

As Andi mentions, we can look into making that lockless. For the initial
implementation I didn't really care, just wanted something to play with
that would nicely allow me to control both the submit and complete side
of the affinity issue.
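
One lockless shape this could take, sketched with the kernel's llist
primitives (which did not exist at the time of this thread); rq->ipi_list,
raise_blk_softirq_on() and __finish_request() are again stand-in names:

#include <linux/llist.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);

static void queue_remote_completion(struct request *rq, int cpu)
{
	struct llist_head *head = &per_cpu(blk_cpu_done, cpu);

	/*
	 * llist_add() returns true if the list was empty, i.e. the remote
	 * softirq is not already pending -- only then is the IPI needed.
	 * Later entries piggyback on the raise already in flight.
	 */
	if (llist_add(&rq->ipi_list, head))
		raise_blk_softirq_on(cpu);
}

static void blk_done_softirq(struct softirq_action *h)
{
	struct llist_node *entry;
	struct request *rq, *next;

	entry = llist_del_all(this_cpu_ptr(&blk_cpu_done));
	entry = llist_reverse_order(entry);	/* restore completion order */

	llist_for_each_entry_safe(rq, next, entry, ipi_list)
		__finish_request(rq);
}

Pushing onto a per-CPU lockless list and sending the IPI only when the list
was empty also gives the IPI batching mentioned earlier in the thread, and it
is roughly the shape the mainline completion path eventually settled on.
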

-- 
Jens Axboe
