Message-ID: <4E2D2784.3060701@fusionio.com>
Date: Mon, 25 Jul 2011 10:21:24 +0200
From: Jens Axboe <jaxboe@...ionio.com>
To: Shaohua Li <shli@...nel.org>
CC: Dan Williams <dan.j.williams@...el.com>,
Christoph Hellwig <hch@...radead.org>,
Roland Dreier <roland@...estorage.com>,
Dave Jiang <dave.jiang@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>
Subject: Re: [RFC PATCH 1/2] block: strict rq_affinity
On 2011-07-25 03:14, Shaohua Li wrote:
> 2011/7/24 Jens Axboe <jaxboe@...ionio.com>:
>> On 2011-07-22 22:59, Dan Williams wrote:
>>> Some storage controllers benefit from completions always being steered
>>> to the strict requester cpu rather than the looser "per-socket" steering
>>> that blk_cpu_to_group() attempts by default.
>>>
>>> echo 2 > /sys/block/<bdev>/queue/rq_affinity
>>
>> I have applied this one, with a modified patch description.
>>
>> I like the adaptive solution, but it should be rewritten to not declare
>> and expose softirq internals. Essentially have an API from
>> kernel/softirq.c that can return whether a given (or perhaps just local)
>> softirq handler is busy or not.
> Jens,
> I posted a similar patch about two years ago
> (http://marc.info/?l=linux-kernel&m=126136252929329&w=2).
> At that time, you actually did a lot of tests and said the same-cpu
> approach would cause heavy lock contention and bouncing. Has that been
> fixed?
Yep, it's not ideal. But if we are running out of steam on a single
processor, there's really not much of an option currently.
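
To make the softirq suggestion above a bit more concrete, here is a
minimal standalone sketch of the interface shape only. This is not
actual kernel code; all names (local_softirq_busy, softirq_depth,
this_cpu_id, BLOCK_SOFTIRQ_NR) are made up for illustration, and the
real thing would live in kernel/softirq.c and use the kernel's per-cpu
machinery.

    /*
     * Sketch of the proposed export: kernel/softirq.c keeps its own
     * per-cpu bookkeeping and answers a yes/no question, so callers
     * such as the block layer never touch softirq internals directly.
     */
    #include <stdbool.h>

    #define NR_SOFTIRQS	10
    #define NR_CPUS	64

    enum { BLOCK_SOFTIRQ_NR = 4 };	/* illustrative slot */

    /* bumped when a handler is raised, dropped when it finishes */
    static unsigned int softirq_depth[NR_CPUS][NR_SOFTIRQS];

    static int this_cpu_id(void)
    {
    	return 0;	/* stand-in for smp_processor_id() */
    }

    /* is softirq 'nr' currently busy on the local cpu? */
    bool local_softirq_busy(unsigned int nr)
    {
    	return nr < NR_SOFTIRQS &&
    	       softirq_depth[this_cpu_id()][nr] > 0;
    }

    /*
     * Caller side, roughly what adaptive completion steering could do:
     * only force completion back to the requester's cpu when the local
     * block softirq is already backed up.
     */
    bool want_strict_affinity(void)
    {
    	return local_softirq_busy(BLOCK_SOFTIRQ_NR);
    }

The point is just that callers ask a yes/no question and softirq.c
keeps the bookkeeping private.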
--
Jens Axboe