Message-ID: <CANejiEVMHhOW0N4Wn9LG0NAXMX3uOz5QzcRXZV4Ow4RTsMvN5Q@mail.gmail.com>
Date: Mon, 25 Jul 2011 09:14:47 +0800
From: Shaohua Li <shli@...nel.org>
To: Jens Axboe <jaxboe@...ionio.com>
Cc: Dan Williams <dan.j.williams@...el.com>,
Christoph Hellwig <hch@...radead.org>,
Roland Dreier <roland@...estorage.com>,
Dave Jiang <dave.jiang@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>
Subject: Re: [RFC PATCH 1/2] block: strict rq_affinity
2011/7/24 Jens Axboe <jaxboe@...ionio.com>:
> On 2011-07-22 22:59, Dan Williams wrote:
>> Some storage controllers benefit from completions always being steered
>> to the strict requester cpu rather than the looser "per-socket" steering
>> that blk_cpu_to_group() attempts by default.
>>
>> echo 2 > /sys/block/<bdev>/queue/rq_affinity
>
> I have applied this one, with a modified patch description.
>
> I like the adaptive solution, but it should be rewritten to not declare
> and expose softirq internals. Essentially have an API from
> kernel/softirq.c that can return whether a given (or perhaps just local)
> softirq handler is busy or not.
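A minimal sketch of the kind of kernel/softirq.c query Jens describes above; the helper names, the per-cpu counter, and its wiring into the dispatch path are illustrative assumptions, not an existing kernel API:

/*
 * Hypothetical helpers for kernel/softirq.c (names are illustrative only).
 * They would let callers such as blk-softirq.c ask whether the local
 * CPU's handler for a given softirq is currently loaded, without the
 * block layer declaring or poking at softirq internals itself.
 */
#include <linux/interrupt.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned int, softirq_inflight[NR_SOFTIRQS]);

/* called from the softirq dispatch path around each handler invocation */
void softirq_account_start(unsigned int nr)
{
	__this_cpu_inc(softirq_inflight[nr]);
}

void softirq_account_end(unsigned int nr)
{
	__this_cpu_dec(softirq_inflight[nr]);
}

/*
 * Query a caller could use to decide whether to force completions onto
 * the submitting CPU (rq_affinity == 2 style) or fall back to the
 * per-group CPU when the local handler is already busy.
 */
bool local_softirq_busy(unsigned int nr)
{
	return __this_cpu_read(softirq_inflight[nr]) > 0;
}

Keeping the bookkeeping per-cpu means the hot path never touches another CPU's cacheline, which matters given the contention concern raised below.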
Jens,
I posted a similar patch about two years ago
(http://marc.info/?l=linux-kernel&m=126136252929329&w=2).
At that time you actually ran a lot of tests and said the same-cpu
approach would cause huge lock contention and cache-line bouncing. Has that been fixed?
Thanks,
Shaohua