Message-ID: <94ec3d97-f75f-645d-94f1-24d3fd476940@oracle.com>
Date:   Wed, 14 Nov 2018 10:15:17 +0800
From:   "jianchao.wang" <jianchao.w.wang@...cle.com>
To:     Jens Axboe <axboe@...nel.dk>
Cc:     ming.lei@...hat.com, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V6 3/5] blk-mq: ensure hctx to be ran on mapped cpu when
 issue directly

Hi Jens

Thanks for your kind response.

On 11/13/18 9:44 PM, Jens Axboe wrote:
> On 11/13/18 2:56 AM, Jianchao Wang wrote:
>> When a request is issued directly and the task has been migrated
>> off the original cpu where it allocated the request, the hctx could
>> be run on a cpu to which it is not mapped.
>> To fix this:
>>  - insert the request forcibly if BLK_MQ_F_BLOCKING is set.
>>  - check whether the current cpu is mapped to the hctx; if not,
>>    insert forcibly.
>>  - invoke __blk_mq_issue_directly with preemption disabled.
> 
> I'm not too crazy about this one, adding a get/put_cpu() in the hot
> path, and a cpumask test. The fact is that few, if any, drivers care
> about strict placement. We always try to do so, if convenient,
> since it's faster, but this seems to be doing the opposite.
> 
> I'd be more inclined to have a driver flag if it needs guaranteed
> placement, using an ops BLK_MQ_F_STRICT_CPU flag or similar.
> 
> What do you think?
> 

I'm inclined to think blk-mq should follow one unified rule, whether
on the direct-issue path or the insert path; that would give blk-mq a
simpler model. The guarantee could also be of some benefit to
drivers, especially where the cpu to hw queue mapping is 1:1.
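
For illustration only, here is a rough sketch of the check on the
direct-issue path. It is not the actual patch: the call site and
argument lists are simplified, but the helpers it uses
(preempt_disable/preempt_enable, smp_processor_id, cpumask_test_cpu,
blk_mq_sched_insert_request, __blk_mq_issue_directly) all exist in
blk-mq today.

	/* In blk_mq_try_issue_directly(), simplified sketch: */
	preempt_disable();
	if ((hctx->flags & BLK_MQ_F_BLOCKING) ||
	    !cpumask_test_cpu(smp_processor_id(), hctx->cpumask)) {
		/*
		 * ->queue_rq may sleep, or we are on an unmapped cpu:
		 * fall back to inserting the request.
		 */
		preempt_enable();
		blk_mq_sched_insert_request(rq, false, true, false);
		return BLK_STS_OK;
	}
	/* Issue while preemption is disabled, so we cannot migrate. */
	ret = __blk_mq_issue_directly(hctx, rq, cookie);
	preempt_enable();

The preempt_disable() section is what pins the task to the mapped cpu
between the cpumask test and the actual issue.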

Regarding the hot path, is your concern about the nvme device?
If so, how about splitting out a standalone path for it?
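
If you prefer the flag approach, the gate could look roughly like the
sketch below. BLK_MQ_F_STRICT_CPU does not exist; the name comes from
your mail and the bit value is chosen arbitrarily here.

	/* Hypothetical opt-in flag; a free bit would be needed. */
	#define BLK_MQ_F_STRICT_CPU	(1 << 9)

	/*
	 * Only drivers that request strict placement pay for the
	 * cpumask test. raw_smp_processor_id() is advisory here since
	 * preemption is not disabled at this point.
	 */
	if ((hctx->flags & BLK_MQ_F_STRICT_CPU) &&
	    !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
		blk_mq_sched_insert_request(rq, false, true, false);
		return BLK_STS_OK;
	}

That would confine the per-request cost to drivers that opt in, at
the price of two placement models instead of one.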

Thanks
Jianchao


