Message-Id: <20171212190134.535941-6-tj@kernel.org>
Date: Tue, 12 Dec 2017 11:01:33 -0800
From: Tejun Heo <tj@...nel.org>
To: axboe@...nel.dk
Cc: linux-kernel@...r.kernel.org, oleg@...hat.com, peterz@...radead.org,
	kernel-team@...com, osandov@...com, linux-block@...r.kernel.org,
	hch@....de, Tejun Heo <tj@...nel.org>,
	"jianchao.wang" <jianchao.w.wang@...cle.com>
Subject: [PATCH 5/6] blk-mq: remove REQ_ATOM_COMPLETE usages from blk-mq

After the recent updates to use generation number and state based
synchronization, blk-mq no longer depends on REQ_ATOM_COMPLETE for
anything.  Remove all REQ_ATOM_COMPLETE usages.  This removes atomic
bitops from hot paths too.

v2: Removed blk_clear_rq_complete() from blk_mq_rq_timed_out().

Signed-off-by: Tejun Heo <tj@...nel.org>
Cc: "jianchao.wang" <jianchao.w.wang@...cle.com>
---
 block/blk-mq.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 73d6444..7269552 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -596,14 +596,12 @@ void blk_mq_complete_request(struct request *rq)
 	 */
 	if (!(hctx->flags & BLK_MQ_F_BLOCKING)) {
 		rcu_read_lock();
-		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate &&
-		    !blk_mark_rq_complete(rq))
+		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
 			__blk_mq_complete_request(rq);
 		rcu_read_unlock();
 	} else {
 		srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
-		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate &&
-		    !blk_mark_rq_complete(rq))
+		if (blk_mq_rq_aborted_gstate(rq) != rq->gstate)
 			__blk_mq_complete_request(rq);
 		srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
 	}
@@ -650,8 +648,6 @@ void blk_mq_start_request(struct request *rq)
 	write_seqcount_end(&rq->gstate_seq);
 
 	set_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
-	if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
-		clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 
 	if (q->dma_drain_size && blk_rq_bytes(rq)) {
 		/*
@@ -819,7 +815,6 @@ static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 		req->aborted_gstate = 0;
 		u64_stats_update_end(&req->aborted_gstate_sync);
 		blk_add_timer(req);
-		blk_clear_rq_complete(req);
 		break;
 	case BLK_EH_NOT_HANDLED:
 		break;
@@ -870,8 +865,7 @@ static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
 	 * now guaranteed to see @rq->aborted_gstate and yield.  If
 	 * @rq->aborted_gstate still matches @rq->gstate, @rq is ours.
 	 */
-	if (READ_ONCE(rq->gstate) == rq->aborted_gstate &&
-	    !blk_mark_rq_complete(rq))
+	if (READ_ONCE(rq->gstate) == rq->aborted_gstate)
 		blk_mq_rq_timed_out(rq, reserved);
 }
-- 
2.9.5
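
For context on why the REQ_ATOM_COMPLETE bit becomes redundant: the check
the patch keeps compares a per-request generation number against the
generation the timeout path marked as aborted, and that comparison alone
decides which path owns the request instance.  The stand-alone C program
below is a deliberately simplified user-space sketch of that idea.  All
names in it (struct req, start_request, mark_aborted, try_complete) are
illustrative stand-ins, not kernel APIs, and the real rq->gstate is a
seqcount-protected value that also encodes state bits rather than a bare
atomic counter.

/*
 * Simplified user-space sketch (C11 atomics) of the generation-based
 * ownership scheme the commit message refers to.  Not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct req {
	atomic_ulong gstate;         /* bumped each time the request is (re)started */
	atomic_ulong aborted_gstate; /* generation the timeout path has claimed */
};

/* Rough analogue of blk_mq_start_request(): open a new generation. */
static void start_request(struct req *rq)
{
	atomic_store(&rq->aborted_gstate, 0);
	atomic_fetch_add(&rq->gstate, 1);
}

/* Timeout path: claim the currently in-flight generation. */
static void mark_aborted(struct req *rq)
{
	atomic_store(&rq->aborted_gstate, atomic_load(&rq->gstate));
}

/*
 * Completion path, mirroring the patched check: the request may be
 * completed only while the timeout path has not claimed this exact
 * generation, so no separate REQ_ATOM_COMPLETE bit is needed.
 */
static bool try_complete(struct req *rq)
{
	if (atomic_load(&rq->aborted_gstate) != atomic_load(&rq->gstate)) {
		/* __blk_mq_complete_request(rq) would run here */
		return true;
	}
	return false; /* the timeout path owns this instance */
}

int main(void)
{
	struct req rq = { 0 };

	start_request(&rq);
	printf("fresh request completes:    %d\n", try_complete(&rq)); /* 1 */

	mark_aborted(&rq);
	printf("aborted request completes:  %d\n", try_complete(&rq)); /* 0 */

	start_request(&rq); /* recycled: generation moves past the abort mark */
	printf("recycled request completes: %d\n", try_complete(&rq)); /* 1 */
	return 0;
}

The design point the commit message makes is visible here: ownership of a
request instance is arbitrated by a plain comparison of two generation
values, so the hot completion path no longer needs the locked
read-modify-write bitop that blk_mark_rq_complete() performed.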