Message-ID: <20171212100935.dogysanf52mlwsau@hirez.programming.kicks-ass.net>
Date: Tue, 12 Dec 2017 11:09:35 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: axboe@...nel.dk, linux-kernel@...r.kernel.org, oleg@...hat.com, kernel-team@...com, osandov@...com
Subject: Re: [PATCH 2/6] blk-mq: replace timeout synchronization with a RCU and generation based scheme

I don't yet have a full picture, but let me make a few comments.

On Sat, Dec 09, 2017 at 11:25:21AM -0800, Tejun Heo wrote:

> +static u64 blk_mq_rq_aborted_gstate(struct request *rq)
> +{
> +	unsigned int start;
> +	u64 aborted_gstate;
> +
> +	do {
> +		start = u64_stats_fetch_begin(&rq->aborted_gstate_sync);
> +		aborted_gstate = rq->aborted_gstate;
> +	} while (u64_stats_fetch_retry(&rq->aborted_gstate_sync, start));
> +
> +	return aborted_gstate;
> +}

> +	/* if in-flight && overdue, mark for abortion */
> +	if ((gstate & MQ_RQ_STATE_MASK) == MQ_RQ_IN_FLIGHT &&
> +	    time_after_eq(jiffies, deadline)) {
> +		u64_stats_update_begin(&rq->aborted_gstate_sync);
> +		rq->aborted_gstate = gstate;
> +		u64_stats_update_end(&rq->aborted_gstate_sync);
> +		data->nr_expired++;
> +		hctx->nr_expired++;
> 	} else if (!data->next_set || time_after(data->next, deadline)) {

> +	/*
> +	 * ->aborted_gstate is used by the timeout to claim a specific
> +	 * recycle instance of this request. See blk_mq_timeout_work().
> +	 */
> +	struct u64_stats_sync aborted_gstate_sync;
> +	u64 aborted_gstate;

So I dislike that u64_stats_sync thingy, especially when used on a single
variable like this.

There are two alternatives, but I don't understand the code well enough to
judge the trade-offs:

 1) use gstate_seq for this too; yes, it will add some superfluous
    instructions on 64bit targets, but if timeouts are a slow path this
    might not matter (see the sketch appended below);

 2) use the pattern we use for cfs_rq::min_vruntime, namely:

	u64 aborted_gstate;
#ifndef CONFIG_64BIT
	u64 aborted_gstate_copy;
#endif

static inline void blk_mq_rq_set_abort(struct request *rq, u64 gstate)
{
	rq->aborted_gstate = gstate;
#ifndef CONFIG_64BIT
	/* a u64 store is not atomic on 32bit; publish a copy after a barrier */
	smp_wmb();
	rq->aborted_gstate_copy = gstate;
#endif
}

static inline u64 blk_mq_rq_get_abort(struct request *rq)
{
#ifndef CONFIG_64BIT
	u64 abort, copy;

	/* retry until the value and its copy agree, i.e. the read was not torn */
	do {
		copy = rq->aborted_gstate_copy;
		smp_rmb();
		abort = rq->aborted_gstate;
	} while (abort != copy);

	return abort;
#else
	return rq->aborted_gstate;
#endif
}

which is actually _faster_ than the u64_stats_sync stuff (for a single
variable). But it might not matter; I just dislike that thing, could be me.
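
[Editorial note: a minimal sketch of what alternative (1) might look like,
assuming gstate_seq is the seqcount_t the patch under discussion adds to
struct request; the helper names here are illustrative and not from the
thread. Unlike u64_stats_sync, which compiles away on 64bit, the seqcount
operations remain on all targets, which is the "superfluous instructions"
cost mentioned above.]

/*
 * Illustrative sketch only: reuse the patch's gstate_seq seqcount to
 * protect aborted_gstate instead of a separate u64_stats_sync.
 * Assumes writers are serialized by the caller (here, the timeout path),
 * as write_seqcount_begin() requires.
 */
static inline void blk_mq_rq_store_aborted_gstate(struct request *rq, u64 gstate)
{
	write_seqcount_begin(&rq->gstate_seq);
	rq->aborted_gstate = gstate;
	write_seqcount_end(&rq->gstate_seq);
}

static inline u64 blk_mq_rq_load_aborted_gstate(struct request *rq)
{
	unsigned int seq;
	u64 aborted_gstate;

	/* retry the read if a writer bumped gstate_seq in the meantime */
	do {
		seq = read_seqcount_begin(&rq->gstate_seq);
		aborted_gstate = rq->aborted_gstate;
	} while (read_seqcount_retry(&rq->gstate_seq, seq));

	return aborted_gstate;
}

[On 64bit this still pairs every load with two sequence reads, whereas the
u64_stats_sync variant reduces to a plain load; that is the overhead
alternative (2) avoids at the cost of the extra copy field.]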