Message-ID: <1406066975.2970.805.camel@schen9-DESK>
Date: Tue, 22 Jul 2014 15:09:35 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Herbert Xu <herbert@...dor.apana.org.au>, "H. Peter Anvin" <hpa@...or.com>,
	"David S. Miller" <davem@...emloft.net>, Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>
Cc: Chandramouli Narayanan <mouli@...ux.intel.com>, Vinodh Gopal <vinodh.gopal@...el.com>,
	James Guilford <james.guilford@...el.com>, Wajdi Feghali <wajdi.k.feghali@...el.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>, Jussi Kivilinna <jussi.kivilinna@....fi>,
	Thomas Gleixner <tglx@...utronix.de>, Tadeusz Struk <tadeusz.struk@...el.com>,
	tkhai@...dex.ru, linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v5 3/7] crypto: SHA1 multibuffer crypto opportunistic flush

The crypto daemon can take advantage of available CPU cycles to flush
any unfinished jobs if it is the only task running on the CPU and there
are no more crypto jobs to process.

Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
---
 crypto/mcryptd.c | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
index 622d6b4..dbc20d1 100644
--- a/crypto/mcryptd.c
+++ b/crypto/mcryptd.c
@@ -116,9 +116,40 @@ static int mcryptd_enqueue_request(struct mcryptd_queue *queue,
 	return err;
 }
 
-/* Called in workqueue context, do one real cryption work (via
+/*
+ * Try to opportunistically flush the partially completed jobs if
+ * crypto daemon is the only task running.
+ */
+static void mcryptd_opportunistic_flush(void)
+{
+	struct mcryptd_flush_list *flist;
+	struct mcryptd_alg_cstate *cstate;
+
+	flist = per_cpu_ptr(mcryptd_flist, smp_processor_id());
+	while (single_task_running()) {
+		mutex_lock(&flist->lock);
+		if (list_empty(&flist->list)) {
+			mutex_unlock(&flist->lock);
+			return;
+		}
+		cstate = list_entry(flist->list.next,
+				struct mcryptd_alg_cstate, flush_list);
+		if (!cstate->flusher_engaged) {
+			mutex_unlock(&flist->lock);
+			return;
+		}
+		list_del(&cstate->flush_list);
+		cstate->flusher_engaged = false;
+		mutex_unlock(&flist->lock);
+		cstate->alg_state->flusher(cstate);
+	}
+}
+
+/*
+ * Called in workqueue context, do one real cryption work (via
 * req->complete) and reschedule itself if there are more work to
- * do. */
+ * do.
+ */
 static void mcryptd_queue_worker(struct work_struct *work)
 {
 	struct mcryptd_cpu_queue *cpu_queue;
@@ -144,8 +175,10 @@ static void mcryptd_queue_worker(struct work_struct *work)
 	preempt_enable();
 	local_bh_enable();
 
-	if (!req)
+	if (!req) {
+		mcryptd_opportunistic_flush();
 		return;
+	}
 
 	if (backlog)
 		backlog->complete(backlog, -EINPROGRESS);
-- 
1.7.11.7