Date:   Sat, 20 Jan 2018 14:11:39 +0530
From:   Pavan Kondeti <pkondeti@...eaurora.org>
To:     Frederic Weisbecker <frederic@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Levin Alexander <alexander.levin@...izon.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Mauro Carvalho Chehab <mchehab@...pensource.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Hannes Frederic Sowa <hannes@...essinduktion.org>,
        "Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Dmitry Safonov <dima@...sta.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Radu Rendec <rrendec@...sta.com>,
        Ingo Molnar <mingo@...nel.org>,
        Stanislaw Gruszka <sgruszka@...hat.com>,
        Rik van Riel <riel@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        David Miller <davem@...emloft.net>
Subject: Re: [RFC PATCH 2/4] softirq: Per vector deferment to workqueue

Hi Frederic,

On Fri, Jan 19, 2018 at 04:46:12PM +0100, Frederic Weisbecker wrote:
> Some softirq vectors can be more CPU hungry than others. Networking
> especially may sometimes deal with packet storms and need more CPU than
> IRQ tail processing can offer without inducing scheduler latencies. In
> this case the current code defers to ksoftirqd, which behaves more
> nicely. But this nice behaviour can be bad for other IRQ vectors that
> usually need quick processing.
> 
> To solve this, we defer to threads only the vectors that got
> re-enqueued during IRQ tail processing and leave the others inline on
> IRQ tail service. This is achieved using workqueues with
> per-CPU/per-vector worklets.
> 
> Note ksoftirqd is not yet removed as it is still needed for threaded IRQ
> mode.
> 

<snip>

> +static void do_softirq_workqueue(u32 pending)
> +{
> +	struct softirq *softirq = this_cpu_ptr(&softirq_cpu);
> +	struct softirq_action *h = softirq_vec;
> +	int softirq_bit;
> +
> +	pending &= ~softirq->pending_work_mask;
> +
> +	while ((softirq_bit = ffs(pending))) {
> +		struct vector *vector;
> +		unsigned int vec_nr;
> +
> +		h += softirq_bit - 1;
> +		vec_nr = h - softirq_vec;
> +		softirq->pending_work_mask |= BIT(vec_nr);
> +		vector = &softirq->vector[vec_nr];
> +		schedule_work_on(smp_processor_id(), &vector->work);
> +		h++;
> +		pending >>= softirq_bit;
> +	}
> +}
> +

I have a couple of questions/comments.

(1) Since the work is queued on a bound per-CPU worker, we may run
into a deadlock if a TASKLET is killed from another work item running
on the same bound per-CPU worker.

For example,

(1) Schedule a TASKLET on CPU#0 from IRQ.
(2) Another IRQ comes on the same CPU and we queue a work item to kill
the TASKLET.
(3) The TASKLET vector is deferred to the workqueue.
(4) We run the TASKLET kill work and wait for the TASKLET to finish,
which won't happen: the deferred vector work is queued behind the kill
work on the same bound per-CPU pool, and since tasklet_kill() only
yields (the worker stays runnable), the pool never starts it.

We can fix this by queueing the TASKLET kill work on an unbound
workqueue so that it runs in parallel with the TASKLET vector work.
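
Something like the below, perhaps (untested sketch; the tasklet and
work names are made up, system_unbound_wq is the stock unbound
workqueue):

static struct tasklet_struct my_tasklet;	/* hypothetical */

static void tasklet_kill_work_fn(struct work_struct *work)
{
	/*
	 * tasklet_kill() yields until the tasklet has run. On an
	 * unbound worker this cannot starve the bound per-CPU worker
	 * that runs the deferred TASKLET vector work.
	 */
	tasklet_kill(&my_tasklet);
}

static DECLARE_WORK(tasklet_kill_work, tasklet_kill_work_fn);

	/* instead of schedule_work_on(smp_processor_id(), ...): */
	queue_work(system_unbound_wq, &tasklet_kill_work);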

Just wanted to know if we have to be aware of this *condition*.

(2) The ksoftirqd thread gets parked when a CPU is hotplugged out, so
there is a guarantee that softirq handling never happens on another
CPU. Whereas a bound worker gets detached and the queued work can run
on another CPU. I guess some special handling is needed for hotplug.
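
For example, something along these lines (completely untested; it
reuses the softirq_cpu/vector naming from your patch, and the cpuhp
callback itself is my invention):

static int softirq_wq_offline(unsigned int cpu)
{
	struct softirq *softirq = per_cpu_ptr(&softirq_cpu, cpu);
	int i;

	/*
	 * Invoked on the outgoing CPU during offlining: flush every
	 * pending vector work so none of it gets detached and later
	 * executed on some other CPU.
	 */
	for (i = 0; i < NR_SOFTIRQS; i++)
		flush_work(&softirq->vector[i].work);

	return 0;
}

	/* registration, e.g. from softirq init code */
	cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "softirq/wq:offline",
			  NULL, softirq_wq_offline);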

Thanks,
Pavan
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
