Message-ID: <20160610121118.03f281e8@gandalf.local.home>
Date: Fri, 10 Jun 2016 12:11:18 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: alison@...oton-tech.com, LKML <linux-kernel@...r.kernel.org>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
netdev <netdev@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Clark Williams <williams@...hat.com>,
Eric Dumazet <eric.dumazet@...il.com>,
David Miller <davem@...emloft.net>
Subject: Re: [PATCH][RT] netpoll: Always take poll_lock when doing polling
On Fri, 10 Jun 2016 17:57:17 +0200
Sebastian Andrzej Siewior <bigeasy@...utronix.de> wrote:
> * Steven Rostedt | 2016-06-04 07:11:31 [-0400]:
>
> >From: Steven Rostedt <srostedt@...hat.com>
> >Date: Tue, 5 Jan 2016 14:53:09 -0500
> >Subject: [PATCH] softirq: Perform softirqs in local_bh_enable() for a limited
> > amount of time
> >
> >To prevent starvation of tasks like ksoftirqd, if the task that is
> >processing its softirqs takes more than 2 jiffies to do so, and the
> >softirqs are constantly being re-added, then defer the processing to
> >ksoftirqd.
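The idea reads roughly like the sketch below (an illustration only, not
the actual patch: handle_pending_softirqs() stands in for the real
per-softirq processing loop, and wakeup_softirqd() is the existing
static helper in kernel/softirq.c):

	/* Sketch: bound softirq processing in task context to 2 jiffies. */
	static void do_softirq_bounded(void)
	{
		unsigned long end = jiffies + 2;	/* 2-jiffy budget */

		do {
			/* Process whatever is currently pending. */
			handle_pending_softirqs();

			if (!local_softirq_pending())
				break;

			/* Softirqs keep being re-raised: punt to ksoftirqd. */
			if (time_after_eq(jiffies, end)) {
				wakeup_softirqd();
				break;
			}
		} while (1);
	}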
>
> I'm not sure about this. Alison didn't scream "yes it solves my problem"
> and I am not sure what her problem is.
> It is true that in RT we don't have such a limit as in !RT. You would
> need to use __raise_softirq_irqoff_ksoft() instead of the normal raise,
> or add a wakeup(), since you may have timers pending and those need to
> go to the "other" ksoftirqd.
> But then I don't see much change. ksoftirqd now runs at SCHED_OTHER, so
> it will end up on the CPU right away unless there are other tasks that
> need the CPU. So the scheduler will balance it the same way. The only
> change will be that softirqs which are processed in the context of any
> application for more than two jiffies will be moved to ksoftirqd. This
> could be a win.
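The routing Sebastian describes above would look roughly like the
following (illustration only: wake_timer_softirqd() is a made-up name
for whatever wakes the "other" ksoftirqd, i.e. the RT timer-softirq
thread, while wakeup_softirqd(), TIMER_SOFTIRQ and HRTIMER_SOFTIRQ are
the stock identifiers):

	/* Sketch: hand deferred softirqs to the right thread on RT. */
	static void defer_pending_to_threads(u32 pending)
	{
		const u32 timer_mask = (1U << TIMER_SOFTIRQ) |
				       (1U << HRTIMER_SOFTIRQ);

		/* Timer softirqs must go to the timer-softirq thread ... */
		if (pending & timer_mask)
			wake_timer_softirqd();	/* hypothetical helper */

		/* ... everything else goes to the regular ksoftirqd. */
		if (pending & ~timer_mask)
			wakeup_softirqd();
	}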
We actually triggered a starvation issue due to this. I was just seeing
if Alison hit the same issue we did in our tests.
-- Steve