Message-ID: <1357285597.21409.28406.camel@edumazet-glaptop>
Date: Thu, 03 Jan 2013 23:46:37 -0800
From: Eric Dumazet <erdnetdev@...il.com>
To: "Oleg A.Arkhangelsky" <sysoleg@...dex.ru>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Tom Herbert <therbert@...gle.com>
Subject: Re: [PATCH net-next] softirq: reduce latencies
On Fri, 2013-01-04 at 11:14 +0400, Oleg A.Arkhangelsky wrote:
> It leads to many context switches when softirq processing is deferred to
> ksoftirqd kthreads, which can be very expensive. Here is some evidence
> of ksoftirqd activation effects:
>
> http://marc.info/?l=linux-netdev&m=124116262916969&w=2
>
> Look for "magic threshold". Yes, I know another bug in the scheduler was
> discovered at that time, but that bug was only about tick accounting.
>
This thread is 3 years old:
- It was a router workload. Forwarded packets should not wake up a task.
- The measurement of how CPUs spent their cycles was completely wrong.
- A lot of things have changed since then, both in the network stack and
  the scheduler.
In fact, under moderate load, my patch lets __do_softirq() loop more than
10 times before deferring to ksoftirqd.
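For reference, the logic is roughly the following (a from-memory sketch of
kernel/softirq.c with the patch applied; the exact constant values and the
surrounding details are approximations, not quoted from the patch):

    #define MAX_SOFTIRQ_TIME    msecs_to_jiffies(2)
    #define MAX_SOFTIRQ_RESTART 10

    asmlinkage void __do_softirq(void)
    {
            unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
            int max_restart = MAX_SOFTIRQ_RESTART;
            __u32 pending;

            pending = local_softirq_pending();
    restart:
            /* Reset the pending bitmask before running the handlers */
            set_softirq_pending(0);

            /* ... run the handlers for each bit set in 'pending' ... */

            pending = local_softirq_pending();
            if (pending) {
                    if (time_before(jiffies, end) && !need_resched() &&
                        --max_restart)
                            goto restart;

                    /* Budget exhausted: defer the rest to ksoftirqd */
                    wakeup_softirqd();
            }
    }

The point is that the restart loop is bounded both by an iteration count
and by a small time budget (plus need_resched()), so moderate load is
handled inline and only sustained load wakes ksoftirqd.
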
Under stress, ksoftirqd will be started anyway, and it's a good thing,
because it enables process migration.
500 "context switches" [1] per second instead of 50 on behalf of
ksoftirqd is absolutely not measurable. It also permits smoother RCU
cleanups.
I ran a lot of benchmarks and haven't seen any regression yet, only the
usual noise.
[1] Under load, __do_softirq() would be called 500 times per second,
instead of ~50 times per second.