Message-ID: <20180112143448.GA1950@lerouge>
Date: Fri, 12 Jan 2018 15:34:49 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Eric Dumazet <edumazet@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Levin Alexander <alexander.levin@...izon.com>,
Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Wanpeng Li <wanpeng.li@...mail.com>,
Dmitry Safonov <dima@...sta.com>,
Thomas Gleixner <tglx@...utronix.de>,
Radu Rendec <rrendec@...sta.com>,
Ingo Molnar <mingo@...nel.org>,
Stanislaw Gruszka <sgruszka@...hat.com>,
Paolo Abeni <pabeni@...hat.com>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Miller <davem@...emloft.net>
Subject: Re: [RFC PATCH 1/2] softirq: Account time and iteration stats per vector

On Thu, Jan 11, 2018 at 10:22:58PM -0800, Eric Dumazet wrote:
> > asmlinkage __visible void __softirq_entry __do_softirq(void)
> > {
> > - unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
> > + struct softirq_stat *sstat = this_cpu_ptr(&softirq_stat_cpu);
> > unsigned long old_flags = current->flags;
> > - int max_restart = MAX_SOFTIRQ_RESTART;
> > struct softirq_action *h;
> > bool in_hardirq;
> > - __u32 pending;
> > + __u32 pending, overrun = 0;
> > int softirq_bit;
> >
> > /*
> > @@ -262,6 +273,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> > __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
> > in_hardirq = lockdep_softirq_start();
> >
> > + memzero_explicit(sstat, sizeof(*sstat));
>
> If you clear sstat here, it means it does not need to be a per cpu
> variable, but an automatic one (defined on the stack)
That's right. But I thought it was a bit large for the stack:

	struct {
		u64 time;
		u64 count;
	} [NR_SOFTIRQS]

although arguably we are either running on the softirq stack or on a fresh task one.
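
(For reference, a rough sketch of the on-stack variant, reusing the patch's
struct vector_stat; with NR_SOFTIRQS == 10 in current mainline that's
10 * 16 = 160 bytes of stack:)

	struct vector_stat {
		u64 time;	/* ns spent in this vector's handlers */
		u64 count;	/* number of handler invocations */
	};

	asmlinkage __visible void __softirq_entry __do_softirq(void)
	{
		/* Auto storage, zero-initialized, no per-cpu clearing needed */
		struct vector_stat vstat[NR_SOFTIRQS] = { };
		...
	}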
>
> I presume we need a per cpu var to track cpu usage on last time window.
>
> ( typical case of 99,000 IRQ per second, one packet delivered per IRQ,
> 10 usec spent per packet)
So should I account per-vector stats over a jiffy window, for example, and apply
the limits on top of that?
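
Something like this, perhaps (a hypothetical sketch, not what the posted patch
does: accumulate per-vector time per-cpu within the current jiffy and reset
the counters when the window moves on):

	/*
	 * Hypothetical per-cpu window accounting: the limit would then
	 * apply to softirq time spent in the current jiffy rather than
	 * in a single __do_softirq() invocation.
	 */
	struct softirq_stat {
		unsigned long window_start;	/* jiffies at window start */
		u64 vtime[NR_SOFTIRQS];		/* ns per vector in window */
	};

	static DEFINE_PER_CPU(struct softirq_stat, softirq_stat_cpu);

	static void softirq_account(unsigned int vec_nr, u64 delta)
	{
		struct softirq_stat *sstat = this_cpu_ptr(&softirq_stat_cpu);

		if (time_after(jiffies, sstat->window_start)) {
			/* New window: forget the previous jiffy's usage */
			memset(sstat->vtime, 0, sizeof(sstat->vtime));
			sstat->window_start = jiffies;
		}
		sstat->vtime[vec_nr] += delta;
	}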
>
>
>
> > restart:
> > /* Reset the pending bitmask before enabling irqs */
> > set_softirq_pending(0);
> > @@ -271,8 +283,10 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> > h = softirq_vec;
> >
> > while ((softirq_bit = ffs(pending))) {
> > + struct vector_stat *vstat;
> > unsigned int vec_nr;
> > int prev_count;
> > + u64 startime;
> >
> > h += softirq_bit - 1;
> >
> > @@ -280,10 +294,18 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> > prev_count = preempt_count();
> >
> > kstat_incr_softirqs_this_cpu(vec_nr);
> > + vstat = &sstat->stat[vec_nr];
> >
> > trace_softirq_entry(vec_nr);
> > + startime = local_clock();
> > h->action(h);
> > + vstat->time += local_clock() - startime;
>
> You might store local_clock() in a variable, so that we do not call
> local_clock() two times per ->action() called.
So you mean I should reuse the second local_clock() result as the start timestamp
for the next iteration of the while loop, right? Yep, that sounds possible.
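
Roughly like this (a sketch of that suggestion; the gap between two handlers
then gets charged to the vector that follows it, which should be noise):

	u64 prev, now;

	prev = local_clock();
	while ((softirq_bit = ffs(pending))) {
		...
		trace_softirq_entry(vec_nr);
		h->action(h);
		/* Single clock read per handler, reused as the next start */
		now = local_clock();
		vstat->time += now - prev;
		prev = now;
		vstat->count++;
		trace_softirq_exit(vec_nr);
		...
	}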
>
>
> > + vstat->count++;
> > trace_softirq_exit(vec_nr);
> > +
> > + if (vstat->time > MAX_SOFTIRQ_TIME || vstat->count > MAX_SOFTIRQ_RESTART)
>
> If we trust local_clock() to be precise enough, we do not need to
> track vstat->count anymore.
That's what I was thinking. BTW, should I keep MAX_SOFTIRQ_TIME at 2 ms? It looks
a bit long to me.
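
(For reference, the current defaults in kernel/softirq.c that this accounting
would replace:)

	#define MAX_SOFTIRQ_TIME	msecs_to_jiffies(2)
	#define MAX_SOFTIRQ_RESTART	10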
Thanks.