Message-ID: <CAKc596Km6kjQcp2MJmH9BZLY_7i7yFmHDmRnaJGsm4WzUNjwaA@mail.gmail.com>
Date: Mon, 28 Sep 2020 18:51:48 +0800
From: jun qian <qianjun.kernel@...il.com>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, peterz@...radead.org,
will@...nel.org, luto@...nel.org, linux-kernel@...r.kernel.org,
Yafang Shao <laoar.shao@...il.com>, qais.yousef@....com,
Uladzislau Rezki <urezki@...il.com>
Subject: Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing loop
Frederic Weisbecker <frederic@...nel.org> wrote on Fri, Sep 25, 2020 at 8:42 AM:
>
> On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> > Subject: softirq; Prevent starvation of higher softirq vectors
> [...]
> > + /*
> > + * Word swap pending to move the not yet handled bits of the previous
> > + * run first and then clear the duplicates in the newly raised ones.
> > + */
> > + swahw32s(&cur_pending);
> > + pending = cur_pending & ~(cur_pending << SIRQ_PREV_SHIFT);
> > +
> > for_each_set_bit(vec_nr, &pending, NR_SOFTIRQS) {
> > int prev_count;
> >
> > + vec_nr &= SIRQ_VECTOR_MASK;
>
> Shouldn't NR_SOFTIRQS above protect from that?
>
> > __clear_bit(vec_nr, &pending);
> > kstat_incr_softirqs_this_cpu(vec_nr);
> >
> [...]
> > + } else {
> > + /*
> > + * Retain the unprocessed bits and swap @cur_pending back
> > + * into normal ordering
> > + */
> > + cur_pending = (u32)pending;
> > + swahw32s(&cur_pending);
> > + /*
> > + * If the previous bits are done move the low word of
> > + * @pending into the high word so it's processed first.
> > + */
> > + if (!(cur_pending & SIRQ_PREV_MASK))
> > + cur_pending <<= SIRQ_PREV_SHIFT;
>
> If the previous bits are done and there is no timeout, should
> we consider to restart a loop?
>
> A common case would be to enter do_softirq() with RCU_SOFTIRQ set
> in the SIRQ_PREV_MASK and NET_RX_SOFTIRQ set in the normal mask.
>
> You would always end up processing the RCU_SOFTIRQ here and trigger
> ksoftirqd for the NET_RX_SOFTIRQ.
Yes, I found that this problem also exists in our project. The RCU
softirq may take 9 ms, which delays processing of the net_rx/tx
softirqs. Peter's branch may solve the problem:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git core/softirq
>
> Although that's probably no big deal as we should be already in ksoftirqd
> if we processed prev bits. We are just going to iterate the kthread loop
> instead of the do_softirq loop. Probably no real issue then...
>
>
> >
> > + /* Merge the newly pending ones into the low word */
> > + cur_pending |= new_pending;
> > + }
> > + set_softirq_pending(cur_pending);
> > wakeup_softirqd();
> > out:
> > lockdep_softirq_end(in_hardirq);
>