Date:   Tue, 29 Sep 2020 13:44:28 +0200
From:   Frederic Weisbecker <frederic@...nel.org>
To:     jun qian <qianjun.kernel@...il.com>
Cc:     Thomas Gleixner <tglx@...utronix.de>, peterz@...radead.org,
        will@...nel.org, luto@...nel.org, linux-kernel@...r.kernel.org,
        Yafang Shao <laoar.shao@...il.com>, qais.yousef@....com,
        Uladzislau Rezki <urezki@...il.com>
Subject: Re: [PATCH V7 4/4] softirq: Allow early break the softirq processing
 loop

On Mon, Sep 28, 2020 at 06:51:48PM +0800, jun qian wrote:
> Frederic Weisbecker <frederic@...nel.org> wrote on Fri, Sep 25, 2020 at 8:42 AM:
> >
> > On Thu, Sep 24, 2020 at 05:37:42PM +0200, Thomas Gleixner wrote:
> > > Subject: softirq: Prevent starvation of higher softirq vectors
> > [...]
> > > +     /*
> > > +      * Word swap pending to move the not yet handled bits of the previous
> > > +      * run first and then clear the duplicates in the newly raised ones.
> > > +      */
> > > +     swahw32s(&cur_pending);
> > > +     pending = cur_pending & ~(cur_pending << SIRQ_PREV_SHIFT);
> > > +
> > >       for_each_set_bit(vec_nr, &pending, NR_SOFTIRQS) {
> > >               int prev_count;
> > >
> > > +             vec_nr &= SIRQ_VECTOR_MASK;
> >
> > Shouldn't NR_SOFTIRQS above protect from that?
> >
> > >               __clear_bit(vec_nr, &pending);
> > >               kstat_incr_softirqs_this_cpu(vec_nr);
> > >
> > [...]
> > > +     } else {
> > > +             /*
> > > +              * Retain the unprocessed bits and swap @cur_pending back
> > > +              * into normal ordering
> > > +              */
> > > +             cur_pending = (u32)pending;
> > > +             swahw32s(&cur_pending);
> > > +             /*
> > > +              * If the previous bits are done move the low word of
> > > +              * @pending into the high word so it's processed first.
> > > +              */
> > > +             if (!(cur_pending & SIRQ_PREV_MASK))
> > > +                     cur_pending <<= SIRQ_PREV_SHIFT;
> >
> > If the previous bits are done and there is no timeout, should
> > we consider restarting the loop?
> >
> > A common case would be to enter do_softirq() with RCU_SOFTIRQ set
> > in the SIRQ_PREV_MASK and NET_RX_SOFTIRQ set in the normal mask.
> >
> > You would always end up processing the RCU_SOFTIRQ here and trigger
> > ksoftirqd for the NET_RX_SOFTIRQ.
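
To make the word swap concrete, here is a rough, untested user-space
sketch of the bookkeeping from the hunks above. SIRQ_PREV_SHIFT and the
halfword layout come from the patch; the open-coded swahw32s() and the
hardcoded vector numbers are only there to keep the example
self-contained:

#include <stdint.h>
#include <stdio.h>

#define SIRQ_PREV_SHIFT	16
#define NET_RX_SOFTIRQ	3
#define RCU_SOFTIRQ	9

/* Swap the two 16-bit halves of a 32-bit word, like the kernel helper. */
static void swahw32s(uint32_t *p)
{
	*p = (*p << 16) | (*p >> 16);
}

int main(void)
{
	/* RCU left over from the previous run (high half), NET_RX newly raised. */
	uint32_t cur_pending = (1u << (RCU_SOFTIRQ + SIRQ_PREV_SHIFT)) |
			       (1u << NET_RX_SOFTIRQ);
	uint32_t pending;

	/*
	 * Move the leftover bits into the low half so they are scanned
	 * first, then drop duplicates from the newly raised half (there
	 * are none in this example).
	 */
	swahw32s(&cur_pending);
	pending = cur_pending & ~(cur_pending << SIRQ_PREV_SHIFT);

	/* Prints 0x80200: RCU in the low half, NET_RX pushed to the high half. */
	printf("pending = %#x\n", (unsigned int)pending);

	return 0;
}

With that layout the leftover RCU work is scanned first, so if it eats
the whole budget, NET_RX is left for ksoftirqd, which is exactly the
case above.
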
> 
> Yes, I found that this problem also exists in our project. The RCU
> softirq may cost 9ms,

Ouch!

> which will delay the processing of the net_rx/tx softirqs. Peter's
> branch may solve the problem:
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git core/softirq

It's probably also the right time for me to resume work on this patchset:

https://lwn.net/Articles/779564/

In the long term this will allow us to have per-vector threads that can be
individually triggered under high load, and can even be soft-interrupted by
other vectors from irq_exit(). Also, if several vectors are under high load
at the same time, this leaves the balancing decisions to the scheduler instead
of all these workarounds we have been scratching our heads over for several
years now.

Besides, I'm convinced that splitting the softirqs is something we want in
the long run anyway.
