Message-ID: <947bcef0d56a4d0c82729d6899394f4a@hisilicon.com>
Date:   Wed, 10 Feb 2021 11:50:45 +0000
From:   "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To:     Andy Shevchenko <andy.shevchenko@...il.com>,
        luojiaxing <luojiaxing@...wei.com>
CC:     Linus Walleij <linus.walleij@...aro.org>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Grygorii Strashko <grygorii.strashko@...com>,
        Santosh Shilimkar <ssantosh@...nel.org>,
        "Kevin Hilman" <khilman@...nel.org>,
        "open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "linuxarm@...neuler.org" <linuxarm@...neuler.org>
Subject: RE: [Linuxarm]  Re: [PATCH for next v1 0/2] gpio: few clean up
 patches to replace spin_lock_irqsave with spin_lock



> -----Original Message-----
> From: Andy Shevchenko [mailto:andy.shevchenko@...il.com]
> Sent: Wednesday, February 10, 2021 11:51 PM
> To: luojiaxing <luojiaxing@...wei.com>
> Cc: Linus Walleij <linus.walleij@...aro.org>; Andy Shevchenko
> <andriy.shevchenko@...ux.intel.com>; Grygorii Strashko
> <grygorii.strashko@...com>; Santosh Shilimkar <ssantosh@...nel.org>; Kevin
> Hilman <khilman@...nel.org>; open list:GPIO SUBSYSTEM
> <linux-gpio@...r.kernel.org>; Linux Kernel Mailing List
> <linux-kernel@...r.kernel.org>; linuxarm@...neuler.org
> Subject: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches to
> replace spin_lock_irqsave with spin_lock
> 
> On Wed, Feb 10, 2021 at 5:43 AM luojiaxing <luojiaxing@...wei.com> wrote:
> > On 2021/2/9 17:42, Andy Shevchenko wrote:
> > > On Tue, Feb 9, 2021 at 11:24 AM luojiaxing <luojiaxing@...wei.com> wrote:
> > >> On 2021/2/8 21:28, Andy Shevchenko wrote:
> > >>> On Mon, Feb 8, 2021 at 11:11 AM luojiaxing <luojiaxing@...wei.com> wrote:
> > >>>> On 2021/2/8 16:56, Luo Jiaxing wrote:
> > >>>>> There is no need to use the _irqsave API variants in a hard IRQ
> > >>>>> handler, so replace those with spin_lock.
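
For illustration, the change under discussion has roughly this shape
(hypothetical driver code; struct foo_dev and its lock are illustrative
names, not taken from the actual patches):

static irqreturn_t foo_irq_handler(int irq, void *dev_id)	/* before */
{
	struct foo_dev *foo = dev_id;
	unsigned long flags;

	spin_lock_irqsave(&foo->lock, flags);
	/* ... access state shared with other contexts ... */
	spin_unlock_irqrestore(&foo->lock, flags);
	return IRQ_HANDLED;
}

static irqreturn_t foo_irq_handler(int irq, void *dev_id)	/* after */
{
	struct foo_dev *foo = dev_id;

	spin_lock(&foo->lock);	/* relies on hard IRQ context having
				 * local IRQs disabled already */
	/* ... access state shared with other contexts ... */
	spin_unlock(&foo->lock);
	return IRQ_HANDLED;
}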
> > >>> How do you know that another CPU in the system can't serve the
> > > The keyword here is: *another*.
> >
> > ooh, sorry, now I got your point.
> >
> > As for me, I don't think another CPU can serve the IRQ while one CPU
> > is running the hard IRQ handler,
> 
> Why is it so?
> Each CPU can serve IRQs separately.
> 
> > unless it's a per-CPU interrupt.
> 
> I didn't get how it is related.
> 
> > The following is a simplified call path when an IRQ comes in.
> >
> > elx_irq -> handle_arch_irq -> __handle_domain_irq -> desc->handle_irq ->
> > handle_irq_event
> 
> What is `elx_irq()`? I haven't found any mention of this in the kernel
> source tree.
> But okay, it shouldn't prevent our discussion.
> 
> > Assume that two CPUs receive the same IRQ and enter the preceding
> > process. Both of them will go to desc->handle_irq().
> 
> Ah, I'm talking about the same IRQ by number (i.e. the same Linux IRQ
> number, meaning from the same source), but with different sequence
> numbers (meaning two consecutive events).
> 
> > In handle_irq(), raw_spin_lock(&desc->lock) is always called first.
> > Therefore, even if two CPUs are running handle_irq(), only one can
> > get the spin lock. Assume that CPU A obtains the spin lock. Then
> > CPU A sets the status of irq_data to IRQD_IRQ_INPROGRESS in
> > handle_irq_event() and releases the spin lock. Even if CPU B gets
> > the spin lock later and continues to run handle_irq(), the check of
> > irq_may_run(desc) causes it to exit.
> >
> > So I think we can't have the situation where two CPUs serve the hard
> > IRQ handler at the same time.
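
Heavily abridged, that flow has roughly this shape (a sketch following
handle_level_irq() and handle_irq_event() in kernel/irq/chip.c; the real
code has more states and exit paths):

void handle_some_irq(struct irq_desc *desc)	/* illustrative name */
{
	raw_spin_lock(&desc->lock);

	if (!irq_may_run(desc))		/* CPU B sees IRQD_IRQ_INPROGRESS */
		goto out_unlock;	/* ... and bails out here */

	handle_irq_event(desc);		/* CPU A: sets IRQD_IRQ_INPROGRESS,
					 * drops desc->lock, runs the
					 * handler, then re-takes the lock
					 * and clears the flag */
out_unlock:
	raw_spin_unlock(&desc->lock);
}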
> 
> Okay. Assuming your analysis is correct, have you considered the case
> when all IRQ handlers are threaded? (There is a kernel command line
> option to force this)
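
The command line option Andy refers to is "threadirqs", which forces
handlers into threads unless they are marked IRQF_NO_THREAD. A handler
can also be threaded explicitly, as in this minimal sketch (the foo_*
names and struct foo_dev are hypothetical):

static irqreturn_t foo_hard_handler(int irq, void *dev_id)
{
	/* hard IRQ context: just ack the device and wake the thread */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t foo_thread_fn(int irq, void *dev_id)
{
	/* process context: local IRQs are NOT disabled here */
	return IRQ_HANDLED;
}

static int foo_request_irq(struct foo_dev *foo, int irq)
{
	return request_threaded_irq(irq, foo_hard_handler, foo_thread_fn,
				    IRQF_ONESHOT, "foo", foo);
}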
> 
> > >>> following interrupt from the hardware at the same time?
> > >> Yes, I had some questions about this before.
> > >>
> > >> There is a similar discussion here; please take a look. Song Baohua
> > >> explained it more professionally.
> > >>
> > >>
> > >> https://lore.kernel.org/lkml/e949a474a9284ac6951813bfc8b34945@hisilicon.com/
> > >>
> > >> Here are some excerpts from the discussion:
> > >>
> > >> I think code that disables IRQs in a hardIRQ handler is simply wrong.
> > > Why?
> >
> >
> > I mentioned the following call path before.
> >
> > elx_irq -> handle_arch_irq -> __handle_domain_irq -> desc->handle_irq ->
> > handle_irq_event
> >
> >
> > __handle_domain_irq() calls irq_enter(), which ensures that IRQ
> > processing on the current CPU cannot be preempted.
> >
> > So I think this is the reason why Song Baohua said there is no need
> > to disable IRQs in a hard IRQ handler.
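
Abridged, that entry path has roughly the following shape (a sketch of
__handle_domain_irq() as it looked in kernels of that era, with error
handling and details elided):

int __handle_domain_irq(struct irq_domain *domain, unsigned int hwirq,
			bool lookup, struct pt_regs *regs)
{
	unsigned int irq = hwirq;	/* simplified: domain lookup elided */

	irq_enter();			/* enter hard IRQ context; this also
					 * disables preemption on this CPU */
	generic_handle_irq(irq);	/* -> desc->handle_irq(desc) */
	irq_exit();
	return 0;
}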
> >
> > >> Since this commit
> > >>
> > >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
> > >> genirq: Run irq handlers with interrupts disabled
> > >>
> > >> interrupt handlers definitely run in an IRQ-disabled context,
> > >> unless a handler explicitly re-enables interrupts to permit other
> > >> interrupts.
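
Concretely, since that commit the following holds inside any hard IRQ
handler (a sketch; bar_lock and bar_handler are hypothetical names):

static DEFINE_SPINLOCK(bar_lock);

static irqreturn_t bar_handler(int irq, void *dev_id)
{
	unsigned long flags;

	WARN_ON_ONCE(!irqs_disabled());		/* local IRQs already off */

	spin_lock_irqsave(&bar_lock, flags);	/* 'flags' merely records
						 * "already disabled" */
	/* ... */
	spin_unlock_irqrestore(&bar_lock, flags);
	return IRQ_HANDLED;
}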
> > > This doesn't explain any changes in the behaviour on SMP.
> > > IRQ line can be disabled on a few stages:
> > >   a) on the source (IP that generates an event)
> > >   b) on IRQ router / controller
> > >   c) on CPU side
> >
> > yes, you are right.
> >
> > > The commit above is discussing (rightfully!) the problem when all
> > > interrupts are being served by a *single* core. Nobody prevents them
> > > from being served by *different* cores simultaneously. Also, see [1].
> > >
> > > [1]: https://www.kernel.org/doc/htmldocs/kernel-locking/cheatsheet.html
> >
> > I checked [1]; it's quite a useful description of locking, thanks.
> > But you can see in the Table of Locking Requirements that between
> > IRQ handler A and IRQ handler A there is no need for SLIS
> > (spin_lock_irqsave).
> 
> Right, but it's not the case in the patches you provided.

The code still holds the spin_lock. So if two CPUs call the same IRQ
handler, the spin_lock makes them spin; and if interrupts are threaded,
the spin_lock makes the two threads run the same handler one by one.
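
That is, a sketch of the behaviour described above (hypothetical baz_*
names, not code from the patches):

static irqreturn_t baz_handler(int irq, void *dev_id)
{
	struct baz_dev *baz = dev_id;

	spin_lock(&baz->lock);	/* two CPUs in hard IRQ context: the
				 * second spins here until the first
				 * unlocks; under threadirqs: the two
				 * handler threads serialize the same
				 * way */
	/* ... */
	spin_unlock(&baz->lock);
	return IRQ_HANDLED;
}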

> 
> --
> With Best Regards,
> Andy Shevchenko

Thanks
Barry
