Date:   Tue, 5 Jun 2018 00:14:09 -0700
From:   Song Liu <liu.song.a23@...il.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Borislav Petkov <bp@...en8.de>,
        Dmitry Safonov <0x7f454c46@...il.com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Joerg Roedel <jroedel@...e.de>,
        Mike Travis <mike.travis@....com>, stable@...r.kernel.org
Subject: Re: [patch 7/8] genirq/affinity: Defer affinity setting if irq chip
 is busy

On Mon, Jun 4, 2018 at 8:33 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> The case where interrupt affinity setting fails with -EBUSY can be handled
> completely in the kernel by using the already available generic pending
> infrastructure.
>
> If an irq_chip::set_affinity() fails with -EBUSY, handle it like the
> interrupts for which irq_chip::set_affinity() can only be invoked from
> interrupt context. Copy the new affinity mask to irq_desc::pending_mask and
> set the affinity pending bit. The next raised interrupt for the affected
> irq will check the pending bit and try to set the new affinity from the
> handler. This avoids returning -EBUSY to user space when an affinity
> change is requested and the previous change has not yet been cleaned
> up. The new affinity will take effect when the next interrupt is raised
> from the device.
>
> Fixes: dccfe3147b42 ("x86/vector: Simplify vector move cleanup")
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: stable@...r.kernel.org

Tested-by: Song Liu <songliubraving@...com>
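
For anyone who wants to reproduce the failure mode this addresses: before
the patch, writing a new mask to /proc/irq/<n>/smp_affinity could fail
with EBUSY while the previous vector move was still pending cleanup. A
minimal user-space sketch (the IRQ number and mask below are placeholders,
not taken from the patch):

/* Ask the kernel to move a placeholder IRQ to CPU 1 by writing a hex
 * mask to /proc/irq/<n>/smp_affinity. Without the patch the write can
 * fail with EBUSY while the previous vector move awaits cleanup; with
 * it, the request is parked and applied on the next interrupt instead.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/proc/irq/30/smp_affinity";	/* placeholder IRQ */
	const char *mask = "2";				/* hex mask: CPU 1 */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, mask, strlen(mask)) < 0)
		fprintf(stderr, "write: %s\n", errno == EBUSY ?
			"EBUSY (previous move not cleaned up)" : strerror(errno));
	close(fd);
	return 0;
}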

> ---
>  kernel/irq/manage.c |   37 +++++++++++++++++++++++++++++++++++--
>  1 file changed, 35 insertions(+), 2 deletions(-)
>
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -204,6 +204,39 @@ int irq_do_set_affinity(struct irq_data
>         return ret;
>  }
>
> +#ifdef CONFIG_GENERIC_PENDING_IRQ
> +static inline int irq_set_affinity_pending(struct irq_data *data,
> +                                          const struct cpumask *dest)
> +{
> +       struct irq_desc *desc = irq_data_to_desc(data);
> +
> +       irqd_set_move_pending(data);
> +       irq_copy_pending(desc, dest);
> +       return 0;
> +}
> +#else
> +static inline int irq_set_affinity_pending(struct irq_data *data,
> +                                          const struct cpumask *dest)
> +{
> +       return -EBUSY;
> +}
> +#endif
> +
> +static int irq_try_set_affinity(struct irq_data *data,
> +                               const struct cpumask *dest, bool force)
> +{
> +       int ret = irq_do_set_affinity(data, dest, force);
> +
> +       /*
> +        * If the underlying vector management is busy and the
> +        * architecture supports the generic pending mechanism, then use
> +        * it to avoid returning an error to user space.
> +        */
> +       if (ret == -EBUSY && !force)
> +               ret = irq_set_affinity_pending(data, dest);
> +       return ret;
> +}
> +
>  int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
>                             bool force)
>  {
> @@ -214,8 +247,8 @@ int irq_set_affinity_locked(struct irq_d
>         if (!chip || !chip->irq_set_affinity)
>                 return -EINVAL;
>
> -       if (irq_can_move_pcntxt(data)) {
> -               ret = irq_do_set_affinity(data, mask, force);
> +       if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {
> +               ret = irq_try_set_affinity(data, mask, force);
>         } else {
>                 irqd_set_move_pending(data);
>                 irq_copy_pending(desc, mask);
>
>
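
To make the control flow above concrete, here is a toy user-space model
of the deferral (all names are invented for illustration; this is not
kernel code): the first set_affinity attempt hits the busy chip, the
request is parked as pending, and the next "interrupt" applies it.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_irq {
	unsigned int affinity;		/* currently programmed CPU mask */
	unsigned int pending_mask;	/* parked request, if any */
	bool move_pending;		/* mirrors the pending bit */
	bool chip_busy;			/* chip would return -EBUSY */
};

/* Stand-in for irq_do_set_affinity(): fails while the chip is busy. */
static int toy_do_set_affinity(struct toy_irq *irq, unsigned int dest)
{
	if (irq->chip_busy)
		return -EBUSY;
	irq->affinity = dest;
	return 0;
}

/* Mirrors irq_try_set_affinity(): on -EBUSY, park the request instead
 * of propagating the error (the real code also skips the deferral when
 * the change is forced).
 */
static int toy_try_set_affinity(struct toy_irq *irq, unsigned int dest)
{
	int ret = toy_do_set_affinity(irq, dest);

	if (ret == -EBUSY) {
		irq->pending_mask = dest;
		irq->move_pending = true;
		ret = 0;
	}
	return ret;
}

/* Mirrors the pending check done when the next interrupt is raised. */
static void toy_next_interrupt(struct toy_irq *irq)
{
	if (irq->move_pending) {
		irq->move_pending = false;
		toy_do_set_affinity(irq, irq->pending_mask);
	}
}

int main(void)
{
	struct toy_irq irq = { .affinity = 0x1, .chip_busy = true };

	printf("set_affinity(0x2) -> %d\n", toy_try_set_affinity(&irq, 0x2));
	irq.chip_busy = false;		/* cleanup finished meanwhile */
	toy_next_interrupt(&irq);	/* parked mask takes effect */
	printf("affinity now 0x%x\n", irq.affinity);
	return 0;
}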
