Message-ID: <CANp29Y6HkhHyM1PZGbMtD=U6GeK2LfOHTDiOqiPVEhEXUHN7_A@mail.gmail.com>
Date:   Wed, 26 Jan 2022 19:11:23 +0100
From:   Aleksandr Nogikh <nogikh@...gle.com>
To:     Andrey Konovalov <andreyknvl@...il.com>
Cc:     kasan-dev <kasan-dev@...glegroups.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Marco Elver <elver@...gle.com>,
        Alexander Potapenko <glider@...gle.com>,
        Taras Madan <tarasmadan@...gle.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH v3 1/2] kcov: split ioctl handling into locked and
 unlocked parts

On Mon, Jan 24, 2022 at 11:33 PM Andrey Konovalov <andreyknvl@...il.com> wrote:
>
> On Mon, Jan 17, 2022 at 4:36 PM Aleksandr Nogikh <nogikh@...gle.com> wrote:
> >
> > Currently all ioctls are de facto processed under a spinlock in order
> > to serialise them. This, however, prohibits the use of vmalloc and other
> > memory management functions in the implementations of those ioctls,
> > unnecessarily complicating any further changes to the code.
> >
> > Let all ioctls first be processed inside the kcov_ioctl() function,
> > which should execute the ones that are not compatible with a spinlock
> > and then pass control to kcov_ioctl_locked() for the remaining ones.
> > KCOV_REMOTE_ENABLE is processed in both kcov_ioctl() and
> > kcov_ioctl_locked(), as its steps are easily separable.
> >
> > Although it is still compatible with a spinlock, move KCOV_INIT_TRACE
> > handling to kcov_ioctl(), so that the changes from the next commit are
> > easier to follow.
> >
> > Signed-off-by: Aleksandr Nogikh <nogikh@...gle.com>
> > ---
> >  kernel/kcov.c | 68 ++++++++++++++++++++++++++++-----------------------
> >  1 file changed, 37 insertions(+), 31 deletions(-)
> >
> > diff --git a/kernel/kcov.c b/kernel/kcov.c
> > index 36ca640c4f8e..e1be7301500b 100644
> > --- a/kernel/kcov.c
> > +++ b/kernel/kcov.c
> > @@ -564,31 +564,12 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
> >                              unsigned long arg)
> >  {
> >         struct task_struct *t;
> > -       unsigned long size, unused;
> > +       unsigned long flags, unused;
> >         int mode, i;
> >         struct kcov_remote_arg *remote_arg;
> >         struct kcov_remote *remote;
> > -       unsigned long flags;
> >
> >         switch (cmd) {
> > -       case KCOV_INIT_TRACE:
> > -               /*
> > -                * Enable kcov in trace mode and setup buffer size.
> > -                * Must happen before anything else.
> > -                */
> > -               if (kcov->mode != KCOV_MODE_DISABLED)
> > -                       return -EBUSY;
> > -               /*
> > -                * Size must be at least 2 to hold current position and one PC.
> > -                * Later we allocate size * sizeof(unsigned long) memory,
> > -                * that must not overflow.
> > -                */
> > -               size = arg;
> > -               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> > -                       return -EINVAL;
> > -               kcov->size = size;
> > -               kcov->mode = KCOV_MODE_INIT;
> > -               return 0;
> >         case KCOV_ENABLE:
> >                 /*
> >                  * Enable coverage for the current task.
> > @@ -692,9 +673,32 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >         struct kcov_remote_arg *remote_arg = NULL;
> >         unsigned int remote_num_handles;
> >         unsigned long remote_arg_size;
> > -       unsigned long flags;
> > +       unsigned long size, flags;
> >
> > -       if (cmd == KCOV_REMOTE_ENABLE) {
> > +       kcov = filep->private_data;
> > +       switch (cmd) {
> > +       case KCOV_INIT_TRACE:
> > +               /*
> > +                * Enable kcov in trace mode and setup buffer size.
> > +                * Must happen before anything else.
> > +                *
> > +                * First check the size argument - it must be at least 2
> > +                * to hold the current position and one PC. Later we allocate
> > +                * size * sizeof(unsigned long) memory, that must not overflow.
> > +                */
> > +               size = arg;
> > +               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> > +                       return -EINVAL;
> > +               spin_lock_irqsave(&kcov->lock, flags);
>
> Arguably, we could keep the part of the KCOV_INIT_TRACE handler that
> happens under the lock in kcov_ioctl_locked(), in a similar way to how
> it's done for KCOV_REMOTE_ENABLE. This would get rid of the asymmetric
> fallthrough usage.
>
> But I'll leave this up to you, either way looks acceptable to me.
>

That would indeed look nice and would work with this particular
commit, but it won't work with the changes that are introduced in the
next one. So it would go against the objective of splitting the change
into a patch series in the first place: making the commit with the
functional changes easier to review.
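
For reference, the suggested alternative would presumably look roughly
like this (an illustrative sketch only, not code from any actual version
of the series): kcov_ioctl() would only validate the size and then fall
through to the locked path, the same way KCOV_REMOTE_ENABLE does, while
kcov_ioctl_locked() would keep a reduced KCOV_INIT_TRACE case:

	/* In kcov_ioctl_locked(), entered with kcov->lock already held: */
	case KCOV_INIT_TRACE:
		/* The size argument was already validated in kcov_ioctl(). */
		if (kcov->mode != KCOV_MODE_DISABLED)
			return -EBUSY;
		kcov->size = arg;
		kcov->mode = KCOV_MODE_INIT;
		return 0;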

With kcov->area allocation in KCOV_INIT_TRACE, we unfortunately cannot
draw a single line between the unlocked and locked parts.
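
To illustrate, here is a hypothetical sketch of how the handler might end
up looking once the area is allocated there (the exact code in the next
patch may differ; the local variable names are only for illustration).
The allocation has to happen without the lock, while the mode check and
the assignments have to happen under it, so the two parts interleave
inside a single case:

	case KCOV_INIT_TRACE:
		size = arg;
		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
			return -EINVAL;
		/* vmalloc() may sleep, so it cannot run under the spinlock. */
		area = vmalloc(size * sizeof(unsigned long));
		if (!area)
			return -ENOMEM;
		spin_lock_irqsave(&kcov->lock, flags);
		if (kcov->mode != KCOV_MODE_DISABLED) {
			spin_unlock_irqrestore(&kcov->lock, flags);
			vfree(area);
			return -EBUSY;
		}
		kcov->area = area;
		kcov->size = size;
		kcov->mode = KCOV_MODE_INIT;
		spin_unlock_irqrestore(&kcov->lock, flags);
		return 0;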

> > +               if (kcov->mode != KCOV_MODE_DISABLED) {
> > +                       spin_unlock_irqrestore(&kcov->lock, flags);
> > +                       return -EBUSY;
> > +               }
> > +               kcov->size = size;
> > +               kcov->mode = KCOV_MODE_INIT;
> > +               spin_unlock_irqrestore(&kcov->lock, flags);
> > +               return 0;
> > +       case KCOV_REMOTE_ENABLE:
> >                 if (get_user(remote_num_handles, (unsigned __user *)(arg +
> >                                 offsetof(struct kcov_remote_arg, num_handles))))
> >                         return -EFAULT;
> > @@ -710,16 +714,18 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >                         return -EINVAL;
> >                 }
> >                 arg = (unsigned long)remote_arg;
> > +               fallthrough;
> > +       default:
> > +               /*
> > +                * All other commands can be normally executed under a spin lock, so we
> > +                * obtain and release it here in order to simplify kcov_ioctl_locked().
> > +                */
> > +               spin_lock_irqsave(&kcov->lock, flags);
> > +               res = kcov_ioctl_locked(kcov, cmd, arg);
> > +               spin_unlock_irqrestore(&kcov->lock, flags);
> > +               kfree(remote_arg);
> > +               return res;
> >         }
> > -
> > -       kcov = filep->private_data;
> > -       spin_lock_irqsave(&kcov->lock, flags);
> > -       res = kcov_ioctl_locked(kcov, cmd, arg);
> > -       spin_unlock_irqrestore(&kcov->lock, flags);
> > -
> > -       kfree(remote_arg);
> > -
> > -       return res;
> >  }
> >
> >  static const struct file_operations kcov_fops = {
> > --
> > 2.34.1.703.g22d0c6ccf7-goog
> >
>
> Reviewed-by: Andrey Konovalov <andreyknvl@...il.com>
>
