Date:   Mon, 16 Mar 2020 01:21:32 -0700
From:   Matt Ranostay <matt.ranostay@...sulko.com>
To:     Jonathan Cameron <jic23@...nel.org>
Cc:     Rohit Sarkar <rohitsarkar5398@...il.com>,
        "open list:IIO SUBSYSTEM AND DRIVERS" <linux-iio@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] iio: health: max30100: remove mlock usage

On Sun, Mar 15, 2020 at 2:46 AM Jonathan Cameron <jic23@...nel.org> wrote:
>
> On Tue, 10 Mar 2020 00:01:28 +0530
> Rohit Sarkar <rohitsarkar5398@...il.com> wrote:
>
> > Use local lock instead of indio_dev's mlock.
> > The mlock was being used to protect local driver state, so using the
> > local lock is a better option here.
> >
> > Signed-off-by: Rohit Sarkar <rohitsarkar5398@...il.com>
>
> Matt.  Definitely need your input on this.
>
> > ---
> >  drivers/iio/health/max30100.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/iio/health/max30100.c b/drivers/iio/health/max30100.c
> > index 84010501762d..8ddc4649547d 100644
> > --- a/drivers/iio/health/max30100.c
> > +++ b/drivers/iio/health/max30100.c
> > @@ -388,7 +388,7 @@ static int max30100_read_raw(struct iio_dev *indio_dev,
> >                * Temperature reading can only be acquired while engine
> >                * is running
> >                */
> > -             mutex_lock(&indio_dev->mlock);
> > +             mutex_lock(&data->lock);
>
> Hmm, it's another complex one.  What is actually being protected here is
> the buffer state, though not by taking it exclusively the way
> claim_direct does.
>
> Here we need the inverse: we want to ensure we are 'not' in direct
> mode, because this hardware requires the buffer to be running to read
> the temperature.
>
> That is the sort of interface that is going to get userspace very
> confused.
>
> Matt, normally what I'd suggest here is that the temperature read should
> (see the sketch after this list):
>
> 1) Claim direct mode, if it fails then do the dance you have here
> (with more comments to explain why you are taking an internal lock)
> 2) Start up capture as if we were in buffered mode
> 3) Grab that temp
> 4) stop capture to return to non buffered mode.
> 5) Release direct mode.
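> 
> Untested sketch of that dance, just to make the shape concrete
> (max30100_get_temp() and max30100_set_power() stand in for the driver's
> helpers here, so treat those names as assumptions):
> 
> static int max30100_read_temp_oneshot(struct iio_dev *indio_dev, int *val)
> {
> 	struct max30100_data *data = iio_priv(indio_dev);
> 	int ret;
> 
> 	/* 1) try to claim direct mode */
> 	if (iio_device_claim_direct_mode(indio_dev)) {
> 		/*
> 		 * Buffer is already running, so the engine is on: do the
> 		 * existing dance under the driver's internal lock.
> 		 */
> 		mutex_lock(&data->lock);
> 		ret = max30100_get_temp(data, val);
> 		mutex_unlock(&data->lock);
> 		return ret;
> 	}
> 
> 	/* 2) start up capture as if we were in buffered mode */
> 	ret = max30100_set_power(data, true);
> 	if (ret)
> 		goto release;
> 
> 	/* 3) grab that temp */
> 	ret = max30100_get_temp(data, val);
> 
> 	/* 4) stop capture to return to non-buffered mode */
> 	max30100_set_power(data, false);
> 
> release:
> 	/* 5) release direct mode */
> 	iio_device_release_direct_mode(indio_dev);
> 	return ret;
> }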
>
> I guess we decided it wasn't worth the hassle.
>
> So Rohit.  This one probably needs a comment rather than any change.
> We 'could' add a 'hold_buffered_mode' function that takes the mlock,
> verifies we are in buffered mode and continues to hold the lock
> until the 'release_buffered_mode'.  However, I'm not sure any other
> drivers do this particular dance, so clear commenting in the driver
> might be enough.   Should we ever change how mlock is used in the
> core, we'd have to fix this driver up as well.
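> 
> For illustration only, that pair might look roughly like this (these
> helpers do not exist in the IIO core today; the names and error code
> are made up):
> 
> static int iio_device_hold_buffered_mode(struct iio_dev *indio_dev)
> {
> 	mutex_lock(&indio_dev->mlock);
> 	if (!iio_buffer_enabled(indio_dev)) {
> 		mutex_unlock(&indio_dev->mlock);
> 		return -EBUSY;
> 	}
> 	/* mlock stays held so the buffer cannot be torn down under us */
> 	return 0;
> }
> 
> static void iio_device_release_buffered_mode(struct iio_dev *indio_dev)
> {
> 	mutex_unlock(&indio_dev->mlock);
> }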
>
> Hmm.  This is really hammering home that perhaps all the remaining
> mlock cases are 'hard'.

Heh, I really had to look over what I was doing here, since it has been
almost half a decade now :).

I think locking that mutex was only there to prevent another read during
the temp reading, and I'm not really sure how effective that actually is,
especially since the I2C subsystem should handle those reads in a
queue-like fashion.

- Matt

>
> Thanks,
>
> Jonathan
>
> >
> >               if (!iio_buffer_enabled(indio_dev))
> >                       ret = -EAGAIN;
> > @@ -399,7 +399,7 @@ static int max30100_read_raw(struct iio_dev *indio_dev,
> >
> >               }
> >
> > -             mutex_unlock(&indio_dev->mlock);
> > +             mutex_unlock(&data->lock);
> >               break;
> >       case IIO_CHAN_INFO_SCALE:
> >               *val = 1;  /* 0.0625 */
>
