Message-ID: <20240810105411.705cb225@jic23-huawei>
Date: Sat, 10 Aug 2024 10:54:11 +0100
From: Jonathan Cameron <jic23@...nel.org>
To: Matteo Martelli <matteomartelli3@...il.com>
Cc: Lars-Peter Clausen <lars@...afoo.de>, Rob Herring <robh@...nel.org>,
 Krzysztof Kozlowski <krzk+dt@...nel.org>, Conor Dooley
 <conor+dt@...nel.org>, Marius Cristea <marius.cristea@...rochip.com>,
 linux-iio@...r.kernel.org, devicetree@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 3/3] iio: adc: add support for pac1921

On Tue, 06 Aug 2024 11:53:12 +0200
Matteo Martelli <matteomartelli3@...il.com> wrote:

> Jonathan Cameron wrote:
> > > > > +
> > > > > +/*
> > > > > + * Emit on sysfs the list of available scales contained in scales_tbl
> > > > > + *
> > > > > + * TODO: this function can be replaced with iio_format_avail_list() if the
> > > > > + * latter is ever exported.
> > > > 
> > > > You could just have added a precursor patch doing that.
> > > > If you have time I'd certainly consider a patch that does export that function
> > > > and uses it here.
> > > >    
> > > I wasn't sure that one usage was enough to justify the export. I could
> > > definitely do it, I am assuming it would now go to a new patch series since
> > > this has already been merged into testing, right?  
> > The requirements for justifying the export of an existing function are
> > lower than those for adding a new one.  As such I think it makes sense.
> > 
> > As you note, needs a separate patch on top of the tree.
> >   
> I will try to address this more generally by adding a new
> read_avail_release_resource() iio_info function, see below. If that goes
> through, exporting the iio_format_avail_list() would not be necessary since the
> driver could directly use the read_avail iio_info function.
> 
> > >   
> > > > > + *
> > > > > + * Must be called with lock held if the scales_tbl can change at runtime
> > > > > + * (e.g. for the current scales table)
> > > > > + */
> > > > > +static ssize_t pac1921_format_scale_avail(const int (*const scales_tbl)[2],
> > > > > +					  size_t size, char *buf)
> > > > > +{
> > > > > +	ssize_t len = 0;
> > > > > +
> > > > > +	for (unsigned int i = 0; i < size; i++) {
> > > > > +		if (i != 0) {
> > > > > +			len += sysfs_emit_at(buf, len, " ");
> > > > > +			if (len >= PAGE_SIZE)
> > > > > +				return -EFBIG;
> > > > > +		}
> > > > > +		len += sysfs_emit_at(buf, len, "%d.%09d", scales_tbl[i][0],
> > > > > +				     scales_tbl[i][1]);
> > > > > +		if (len >= PAGE_SIZE)
> > > > > +			return -EFBIG;
> > > > > +	}
> > > > > +
> > > > > +	len += sysfs_emit_at(buf, len, "\n");
> > > > > +	return len;
> > > > > +}
> > > > > +
> > > > > +/*
> > > > > + * Read available scales for a specific channel
> > > > > + *
> > > > > + * NOTE: using extended info instead of iio.read_avail() because access to
> > > > > + * current scales must be locked as they depend on the shunt resistor, which
> > > > > + * may change at runtime. A caller of iio.read_avail() would access the table
> > > > > + * unlocked instead.    
> > > > 
> > > > That's a corner case we should think about closing. Would require an indicator
> > > > to read_avail that the buffer it has been passed is a snapshot that it should
> > > > free on completion of the string building.  I don't like passing ownership
> > > > of data around like that, but it is fiddly to do anything else given
> > > > any simple double buffering is subject to race conditions.
> > > >    
> > > If I understand your suggestion the driver would allocate a new table and copy
> > > the values into it at each read_avail() call. Then
> > > iio_read_channel_info_avail() would free the buffer if some sort of
> > > free-after-use indicator flag is set. I guess such an indicator might be set
> > > via an additional read_avail() function argument (which would be an extensive
> > > API change) or maybe via a new iio_chan_spec attribute.  
> > 
> > Probably needs to be in read_avail() as otherwise we end up with yet more masks.
> > However, doesn't need to be global.  read_avail_ext() could be added that
> > is used in preference to read_avail() if it is supplied.  That new one can
> > be used only by drivers that need to handle the allocation and free.
> > However I prefer the explicit resource free option as we can in theory
> > at least do much cleverer things than simply freeing the buffer.
> >   
> > >   
> > > > An alternative would use a key of some type to associate individual
> > > > read_avail() calls with matching calls to read_avail_release_resource().
> > > > That might be cleaner.
> > > >     
> > > Are you suggesting introducing a new read_avail_release_resource() callback
> > > that would be called at the end of iio_read_channel_info_avail() if set?
> > > Similarly to the previous point, the driver would allocate a new table and
> > > copy the values into it at each read_avail() call, but the driver would also
> > > define a release callback to free that table. If instead you are referring to
> > > something less trivial, is there a similar API in the kernel that can be
> > > referred to for clarity?  
> > 
> > Indeed what you suggest. The key point is that it puts the burden on the
> > driver to do its own management. That avoids handing ownership of the buffer
> > to the core, which is a pattern I'm not that keen on if we can avoid it.
> > 
> > The new callback would take the buffer pointer that came back from read_avail()
> > and pass that back to the driver.  In simple case the driver could just
> > free the buffer.  However, it could also do some cleverer stuff to keep
> > it around if a write hasn't raced with this code.  That might make sense if
> > it's a big table and calculating the values is expensive.
> >  
> I am trying to achieve this and it looks pretty straightforward for the case we
> considered; IIO would be extended as follows:
> 
> diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
> index e6fad8a6a1fc..fe6ad8e9722f 100644
> --- a/drivers/iio/industrialio-core.c
> +++ b/drivers/iio/industrialio-core.c
> @@ -860,12 +860,20 @@ static ssize_t iio_read_channel_info_avail(struct device *dev,
>                 return ret;
>         switch (ret) {
>         case IIO_AVAIL_LIST:
> -               return iio_format_avail_list(buf, vals, type, length);
> +               ret = iio_format_avail_list(buf, vals, type, length);
> +               break;
>         case IIO_AVAIL_RANGE:
> -               return iio_format_avail_range(buf, vals, type);
> +               ret = iio_format_avail_range(buf, vals, type);
> +               break;
>         default:
> -               return -EINVAL;
> +               ret = -EINVAL;
>         }
> +
> +       if (indio_dev->info->read_avail_release_resource)
> +               indio_dev->info->read_avail_release_resource(
> +                       indio_dev, this_attr->c, vals, this_attr->address);
> +
> +       return ret;
>  }
> 
>  /**
> diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
> index f6c0499853bb..0ab08b94bad0 100644
> --- a/include/linux/iio/iio.h
> +++ b/include/linux/iio/iio.h
> @@ -491,6 +491,10 @@ struct iio_info {
>                           int *length,
>                           long mask);
> 
> +       void (*read_avail_release_resource)(struct iio_dev *indio_dev,
> +                                           struct iio_chan_spec const *chan,
> +                                           const int *vals, long mask);
> +
>         int (*write_raw)(struct iio_dev *indio_dev,
>                          struct iio_chan_spec const *chan,
>                          int val, 
> 
> And with the following usage example for the pac1921 driver:
> 
> static int pac1921_read_avail(struct iio_dev *indio_dev,
> 			      struct iio_chan_spec const *chan,
> 			      const int **vals, int *type, int *length,
> 			      long mask)
> {
> 	switch (mask) {
> 	//...
> 	case IIO_CHAN_INFO_SCALE:
> 		switch (chan->channel) {
> 		//...
> 		case PAC1921_CHAN_CURRENT: {
> 			struct pac1921_priv *priv = iio_priv(indio_dev);
> 			size_t len;
> 			int *buf;
> 
> 			len = ARRAY_SIZE(priv->current_scales) * 2;
> 			buf = kmalloc_array(len, sizeof(int), GFP_KERNEL);
> 			if (!buf)
> 				return -ENOMEM;
> 
> 			for (unsigned int i = 0; i < len; i++)
> 				buf[i] = ((int *)priv->current_scales)[i];
> 
> 			*vals = buf;
> 			*length = (int)len;
> 			*type = IIO_VAL_INT_PLUS_NANO;
> 			return IIO_AVAIL_LIST;
> 		}
> 		default:
> 			return -EINVAL;
> 		}
> 	default:
> 		return -EINVAL;
> 	}
> }
> 
> static void pac1921_read_avail_release_res(struct iio_dev *indio_dev,
> 					   struct iio_chan_spec const *chan,
> 					   const int *vals, long mask)
> {
> 	if (mask == IIO_CHAN_INFO_SCALE &&
> 	    chan->channel == PAC1921_CHAN_CURRENT)
> 		kfree(vals);
> }
> 
> static const struct iio_info pac1921_iio = {
> 	//...
> 	.read_avail = pac1921_read_avail,
> 	.read_avail_release_resource = pac1921_read_avail_release_res,
> };
> 
> However I noticed that some consumer drivers also expose the producer's
> available lists through the following functions:
> - iio_read_avail_channel_attribute()
> - iio_read_avail_channel_raw()
> - iio_channel_read_max()
> - iio_channel_read_min()
> 
> While addressing read_max()/read_min() is trivial, since release_resource() can
> be called at the end of those functions, I think the first two would need to be
> tracked as well for later release by the consumer drivers.

We can mostly avoid this by taking a copy in the consumers that use these
interfaces, then immediately calling the release.

> So for example the consumer driver would also expose an
> iio_read_avail_channel_attribute_release_resource() (any suggestion for shorter
> function names?) mapped to the read_avail_release_resource() iio_info function.

> However, the fact that iio_read_avail_channel_attribute() locks on
> info_exist_lock makes me think that the driver could be unregistered between a
> read_avail() and a read_avail_release_resource(), and in that case an allocated
> list would be leaked, right? Any suggestion on how best to handle this case? My
> guess is to let iio destroy the list at some point during device release; that
> could be done if the list allocation went through devm_kmalloc() (or similar),
> but I think it would result in double frees in the usual case, so maybe there
> should be a way to let it free the list only if not already freed? Or maybe a
> completely different approach?

Locking is a bit of a pain. I don't want to reference count for something
as trivial as this.

Perhaps the original idea of a release callback isn't the best solution for
these in-kernel interfaces and we should just 'always' make a copy of the data
to avoid the lifetime issue.  I don't want to do that for the IIO core case
because it's a big waste of memory and we don't have the lifetime issues there,
but for the in-kernel consumer interfaces copying sounds fine.

> 
> > >   
> > > > oh well, a cleanup job for another day.   I suspect we have drivers today
> > > > that are subject to tearing of their available lists.
> > > >     
> > > I've just taken a quick look at the other drivers and the following two seem
> > > to have the race condition issue, since they update an available table
> > > during a write_raw() call and also expose it during a read_avail() call:
> > > * drivers/iio/light/as73211.c: see int_time_avail table
> > > * drivers/iio/adc/ad7192.c: see filter_freq_avail table
> > > 
> > > There might be others, I've only looked into those that seemed likely to have
> > > this issue after some trivial greps.
> > > 
> > > Is there already a common way for iio to keep track of open issues (e.g. Issue
> > > tracker/TODO lists/etc)?  
> > 
> > Not really.  Email to the list tends to be the most we do for tracking.
> > I have had various todo lists public over the years, but they tend to rot.
> > 
> > Fix stuff before we forget about it! :(
> >   
> I could try to provide fix patches for those two drivers as well, but I could
> not test them on real HW. I am wondering whether to add them to the same
> release_resource() patch series or to a separate series, since those fixes
> could sit for a while waiting for additional tests.
Either is fine.  I don't necessarily have to pick the whole series up in one
go.  Just put those other drivers towards the end.

Jonathan

> 
> Thanks,
> Matteo Martelli
> 

