Message-ID: <CACRpkdaERtyiYhJVB536YOgB6JOMTV=eME2Tq6ed3JndZkhq7g@mail.gmail.com>
Date: Fri, 11 Feb 2022 02:10:56 +0100
From: Linus Walleij <linus.walleij@...aro.org>
To: Massimo Toscanelli <massimo.toscanelli@...ca-geosystems.com>
Cc: linux-kernel@...r.kernel.org, jic23@...nel.org, lars@...afoo.de,
caihuoqing@...du.com, aardelean@...iqon.com,
andy.shevchenko@...il.com, hdegoede@...hat.com,
Qing-wu.Li@...ca-geosystems.com.cn, stephan@...hold.net,
linux-iio@...r.kernel.org, bsp-development.geo@...ca-geosystems.com
Subject: Re: [PATCH 1/2] iio: st_sensors: add always_on flag
On Mon, Feb 7, 2022 at 10:05 AM Massimo Toscanelli
<massimo.toscanelli@...ca-geosystems.com> wrote:
> The st_sensors_read_info_raw() implementation allows getting raw data
> from st_sensors, enabling and disabling the device at every read.
> This leads to delays in data access, caused by the msleep that waits
> for the hardware to be ready after every read.
>
> Introduce an always_on flag in st_sensor_data to allow the user to
> keep the device always enabled. This way, every data access to the
> device can be performed without delay.
>
> Add always_on sysfs attribute.
>
> Signed-off-by: Massimo Toscanelli <massimo.toscanelli@...ca-geosystems.com>
This creates special dependencies on sysfs poking etc.
Couldn't runtime PM solve this problem in a better way?
If you look at, for example:
drivers/iio/accel/kxsd9.c
and how the different pm_runtime* primitives are used there,
you get an idea.
Especially note
/*
* Set autosuspend to two orders of magnitude larger than the
* start-up time. 20ms start-up time means 2000ms autosuspend,
* i.e. 2 seconds.
*/
pm_runtime_set_autosuspend_delay(dev, 2000);
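For reference, a minimal sketch of the probe-time setup, modeled on
the kxsd9.c pattern (error handling omitted; dev stands for the
sensor's struct device):

#include <linux/pm_runtime.h>

	/* Tell the PM core the device is powered up, then hand it over */
	pm_runtime_set_active(dev);
	pm_runtime_enable(dev);
	/* Delay autosuspend so back-to-back reads don't power-cycle */
	pm_runtime_set_autosuspend_delay(dev, 2000);
	pm_runtime_use_autosuspend(dev);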
This creates a "hysteresis window" around when the device is
on, so it is not repeatedly powered off and on, but only shut off
after 2 seconds of inactivity.
This way no special userspace is needed to achieve what you want,
and it benefits everyone.
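Concretely, the raw-read path would then look something like this
(a sketch, not the exact kxsd9.c code; st_sensors_do_read() is a
hypothetical stand-in for the driver's actual register access):

	static int st_sensors_read_raw_sketch(struct device *dev, int *val)
	{
		int ret;

		/* Resume the device; this waits out the start-up time
		 * only if it was actually suspended */
		ret = pm_runtime_get_sync(dev);
		if (ret < 0) {
			pm_runtime_put_noidle(dev);
			return ret;
		}

		ret = st_sensors_do_read(dev, val); /* hypothetical helper */

		/* Restart the autosuspend timer instead of powering off */
		pm_runtime_mark_last_busy(dev);
		pm_runtime_put_autosuspend(dev);

		return ret;
	}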
I wanted to fix this for all the ST sensors but never got around to it.
Yours,
Linus Walleij