Date:   Thu, 18 Mar 2021 11:30:38 -0700
From:   Bhaumik Bhatt <bbhatt@...eaurora.org>
To:     Loic Poulain <loic.poulain@...aro.org>,
        Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        hemantk@...eaurora.org
Cc:     Jeffrey Hugo <jhugo@...eaurora.org>,
        Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>,
        Carl Yin(殷张成) <carl.yin@...ctel.com>, Naveen Kumar <naveen.kumar@...ctel.com>,
        jhugo=codeaurora.org@...eaurora.org
Subject: Re: [PATCH v4 1/3] bus: mhi: core: Introduce internal register poll
 helper function

On 2021-03-18 09:43 AM, Loic Poulain wrote:
> On Thu, 18 Mar 2021 at 17:13, Jeffrey Hugo <jhugo@...eaurora.org> 
> wrote:
>> 
>> On 3/17/2021 3:26 PM, Bhaumik Bhatt wrote:
>> > On 2021-03-11 11:59 AM, Jeffrey Hugo wrote:
>> >> On 3/11/2021 1:00 AM, Loic Poulain wrote:
>> >>> Hi Bhaumik,
>> >>>
>> >>> On Thu, 11 Mar 2021 at 00:31, Bhaumik Bhatt <bbhatt@...eaurora.org>
>> >>> wrote:
>> >>>>
>> >>>> Introduce a helper function to allow the MHI core driver to poll
>> >>>> for a value in a register field. This provides a common path to
>> >>>> read and poll register values, with a retry time interval.
>> >>>>
>> >>>> Signed-off-by: Bhaumik Bhatt <bbhatt@...eaurora.org>
>> >>>> ---
>> >>>>   drivers/bus/mhi/core/internal.h |  3 +++
>> >>>>   drivers/bus/mhi/core/main.c     | 23 +++++++++++++++++++++++
>> >>>>   2 files changed, 26 insertions(+)
>> >>>>
>> >>>> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
>> >>>> index 6f80ec3..005286b 100644
>> >>>> --- a/drivers/bus/mhi/core/internal.h
>> >>>> +++ b/drivers/bus/mhi/core/internal.h
>> >>>> @@ -643,6 +643,9 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
>> >>>>   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>> >>>>                                      void __iomem *base, u32 offset, u32 mask,
>> >>>>                                      u32 shift, u32 *out);
>> >>>> +int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
>> >>>> +                                   void __iomem *base, u32 offset, u32 mask,
>> >>>> +                                   u32 shift, u32 val, u32 delayus);
>> >>>>   void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
>> >>>>                     u32 offset, u32 val);
>> >>>>   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
>> >>>> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
>> >>>> index 4e0131b..7c7f41a 100644
>> >>>> --- a/drivers/bus/mhi/core/main.c
>> >>>> +++ b/drivers/bus/mhi/core/main.c
>> >>>> @@ -4,6 +4,7 @@
>> >>>>    *
>> >>>>    */
>> >>>>
>> >>>> +#include <linux/delay.h>
>> >>>>   #include <linux/device.h>
>> >>>>   #include <linux/dma-direction.h>
>> >>>>   #include <linux/dma-mapping.h>
>> >>>> @@ -37,6 +38,28 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>> >>>>          return 0;
>> >>>>   }
>> >>>>
>> >>>> +int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
>> >>>> +                                   void __iomem *base, u32 offset,
>> >>>> +                                   u32 mask, u32 shift, u32 val, u32 delayus)
>> >>>> +{
>> >>>> +       int ret;
>> >>>> +       u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
>> >>>> +
>> >>>> +       while (retry--) {
>> >>>> +               ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
>> >>>> +                                        &out);
>> >>>> +               if (ret)
>> >>>> +                       return ret;
>> >>>> +
>> >>>> +               if (out == val)
>> >>>> +                       return 0;
>> >>>> +
>> >>>> +               udelay(delayus);
>> >>> Have you read my previous comment?
>> >>> Do you really want to risk hogging the CPU for several seconds? We
>> >>> know that some devices take several seconds to start/boot.
>> >>> Why not use an msleep variant here?
>> >>
>> >> usleep_range() if there is a desire to stay in us units?
>> >>
>> >> Given that the use of this function is for 25ms in one case, I wonder
>> >> if this warning is applicable:
>> >> https://elixir.bootlin.com/linux/latest/source/include/linux/delay.h#L28
>> >>
>> >> Counterpoint: 1ms latency over PCIe is not unusual.  I know we've
>> >> removed the PCIe dependencies from MHI, but PCIe is the real use
>> >> case at this time.  It seems like this function could behave a bit
>> >> oddly if the parameter to udelay is something like "100" but the
>> >> mhi_read_reg_field() call takes significantly longer than that.  It
>> >> feels like in some scenarios we could actually exceed the timeout by
>> >> a non-trivial margin.
>> >>
>> >> I guess I'm going back and forth on whether us-scale timing is a
>> >> benefit in any way.
>> > Thanks for all the inputs. I think a good idea here would be to use
>> > the fsleep() API, as we need to allow any timeout the caller
>> > specifies. Also, the plan is to drop patch #3 in this series, since
>> > it would require a busy-wait due to the code being in the panic path.
>> >
>> > I don't wish to accommodate another variable here for a busy-wait,
>> > but that would be an option to pick sleep or delay depending on the
>> > caller's path.
>> >
>> > Please respond if there are any concerns.
>> 
>> fsleep() would be some improvement, but I think there is still the
>> issue Loic points out: if delayus is small but timeout_ms is large
>> (say 50us and 25s), this function will end up burning a lot of CPU
>> cycles. However, that is likely an edge case, and if your poll cycle
>> is that small, I think it should be assumed that the event is
>> expected to happen quickly, so the full timeout should not be hit.
> 
> Well, my point is that during initial power_up, with a device
> cold-booting, it can take several seconds for it to reach the ready
> state (not a corner case). That's why timeout_ms can be as large as
> 20 seconds for mhi_pci_modem. If polling is based on busy-waiting,
> that means the while loop will not let the CPU run anything else for
> several seconds. I'm not sure what the expected meaning of this
> timeout_ms is in the first place... maybe I'm just using it badly.
>
> Moreover, do we need microsecond latency on detecting the ready
> transition? This is not a critical path, right?
> 
> Regards,
> Loic
Yes, at initial boot the device could take several seconds to become ready.

If we were to force the caller to use an interval on the order of
milliseconds, I'd still be using fsleep() internally anyway and just
multiplying the value by 1000 before passing it on, since there's still
a need to check whether the value is greater than 20ms or not.

I don't wish to reinvent the wheel and reimplement internally, for
msec, what we already have in fsleep().
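
For reference, fsleep() already autoselects the mechanism based on the
requested interval. Roughly (from include/linux/delay.h; quoting the
v5.10-era logic from memory, so please double-check against your tree):

    static inline void fsleep(unsigned long usecs)
    {
            if (usecs <= 10)                /* short enough to busy-wait */
                    udelay(usecs);
            else if (usecs <= 20000)        /* hrtimer-backed sleep */
                    usleep_range(usecs, 2 * usecs);
            else                            /* long sleeps: jiffies granularity */
                    msleep(DIV_ROUND_UP(usecs, 1000));
    }

which is where the 20ms boundary mentioned above comes from.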

It would be recommended that the caller specify an interval of at
least 20 msec, but I don't think we can enforce that.

A point in favor of using microseconds: if we were to expand usage of
this API in the future to do a busy-wait in the panic path, we
wouldn't have to change the parameters.

3 options:
1. Use msec granularity and implement a partial fsleep() for msec
   within the new API.
2. Use fsleep() and leave it at usec granularity.
3. Leave it at usec granularity and add a busywait boolean, allowing
   the caller to choose between udelay() and fsleep(); this also allows
   usage of this API in the panic path (for patch #3). A rough sketch
   follows below.
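
For illustration, an untested sketch of option 3 (the -ETIMEDOUT
fall-through at the end is my assumption, since the tail of the
function isn't shown in the quote above):

    int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
                                        void __iomem *base, u32 offset,
                                        u32 mask, u32 shift, u32 val,
                                        u32 delayus, bool busywait)
    {
            int ret;
            u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;

            while (retry--) {
                    ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask,
                                             shift, &out);
                    if (ret)
                            return ret;

                    if (out == val)
                            return 0;

                    if (busywait)
                            udelay(delayus);  /* panic path: must not sleep */
                    else
                            fsleep(delayus);  /* normal path: let the CPU run */
            }

            return -ETIMEDOUT;  /* assumed; tail not shown in the quote */
    }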

I like options 2 and 3. Hemant/Mani, your guidance is welcome.

Thanks,
Bhaumik
---
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project
