Message-ID: <CAPDyKFoZEc-m7NMnaAa5bjtCSp4wyJqic3cLHk95xracoWcCUA@mail.gmail.com>
Date:   Tue, 22 Oct 2019 10:40:55 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Ram Prakash Gupta <rampraka@...eaurora.org>
Cc:     Asutosh Das <asutoshd@...eaurora.org>,
        Sahitya Tummala <stummala@...eaurora.org>,
        Sayali Lokhande <sayalil@...eaurora.org>,
        Veerabhadrarao Badiganti <vbadigan@...eaurora.org>,
        cang@...eaurora.org, ppvk@...eaurora.org,
        Adrian Hunter <adrian.hunter@...el.com>,
        Rob Herring <robh+dt@...nel.org>,
        "linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        DTML <devicetree@...r.kernel.org>
Subject: Re: [RFC 0/6] mmc: Add clock scaling support for mmc driver

On Mon, 21 Oct 2019 at 16:30, Ram Prakash Gupta <rampraka@...eaurora.org> wrote:
>
> This change adds devfreq based clock scaling support to the MMC
> driver. It is applicable to both eMMC and SD card.
> For some workloads, such as video playback, it isn't necessary
> for these cards to run at high speed. Running at a lower
> frequency in such cases can still meet the deadlines for data
> transfers.
>
> Scaling down the clock frequency dynamically saves power, not
> only because the bus runs at a lower frequency but also because
> the system core voltage can be scaled down as well, where
> supported. This series provides ondemand clock scaling, similar
> to the cpufreq ondemand governor, with two thresholds,
> up_threshold and down_threshold, which decide whether to raise
> or lower the frequency according to the load.
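
(For reference, a two-threshold ondemand scheme like the one described
above typically maps onto devfreq's simple_ondemand governor roughly as
in the sketch below. The devfreq structures and calls are the upstream
API; the mmc_clkscale_* names are placeholders I made up, not code from
this series.)

#include <linux/devfreq.h>
#include <linux/err.h>

static int mmc_clkscale_target(struct device *dev, unsigned long *freq,
			       u32 flags)
{
	/* Placeholder: reprogram the host/bus clock to *freq here. */
	return 0;
}

static int mmc_clkscale_get_status(struct device *dev,
				   struct devfreq_dev_status *stat)
{
	/* Placeholder: report busy_time/total_time since the last poll. */
	stat->busy_time = 0;
	stat->total_time = 1;
	return 0;
}

static struct devfreq_dev_profile mmc_clkscale_profile = {
	.polling_ms	= 100,			/* load sampling period */
	.target		= mmc_clkscale_target,
	.get_dev_status	= mmc_clkscale_get_status,
};

/*
 * Thresholds in percent of busy time: scale up above upthreshold,
 * scale down when the load drops below (upthreshold - downdifferential).
 */
static struct devfreq_simple_ondemand_data mmc_clkscale_ondemand = {
	.upthreshold	  = 70,
	.downdifferential = 5,
};

static int mmc_clkscale_init(struct device *dev)
{
	struct devfreq *df;

	df = devfreq_add_device(dev, &mmc_clkscale_profile,
				DEVFREQ_GOV_SIMPLE_ONDEMAND,
				&mmc_clkscale_ondemand);
	return PTR_ERR_OR_ZERO(df);
}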

This sounds simple, but what the series is doing is far more
complicated than just scaling the bus clock, as it also re-negotiates
the bus speed mode.

Each time the trigger point for scaling up/down is hit, a series of
commands needs to be sent to the card, including running the tuning
procedure. The point is, this certainly doesn't come for free, both
from a latency point of view and from an energy-cost point of view.
So whether this really improves the behaviour seems very use-case
sensitive, right!?
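
To make the cost a bit more concrete, roughly this is what a single
scaling event involves for an HS200 eMMC (a sketch only; all the
example_* helpers below are placeholders, not functions from the mmc
core or from this series):

#include <linux/mmc/host.h>

/* Placeholder hooks, stubbed out only to keep the sketch self-contained. */
static void example_quiesce_requests(struct mmc_host *host) { }
static void example_resume_requests(struct mmc_host *host) { }
static void example_set_host_clock(struct mmc_host *host, unsigned int hz) { }
static int example_switch_timing(struct mmc_host *host, unsigned int hz)
{ return 0; }
static int example_retune(struct mmc_host *host) { return 0; }

static int example_scale_bus_speed(struct mmc_host *host, unsigned int new_hz)
{
	int err;

	/* 1. Quiesce the request queue: no data transfers while switching. */
	example_quiesce_requests(host);

	/*
	 * 2. CMD6 (SWITCH) to update HS_TIMING in the EXT_CSD, then wait
	 *    for the card to come out of its busy state.
	 */
	err = example_switch_timing(host, new_hz);
	if (err)
		goto out;

	/* 3. Reprogram the host controller clock to the new frequency. */
	example_set_host_clock(host, new_hz);

	/*
	 * 4. Speed modes that require tuning (e.g. HS200) must re-run it
	 *    at the new frequency: repeated CMD21 (SEND_TUNING_BLOCK_HS200)
	 *    iterations until a good sample point is found. This is where
	 *    much of the latency (and extra energy) goes.
	 */
	err = example_retune(host);

out:
	/* 5. Resume the queue; any pending I/O has seen the added latency. */
	example_resume_requests(host);
	return err;
}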

Overall, when it comes to use cases, we have very limited knowledge
about them at the mmc block layer level, and I think it should remain
that way. If anywhere at all, this information is better maintained by
the generic block layer and potentially the configured I/O scheduler.

This brings me to a question about the tests you have run. Can you
share some information and data about that?

>
>
> Ram Prakash Gupta (6):
>   mmc: core: Parse clk scaling dt entries
>   mmc: core: Add core scaling support in driver
>   mmc: core: Initialize clk scaling for mmc and SDCard
>   mmc: core: Add debugfs entries for scaling support
>   mmc: sdhci-msm: Add capability in platform host
>   dt-bindings: mmc: sdhci-msm: Add clk scaling dt parameters
>
>  .../devicetree/bindings/mmc/sdhci-msm.txt          |  19 +

I noticed that the DT patch was put last in the series, but the
parsing is implemented in the first patch. Please flip this around. If
you want to implement DT parsing of new bindings, please make sure to
discuss the new bindings first.

>  drivers/mmc/core/block.c                           |  19 +-
>  drivers/mmc/core/core.c                            | 777 +++++++++++++++++++++
>  drivers/mmc/core/core.h                            |  17 +
>  drivers/mmc/core/debugfs.c                         |  90 +++
>  drivers/mmc/core/host.c                            | 226 ++++++
>  drivers/mmc/core/mmc.c                             | 246 ++++++-
>  drivers/mmc/core/queue.c                           |   2 +
>  drivers/mmc/core/sd.c                              |  84 ++-
>  drivers/mmc/host/sdhci-msm.c                       |   2 +
>  include/linux/mmc/card.h                           |   7 +
>  include/linux/mmc/host.h                           |  66 ++
>  12 files changed, 1550 insertions(+), 5 deletions(-)

This is a lot of new code in the mmc core, which I would then need to
maintain, of course. I have to admit, I am a bit concerned about that,
so you have to convince me that there are good reasons for me to apply
this.

As I stated above, I think the approach looks quite questionable in
general. And even if you can share measurements showing that it
improves the behaviour, I suspect (without a deeper code review) that
some of the code would better belong in the common block device layer.

Kind regards
Uffe
