Message-ID: <CAD=NsqxDA=usDRa-KV48RkeEROARsw8JqBF5vyJcEEV5r_Fg1w@mail.gmail.com>
Date:   Wed, 25 Oct 2023 10:01:44 +0200
From:   Kornel Dulęba <korneld@...omium.org>
To:     Adrian Hunter <adrian.hunter@...el.com>
Cc:     linux-mmc@...r.kernel.org, linux-kernel@...r.kernel.org,
        Asutosh Das <quic_asutoshd@...cinc.com>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Radoslaw Biernacki <biernacki@...gle.com>,
        Guenter Roeck <groeck@...omium.org>,
        Gwendal Grignou <gwendal@...omium.org>, upstream@...ihalf.com
Subject: Re: [PATCH] mmc: cqhci: Be more verbose in error irq handler

On Mon, Oct 23, 2023 at 1:38 PM Adrian Hunter <adrian.hunter@...el.com> wrote:
>
> On 20/10/23 11:53, Kornel Dulęba wrote:
> > On Fri, Oct 20, 2023 at 9:41 AM Adrian Hunter <adrian.hunter@...el.com> wrote:
> >>
> >> On 16/10/23 12:56, Kornel Dulęba wrote:
> >>> There are several reasons for the controller to generate an error
> >>> interrupt. They include controller<->card timeouts and CRC mismatch
> >>> errors.
> >>> Right now we only get one line in the logs stating that CQE recovery
> >>> was triggered, with no information about what caused it.
> >>> To figure out what happened, be more verbose and dump the registers
> >>> from the IRQ error handler logic.
> >>> This matches the behaviour of the software timeout logic, see
> >>> cqhci_timeout.
> >>>
> >>> Signed-off-by: Kornel Dulęba <korneld@...omium.org>
> >>> ---
> >>>  drivers/mmc/host/cqhci-core.c | 5 +++--
> >>>  1 file changed, 3 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
> >>> index b3d7d6d8d654..33abb4bd53b5 100644
> >>> --- a/drivers/mmc/host/cqhci-core.c
> >>> +++ b/drivers/mmc/host/cqhci-core.c
> >>> @@ -700,8 +700,9 @@ static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error,
> >>>
> >>>       terri = cqhci_readl(cq_host, CQHCI_TERRI);
> >>>
> >>> -     pr_debug("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
> >>> -              mmc_hostname(mmc), status, cmd_error, data_error, terri);
> >>> +     pr_warn("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d\n",
> >>> +              mmc_hostname(mmc), status, cmd_error, data_error);
> >>> +     cqhci_dumpregs(cq_host);
> >>
> >> For debugging, isn't dynamic debug more appropriate?
> >
> > Dynamic debug is an option, but my personal preference would be to
> > just log more info in the error handler.
>
> Interrupt handlers can get called very rapidly, so some kind of rate
> limiting should be used if the message is unconditional.  Also you need
> to provide actual reasons for your preference.
>
> For dynamic debug of the register dump, something like below is
> possible.
>
> #define cqhci_dynamic_dumpregs(cqhost) \
>         _dynamic_func_call_no_desc("cqhci_dynamic_dumpregs", cqhci_dumpregs, cqhost)
>
Fair point.
The reason I'm not a fan of using dynamic debug for this is that my
goal here is to improve the warning/error logging that we get from
systems running in production.
I.e., if we get a lot of "running CQE recovery" messages, at the very
least I'd like to know what is causing them.
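
A rate-limited version would address the interrupt-storm concern while
keeping the message on in production. A minimal, untested sketch,
reusing the existing format string (pr_warn_ratelimited is the stock
printk helper):

	/* Sketch: same data as the current pr_debug, but rate limited. */
	pr_warn_ratelimited("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
			    mmc_hostname(mmc), status, cmd_error, data_error, terri);
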
> > To give you some background.
> > We're seeing some "running CQE recovery" lines in the logs, followed
> > by a dm_verity mismatch error.
> > The reports come from the field, with no feasible way to reproduce the
> > issue locally.
>
> If it is a software error, some kind of error injection may well
> reproduce it.  Also if it is a hardware error that only happens
> during recovery, error injection could increase the likelihood of
> reproducing it.

We tried software error injection and it didn't yield any results.
We're currently looking into "injecting" HW errors by using a small
burst field generator to interfere with transfers on the data line
directly.
>
> >
> > I'd argue that logging only the info that CQE recovery was executed is
> > not particularly helpful for someone looking into those logs.
>
> As the comment says, that message is there because recovery reduces
> performance; it is not there to aid debugging per se.
>
> > Ideally we would have more data about the state the controller was in
> > when the error happened, or at least what caused the recovery to be
> > triggered.
> > The question here is how verbose should we be in this error scenario.
> > Looking at other error scenarios, in the case of a software timeout
> > we're dumping the controller registers (see cqhci_timeout).
>
> Timeout means something is broken - either the driver, the cq engine
> or the card.  On the other hand, an error interrupt is most likely a
> CRC error which is not unexpected occasionally, due to thermal drift
> or perhaps interference.

Right, but my point is that we don't know what triggered CQE recovery.
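
For illustration only: the cmd_error/data_error values passed into
cqhci_error_irq() could be mapped to a human-readable cause, assuming
the usual convention of -ETIMEDOUT for a controller<->card timeout and
-EILSEQ for a CRC error. cqhci_err_str() below is a hypothetical
helper, not existing code:

	/* Hypothetical helper: name the likely cause behind an error code. */
	static const char *cqhci_err_str(int err)
	{
		switch (err) {
		case -ETIMEDOUT:	/* controller<->card timeout */
			return "timeout";
		case -EILSEQ:		/* CRC mismatch on the line */
			return "CRC";
		default:
			return "unknown";
		}
	}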

>
> > Hence I thought it'd be appropriate to match that and do the same
> > in the CQE recovery logic.
>
> It needs to be consistent. There are other pr_debugs, such as:
>
>                 pr_debug("%s: cqhci: Failed to clear tasks\n",
>                 pr_debug("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
>                 pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));
>
> which should perhaps be treated the same.
>
> And there are no messages for errors from the commands in
> mmc_cqe_recovery().

How about this: as a compromise, would it be okay to do a single
pr_warn directly from cqhci_error_irq?
We could simply promote the existing pr_debug to pr_warn at the
beginning of that function.
This would tell us what triggered the recovery (controller timeout or
CRC mismatch); a sketch follows below.
We could also consider removing the "running CQE recovery" print for
the sake of brevity.
The only downside of this that I can see is that we'd be running the
logic from the interrupt handler directly, but I can't see an easy way
around that.
What do you think?
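
Concretely, the compromise could look something like the below.
Untested sketch against cqhci_error_irq, keeping the existing format
string; per your earlier point it may also want the _ratelimited
variant:

-	pr_debug("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
-		 mmc_hostname(mmc), status, cmd_error, data_error, terri);
+	pr_warn("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
+		mmc_hostname(mmc), status, cmd_error, data_error, terri);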
>
> >
> >>
> >>>
> >>>       /* Forget about errors when recovery has already been triggered */
> >>>       if (cq_host->recovery_halt)
> >>
>
