Date:   Mon, 13 Feb 2023 16:21:34 +0800
From:   Wenchao Chen <wenchao.chen666@...il.com>
To:     Ulf Hansson <ulf.hansson@...aro.org>
Cc:     Adrian Hunter <adrian.hunter@...el.com>, orsonzhai@...il.com,
        baolin.wang@...ux.alibaba.com, zhang.lyra@...il.com,
        axboe@...nel.dk, avri.altman@....com, kch@...dia.com,
        CLoehle@...erstone.com, vincent.whitchurch@...s.com,
        bigeasy@...utronix.de, s.shtylyov@....ru,
        michael@...winnertech.com, linux-mmc@...r.kernel.org,
        linux-kernel@...r.kernel.org, megoo.tang@...il.com,
        lzx.stg@...il.com
Subject: Re: [PATCH V2 0/2] mmc: block: Support Host to control FUA

On Thu, Feb 9, 2023 at 10:51 PM Ulf Hansson <ulf.hansson@...aro.org> wrote:
>
> On Fri, 11 Nov 2022 at 13:04, Ulf Hansson <ulf.hansson@...aro.org> wrote:
> >
> > [...]
> >
> > > >
> > > > To check data integrity, we ran a random power-down test, and the
> > > > experimental results were good.
> > > >
> > > > FUA can only reduce the amount of data lost under abnormal conditions
> > > > (such as sudden power loss); it cannot prevent data loss entirely.
> > > >
> > > > I think there should be a balance between FUA and NO FUA, but
> > > > filesystems seem to favor FUA.
> > > >
> > > > FUA brings a drop in random write performance. If enough tests are
> > > > done, NO FUA is acceptable.
> > >
> > > Testing this isn't entirely easy. It requires you to hook up
> > > electrical switches to allow you to automate the powering on/off of
> > > the platform(s). Then at each cycle, really make sure to stress test
> > > the data integrity of the flash memory. Is that what the tests did -
> > > or can you elaborate a bit on what was really tested?
> > >
> > > In any case, the performance impact boils down to how each eMMC/SD
> > > card internally manages reliable writes vs regular writes. Some
> > > vendors may treat them very similarly, while others do not.
> > >
> > > That said, trying to disable REQ_FUA from an mmc host driver is the
> > > wrong approach, as also pointed out by Adrian above. These types of
> > > decisions belong solely in the mmc core layer.
> > >
> > > Instead of what the $subject series proposes, I would rather suggest
> > > we discuss (and test) whether it could make sense to disable REQ_FUA -
> > > *if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
> > > Hence, the mmc core could then announce only REQ_OP_FLUSH.
> > >
> >
> > Below is a simple patch that does the above. We may not want to enable
> > this for *all* eMMC/SD cards, but it works fine for testing and to
> > continue the discussions here.
> >
> >
> > From: Ulf Hansson <ulf.hansson@...aro.org>
> > Date: Fri, 11 Nov 2022 12:48:02 +0100
> > Subject: [PATCH] mmc: core: Disable REQ_FUA if the card supports an internal
> >  cache
> >
> > !!!! This is not for merge, but only for test and discussions!!!
> >
> > It has been reported that REQ_FUA can be costly on some eMMC devices. A
> > potential option that could mitigate this problem is to rely solely on
> > REQ_OP_FLUSH instead, but that requires the eMMC/SD card to support an
> > internal cache. This is an attempt to try it out and see how it behaves.
> >
> > Signed-off-by: Ulf Hansson <ulf.hansson@...aro.org>
> > ---
> >  drivers/mmc/core/block.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> > index db6d8a099910..197e9f6cdaad 100644
> > --- a/drivers/mmc/core/block.c
> > +++ b/drivers/mmc/core/block.c
> > @@ -2494,15 +2494,15 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> >                         md->flags |= MMC_BLK_CMD23;
> >         }
> >
> > -       if (md->flags & MMC_BLK_CMD23 &&
> > -           ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> > -            card->ext_csd.rel_sectors)) {
> > +       if (mmc_cache_enabled(card->host)) {
> > +               cache_enabled  = true;
> > +       } else if (md->flags & MMC_BLK_CMD23 &&
> > +                 (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
> > +                  card->ext_csd.rel_sectors)) {
> >                 md->flags |= MMC_BLK_REL_WR;
> >                 fua_enabled = true;
> >                 cache_enabled = true;
> >         }
> > -       if (mmc_cache_enabled(card->host))
> > -               cache_enabled  = true;
> >
> >         blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
> >
> > --
> > 2.34.1
>
> Wenchao,
>
> Did you manage to try the above patch to see if that could improve the
> situation?
>

Hi Uffe,
Yes, it can solve my problem. Thank you very much.
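
If it is useful for the discussion, here is my understanding of what the
patch changes at the block-queue level. This is only a sketch; the comments
are my own reading of blk_queue_write_cache() and the block layer's flush
handling, not something taken from the patch itself:

	/*
	 * With an internal cache present, the patch announces a volatile
	 * write cache but no FUA support:
	 */
	blk_queue_write_cache(md->queue.queue, true, false);

	/*
	 * The block layer still accepts REQ_FUA from the filesystem, but
	 * with fua == false it emulates it as a normal write followed by a
	 * REQ_OP_FLUSH (post-flush), so the card only sees regular writes
	 * plus cache flushes instead of reliable writes. The old behaviour
	 * for cards with CMD23 and reliable write support corresponds to:
	 */
	blk_queue_write_cache(md->queue.queue, true, true);
	/* here REQ_FUA is passed through and becomes a reliable write. */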

> Kind regards
> Uffe
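
Regarding the reliable write vs regular write handling mentioned above: if
I read the core correctly, REQ_FUA is what turns a write into a reliable
write, roughly as below (paraphrased from mmc_blk_rw_rq_prep() in
drivers/mmc/core/block.c, not a verbatim quote):

	/*
	 * A write is issued as a reliable write only when the request
	 * carries REQ_FUA and the card advertised reliable-write support.
	 */
	bool do_rel_wr = (req->cmd_flags & REQ_FUA) &&
			 rq_data_dir(req) == WRITE &&
			 (md->flags & MMC_BLK_REL_WR);

So once the queue stops announcing FUA, writes take the normal path and only
the explicit cache flushes reach the card.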
