Message-ID: <CAM9Jb+i_p44q=sS4P=B3Pr-T_jsM9Q-mUHg6i657dT7bSqKULw@mail.gmail.com>
Date:   Wed, 16 Feb 2022 18:01:27 +0100
From:   Pankaj Gupta <pankaj.gupta.linux@...il.com>
To:     Dan Williams <dan.j.williams@...el.com>
Cc:     Linux NVDIMM <nvdimm@...ts.linux.dev>,
        virtualization@...ts.linux-foundation.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        jmoyer <jmoyer@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>,
        Vishal L Verma <vishal.l.verma@...el.com>,
        Dave Jiang <dave.jiang@...el.com>,
        "Weiny, Ira" <ira.weiny@...el.com>,
        Pankaj Gupta <pankaj.gupta@...os.com>
Subject: Re: [RFC v3 2/2] pmem: enable pmem_submit_bio for asynchronous flush

> > > > Return from "pmem_submit_bio" when asynchronous flush is
> > > > still in progress in other context.
> > > >
> > > > Signed-off-by: Pankaj Gupta <pankaj.gupta.linux@...il.com>
> > > > ---
> > > >  drivers/nvdimm/pmem.c        | 15 ++++++++++++---
> > > >  drivers/nvdimm/region_devs.c |  4 +++-
> > > >  2 files changed, 15 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> > > > index fe7ece1534e1..f20e30277a68 100644
> > > > --- a/drivers/nvdimm/pmem.c
> > > > +++ b/drivers/nvdimm/pmem.c
> > > > @@ -201,8 +201,12 @@ static void pmem_submit_bio(struct bio *bio)
> > > >         struct pmem_device *pmem = bio->bi_bdev->bd_disk->private_data;
> > > >         struct nd_region *nd_region = to_region(pmem);
> > > >
> > > > -       if (bio->bi_opf & REQ_PREFLUSH)
> > > > +       if (bio->bi_opf & REQ_PREFLUSH) {
> > > >                 ret = nvdimm_flush(nd_region, bio);
> > > > +               /* asynchronous flush completes in other context */
> > >
> > > I think a negative error code is a confusing way to capture the case
> > > of "bio successfully coalesced into a previously pending flush
> > > request". Perhaps reserve negative codes for failure, 0 for
> > > synchronously completed, and > 0 for a coalesced flush request.
> >
> > Yes. I implemented it this way previously; will revert to it. Thanks!
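
A rough, untested sketch of how the pmem_submit_bio() side could look with
that convention (negative for failure, 0 for a synchronously completed
flush, > 0 for a bio coalesced into an already pending flush):

	if (bio->bi_opf & REQ_PREFLUSH) {
		ret = nvdimm_flush(nd_region, bio);
		/*
		 * ret > 0: bio coalesced into an already pending flush
		 * request; bio_endio() is called from the flush
		 * completion context, so stop processing here.
		 */
		if (ret > 0)
			return;
	}
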
> >
> > >
> > > > +               if (ret == -EINPROGRESS)
> > > > +                       return;
> > > > +       }
> > > >
> > > >         do_acct = blk_queue_io_stat(bio->bi_bdev->bd_disk->queue);
> > > >         if (do_acct)
> > > > @@ -222,13 +226,18 @@ static void pmem_submit_bio(struct bio *bio)
> > > >         if (do_acct)
> > > >                 bio_end_io_acct(bio, start);
> > > >
> > > > -       if (bio->bi_opf & REQ_FUA)
> > > > +       if (bio->bi_opf & REQ_FUA) {
> > > >                 ret = nvdimm_flush(nd_region, bio);
> > > > +               /* asynchronous flush completes in other context */
> > > > +               if (ret == -EINPROGRESS)
> > > > +                       return;
> > > > +       }
> > > >
> > > >         if (ret)
> > > >                 bio->bi_status = errno_to_blk_status(ret);
> > > >
> > > > -       bio_endio(bio);
> > > > +       if (bio)
> > > > +               bio_endio(bio);
> > > >  }
> > > >
> > > >  static int pmem_rw_page(struct block_device *bdev, sector_t sector,
> > > > diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> > > > index 9ccf3d608799..8512d2eaed4e 100644
> > > > --- a/drivers/nvdimm/region_devs.c
> > > > +++ b/drivers/nvdimm/region_devs.c
> > > > @@ -1190,7 +1190,9 @@ int nvdimm_flush(struct nd_region *nd_region, struct bio *bio)
> > > >         if (!nd_region->flush)
> > > >                 rc = generic_nvdimm_flush(nd_region);
> > > >         else {
> > > > -               if (nd_region->flush(nd_region, bio))
> > > > +               rc = nd_region->flush(nd_region, bio);
> > > > +               /* ongoing flush in other context */
> > > > +               if (rc && rc != -EINPROGRESS)
> > > >                         rc = -EIO;
> > >
> > > Why change this to -EIO vs just letting the error code through untranslated?
> >
> > The reason was to return a generic error code instead of passing
> > host-side return codes through to the guest?
>
> Ok, maybe add a comment to indicate the need to avoid exposing these
> error codes to a guest, so someone does not ask the same question in
> the future?

Sure.
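
Maybe something like this in nvdimm_flush() (just a sketch of how the
comment could read, together with the > 0 convention above):

	rc = nd_region->flush(nd_region, bio);
	/*
	 * Translate provider-specific failures to -EIO so that
	 * host-side error codes are not exposed to the guest.
	 */
	if (rc < 0)
		rc = -EIO;
	/* rc > 0: flush coalesced, completion happens in another context */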

Thanks,
Pankaj
