Open Source and information security mailing list archives
 
Message-ID: <79989c1537c66a90eca5339028ed447b3805e906.camel@linux.intel.com>
Date:   Mon, 17 Jul 2023 16:35:39 -0500
From:   Tom Zanussi <tom.zanussi@...ux.intel.com>
To:     Rex Zhang <rex.zhang@...el.com>
Cc:     dave.jiang@...el.com, davem@...emloft.net,
        dmaengine@...r.kernel.org, fenghua.yu@...el.com,
        giovanni.cabiddu@...el.com, herbert@...dor.apana.org.au,
        james.guilford@...el.com, kanchana.p.sridhar@...el.com,
        linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
        tony.luck@...el.com, vinodh.gopal@...el.com, vkoul@...nel.org,
        wajdi.k.feghali@...el.com
Subject: Re: [PATCH v7 12/14] crypto: iaa - Add support for deflate-iaa
 compression algorithm

Hi Rex,

On Mon, 2023-07-17 at 10:12 +0800, Rex Zhang wrote:
> Hi, Tom,
> 

[snip]

> > +
> > +static int iaa_comp_acompress(struct acomp_req *req)
> > +{
> > +       struct iaa_compression_ctx *compression_ctx;
> > +       struct crypto_tfm *tfm = req->base.tfm;
> > +       dma_addr_t src_addr, dst_addr;
> > +       int nr_sgs, cpu, ret = 0;
> > +       struct iaa_wq *iaa_wq;
> > +       u32 compression_crc;
> > +       struct idxd_wq *wq;
> > +       struct device *dev;
> > +
> > +       compression_ctx = crypto_tfm_ctx(tfm);
> > +
> > +       if (!iaa_crypto_enabled) {
> > +               pr_debug("iaa_crypto disabled, not compressing\n");
> > +               return -ENODEV;
> > +       }
> > +
> > +       if (!req->src || !req->slen) {
> > +               pr_debug("invalid src, not compressing\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       cpu = get_cpu();
> > +       wq = wq_table_next_wq(cpu);
> > +       put_cpu();
> > +       if (!wq) {
> > +               pr_debug("no wq configured for cpu=%d\n", cpu);
> > +               return -ENODEV;
> > +       }
> > +
> > +       ret = iaa_wq_get(wq);
> > +       if (ret) {
> > +               pr_debug("no wq available for cpu=%d\n", cpu);
> > +               return -ENODEV;
> > +       }
> > +
> > +       iaa_wq = idxd_wq_get_private(wq);
> > +
> > +       if (!req->dst) {
> > +               gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
> > +               /* incompressible data will always be < 2 * slen */
> > +               req->dlen = 2 * req->slen;
> 2 * req->slen is an estimated size for the dst buffer. When slen is
> greater than 2048 bytes, dlen is greater than 4096 bytes (one page).

Right, so you're saying that because sgl_alloc() uses order-0 pages,
this could result in nr_sgs > 1.  We could also just change this to
sg_init_one(), like all the other callers.

> > +               req->dst = sgl_alloc(req->dlen, flags, NULL);
> > +               if (!req->dst) {
> > +                       ret = -ENOMEM;
> > +                       goto out;
> > +               }
> > +       }
> > +
> > +       dev = &wq->idxd->pdev->dev;
> > +
> > +       nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
> > +       if (nr_sgs <= 0 || nr_sgs > 1) {
> > +               dev_dbg(dev, "couldn't map src sg for iaa device %d,"
> > +                       " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
> > +                       iaa_wq->wq->id, ret);
> > +               ret = -EIO;
> > +               goto out;
> > +       }
> > +       src_addr = sg_dma_address(req->src);
> > +       dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
> > +               " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
> > +               req->src, req->slen, sg_dma_len(req->src));
> > +
> > +       nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
> > +       if (nr_sgs <= 0 || nr_sgs > 1) {
> When dlen is greater than 4096 bytes, nr_sgs may be greater than 1,
> but the actual output size may be less than 4096 bytes.
> In other words, the condition nr_sgs > 1 may reject a case that
> could have succeeded.

Currently all existing callers use sg_init_one(), so nr_sgs is never >
1.  But yes, I agree we should add code to handle nr_sgs > 1.

Thanks,

Tom

