Message-ID: <94d4b082-7b08-82e0-bb42-6ac36821ea61@arm.com>
Date:   Thu, 29 Jun 2023 18:20:58 +0100
From:   Robin Murphy <robin.murphy@....com>
To:     Christoph Hellwig <hch@....de>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Tomasz Figa <tfiga@...omium.org>,
        Ricardo Ribalda <ribalda@...omium.org>,
        Sergey Senozhatsky <senozhatsky@...gle.com>,
        Laurent Pinchart <laurent.pinchart@...asonboard.com>,
        iommu@...ts.linux-foundation.org
Cc:     linux-media@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-doc@...r.kernel.org
Subject: Re: [PATCH 3/6] dma-mapping: add a dma_alloc_noncontiguous API

[Archaeology ensues...]

On 2021-03-01 08:52, Christoph Hellwig wrote:
[...]
> +static struct sg_table *alloc_single_sgt(struct device *dev, size_t size,
> +		enum dma_data_direction dir, gfp_t gfp)
> +{
> +	struct sg_table *sgt;
> +	struct page *page;
> +
> +	sgt = kmalloc(sizeof(*sgt), gfp);
> +	if (!sgt)
> +		return NULL;
> +	if (sg_alloc_table(sgt, 1, gfp))
> +		goto out_free_sgt;
> +	page = __dma_alloc_pages(dev, size, &sgt->sgl->dma_address, dir, gfp);
> +	if (!page)
> +		goto out_free_table;
> +	sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
> +	sg_dma_len(sgt->sgl) = sgt->sgl->length;
> +	return sgt;
> +out_free_table:
> +	sg_free_table(sgt);
> +out_free_sgt:
> +	kfree(sgt);
> +	return NULL;
> +}
> +
> +struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
> +		enum dma_data_direction dir, gfp_t gfp, unsigned long attrs)
> +{
> +	const struct dma_map_ops *ops = get_dma_ops(dev);
> +	struct sg_table *sgt;
> +
> +	if (WARN_ON_ONCE(attrs & ~DMA_ATTR_ALLOC_SINGLE_PAGES))
> +		return NULL;
> +
> +	if (ops && ops->alloc_noncontiguous)
> +		sgt = ops->alloc_noncontiguous(dev, size, dir, gfp, attrs);
> +	else
> +		sgt = alloc_single_sgt(dev, size, dir, gfp);
> +
> +	if (sgt) {
> +		sgt->nents = 1;
> +		debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);

It turns out this is liable to trip up DMA_API_DEBUG_SG (potentially 
even in the alloc_single_sgt() case), since we've filled in sgt without 
paying attention to the device's segment boundary/size parameters.
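
For reference, the checks in question boil down to roughly the
following (paraphrasing dma-debug from memory rather than quoting it,
so names and messages are approximate):

	/* needs linux/dma-mapping.h and linux/scatterlist.h */
	unsigned int max_seg = dma_get_max_seg_size(dev);
	unsigned long boundary = dma_get_seg_boundary(dev);
	dma_addr_t start = sg_dma_address(sg);
	dma_addr_t end = start + sg_dma_len(sg) - 1;

	/* segment longer than the device claims to support? */
	if (sg_dma_len(sg) > max_seg)
		dev_err(dev, "sg segment too long: %u > %u\n",
			sg_dma_len(sg), max_seg);

	/* segment crossing the device's DMA boundary mask? */
	if ((start ^ end) & ~(u64)boundary)
		dev_err(dev, "sg segment crosses boundary mask 0x%lx\n",
			boundary);

Since alloc_single_sgt() sets sg_dma_len() to the whole PAGE_ALIGN(size),
any allocation bigger than the max segment size (64K by default, IIRC)
will already warn, never mind the boundary mask.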

Now, it would be entirely possible to make the allocators "properly"
partition the pages into multiple segments per those constraints (a
rough sketch of what I mean is below). However, given that there's no
actual dma_map_sg() operation involved, and AFAIR the intent here is
really only to describe a single DMA-contiguous buffer as pages rather
than to represent a true scatter-gather operation, I'm now wondering
whether it makes more sense to just make dma-debug a bit cleverer
instead. Any other opinions?
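
For reference, "properly" partitioning would look something like the
following (purely illustrative, the helper name is made up, and a real
version would also need to honour dma_get_seg_boundary() and handle
the per-page IOMMU case):

	/*
	 * Hypothetical helper: split a DMA-contiguous allocation of @size
	 * bytes, backed by @page and mapped at @dma_addr, into segments no
	 * longer than the device's maximum segment size.
	 * Needs linux/dma-mapping.h and linux/scatterlist.h.
	 */
	static int alloc_sgt_segments(struct device *dev, struct sg_table *sgt,
				      struct page *page, dma_addr_t dma_addr,
				      size_t size, gfp_t gfp)
	{
		size_t max_seg = dma_get_max_seg_size(dev);
		unsigned int nents = DIV_ROUND_UP(size, max_seg);
		struct scatterlist *sg;
		size_t offset = 0;
		int i;

		if (sg_alloc_table(sgt, nents, gfp))
			return -ENOMEM;

		for_each_sg(sgt->sgl, sg, nents, i) {
			size_t len = min(size - offset, max_seg);

			sg_set_page(sg, nth_page(page, offset >> PAGE_SHIFT),
				    len, offset_in_page(offset));
			sg_dma_address(sg) = dma_addr + offset;
			sg_dma_len(sg) = len;
			offset += len;
		}
		return 0;
	}

but as I said, nothing is ever going to treat that table as real
scatter-gather output, so I'm not sure the extra complexity buys us
anything over teaching dma-debug that these entries didn't come from
dma_map_sg().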

Thanks,
Robin.

> +	}
> +	return sgt;
> +}
