Message-ID: <fa296712-c816-0b58-46b6-8db74cfef038@quicinc.com>
Date:   Tue, 19 Sep 2023 14:26:50 -0700
From:   Wesley Cheng <quic_wcheng@...cinc.com>
To:     Hillf Danton <hdanton@...a.com>,
        Mathias Nyman <mathias.nyman@...ux.intel.com>
CC:     Greg KH <gregkh@...uxfoundation.org>,
        Christoph Hellwig <hch@....de>,
        LKML <linux-kernel@...r.kernel.org>,
        USB <linux-usb@...r.kernel.org>
Subject: Re: [PATCH v6 03/33] xhci: sideband: add initial api to register a
 sideband entity

Hi Hillf,

On 9/16/2023 2:02 AM, Hillf Danton wrote:
> On Fri, 15 Sep 2023 17:09:56 -0700 Wesley Cheng <quic_wcheng@...cinc.com>
>> +static int
>> +xhci_ring_to_sgtable(struct xhci_sideband *sb, struct xhci_ring *ring, struct device *dev)
>> +{
>> +	struct sg_table	*sgt;
>> +	struct xhci_segment *seg;
>> +	struct page **pages;
>> +	unsigned int n_pages;
>> +	size_t sz;
>> +	int i;
>> +
>> +	sz = ring->num_segs * TRB_SEGMENT_SIZE;
>> +	n_pages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
>> +	pages = kvmalloc_array(n_pages, sizeof(struct page *), GFP_KERNEL);
>> +	if (!pages)
>> +		return 0;
>> +
>> +	sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
>> +	if (!sgt) {
>> +		kvfree(pages);
>> +		return 0;
>> +	}
>> +
>> +	seg = ring->first_seg;
>> +
>> +	/*
>> +	 * Rings can potentially have multiple segments, create an array that
>> +	 * carries page references to allocated segments.  Utilize the
>> +	 * sg_alloc_table_from_pages() to create the sg table, and to ensure
>> +	 * that page links are created.
>> +	 */
>> +	for (i = 0; i < ring->num_segs; i++) {
>> +		pages[i] = vmalloc_to_page(seg->trbs);
>> +		seg = seg->next;
>> +	}
> 
> Given dma_pool_zalloc() in xhci_segment_alloc() and dma_alloc_coherent() in
> pool_alloc_page(), it is incorrect to get page from the cpu address returned
> by the dma alloc routine.

Thanks for the review.  That's true... at least on my particular platform
it happened to work, because the cpu addr returned from those dma calls
was going through the iommu_dma_alloc() --> iommu_dma_alloc_remap() path,
which backs the buffer with vmalloc pages.  However, not every
implementation will have an IOMMU involved, so vmalloc_to_page() on the
cpu address isn't a safe assumption.

I'll change the logic to the below, so that we fetch the pages based on
the IOVA/DMA address instead.

	for (i = 0; i < ring->num_segs; i++) {
		ret = dma_get_sgtable(dev, sgt, seg->trbs, seg->dma,
				      TRB_SEGMENT_SIZE);
		if (ret)
			goto err;

		pages[i] = sg_page(sgt->sgl);
		sg_free_table(sgt);
		seg = seg->next;
	}

Thanks
Wesley Cheng
