Date:	Mon, 08 Dec 2014 17:38:57 +0100
From:	Arnd Bergmann <arnd@...db.de>
To:	Arend van Spriel <arend@...adcom.com>
Cc:	linux-arm-kernel@...ts.infradead.org,
	Hante Meuleman <meuleman@...adcom.com>,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	linux-wireless <linux-wireless@...r.kernel.org>,
	brcm80211-dev-list <brcm80211-dev-list@...adcom.com>,
	Will Deacon <will.deacon@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	David Miller <davem@...emloft.net>,
	Marek Szyprowski <m.szyprowski@...sung.com>, hauke@...ke-m.de
Subject: Re: using DMA-API on ARM

On Monday 08 December 2014 17:22:44 Arend van Spriel wrote:
> >> The log: first the ring allocation info is printed. Starting at
> >> 16.124847, rings 2, 3 and 4 are the rings used for device-to-host
> >> transfers. In this log the failure is on a read of ring 3, which has
> >> 1024 entries of 16 bytes each. The next thing printed is the kernel
> >> page tables, then some OpenWRT info and the logging of part of the
> >> connection setup. At 1780.130752 the logging of the failure starts:
> >> the sequence number, which is modulo 253 while the ring size is
> >> 1024, matches an "old" entry (read 40, expected 52). Then the
> >> different pointers are printed, followed by the kernel page table.
> >> The code then does a cache invalidate on the dma_handle, and on the
> >> next read the sequence number is correct.
> >
> > How do you invalidate the cache? A dma_handle is of type dma_addr_t
> > and we don't define an operation for that, nor does it make sense
> > on an allocation from dma_alloc_coherent(). What happens if you
> > take out the invalidate?
> 
dma_sync_single_for_cpu(, DMA_FROM_DEVICE), which ends up invalidating 
the cache (or at least that is our suspicion).

I'm not sure about that:

static void arm_dma_sync_single_for_cpu(struct device *dev,
                dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
        /* offset of the buffer within its page */
        unsigned int offset = handle & (PAGE_SIZE - 1);
        /* translate the DMA address back into a struct page */
        struct page *page = pfn_to_page(dma_to_pfn(dev, handle - offset));

        __dma_page_dev_to_cpu(page, offset, size, dir);
}

Assuming a noncoherent linear mapping (no IOMMU, no swiotlb, no dmabounce),
dma_to_pfn will return the correct pfn here, but pfn_to_page will return a
page pointer into the kernel linear mapping, which is not the same
as the pointer you get from __alloc_remap_buffer(). The pointer that
was returned from dma_alloc_coherent() is a) non-cacheable, and b) not the
same one that you flush here.
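
To make the aliasing concrete, here is a minimal, untested sketch (a
hypothetical helper, not brcmfmac code; it assumes <linux/dma-mapping.h>
plus the ARM-private dma_to_pfn()) showing the two virtual addresses
involved:

/* hypothetical illustration only, not actual driver code */
static void show_alias(struct device *dev, size_t size)
{
        dma_addr_t handle;
        void *vaddr, *linear;

        /* non-cacheable remapped mapping: what the driver reads through */
        vaddr = dma_alloc_coherent(dev, size, &handle, GFP_KERNEL);
        if (!vaddr)
                return;

        /* cacheable linear-map alias: what the sync above operates on */
        linear = page_address(pfn_to_page(dma_to_pfn(dev, handle)));

        /*
         * On a noncoherent ARM system, vaddr != linear, so invalidating
         * cache lines for 'linear' should not change what a read through
         * the uncached 'vaddr' returns.
         */
        dev_info(dev, "coherent %p, linear alias %p\n", vaddr, linear);

        dma_free_coherent(dev, size, vaddr, handle);
}

If those two pointers differ on your board, the invalidate is operating
on an alias the driver never reads through, so it should not be what
fixes the stale sequence number.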

	Arnd
