Date:	Thu, 31 Jul 2008 11:37:11 -0600
From:	Robert Hancock <hancockr@...w.ca>
To:	"V.Radhakrishnan" <rk@...-labs.com>
CC:	Sanka Piyaratna <cesanka@...oo.com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	linux-kernel@...r.kernel.org
Subject: Re: PCIe device driver question

V.Radhakrishnan wrote:
> Hi Robert,
> 
> Thanks for the reply. I was thinking that the MMIO and reserved memory
> being below 4 GB applied only to 32-bit environments, since I don't
> have much experience with 64-bit.
> 
> However, I had an IDENTICAL problem over 2 years ago. I had used
> posix_memalign() in user space to allocate buffers aligned to 4096-byte
> pages, allocated several additional page-aligned buffers, and used
> mlock() to lock all of these pages. I gathered the user-space addresses
> into the original pages as arrays of structures, passed this array into
> the kernel with an ioctl() call, used get_user_pages() to extract the
> struct page pointers, performed a kmap() to get the kernel virtual
> addresses, and then extracted the physical addresses and 'sent' these
> to the chip to perform DMA.
> 
> This situation is almost identical to what has been reported and hence
> my interest.
> 
> However, I had a PCI access problem. The DMA was just NOT happening on
> any machine which had highmem, i.e. over 896 MB of RAM.

My guess is that there was a bug in your DMA mapping code. I don't think 
kmap is what is normally used for this. With get_user_pages, one usually 
takes the returned page pointers, builds an SG list from them, and uses 
dma_map_sg to create a DMA mapping for the device.
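
Roughly, that path looks like the fragment below. This is just an
untested sketch for a 2.6.2x-era kernel, assuming a page-aligned user
buffer; the function name is made up, and error unwinding and page
release are left out:

#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Sketch only: pin a page-aligned user buffer and map it for DMA. */
static int sketch_map_user_buf(struct device *dev, unsigned long uaddr,
			       int nr_pages, struct page **pages,
			       struct scatterlist *sgl)
{
	int i, got, nents;

	/* Pin the user pages so they can't be swapped out or moved. */
	down_read(&current->mm->mmap_sem);
	got = get_user_pages(current, current->mm, uaddr, nr_pages,
			     1 /* write access */, 0 /* no force */,
			     pages, NULL);
	up_read(&current->mm->mmap_sem);
	if (got < nr_pages)
		return -EFAULT;	/* (should release whatever was pinned) */

	/* Build a scatterlist from the pinned pages... */
	sg_init_table(sgl, nr_pages);
	for (i = 0; i < nr_pages; i++)
		sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

	/* ...and let the DMA API produce bus addresses for the device. */
	nents = dma_map_sg(dev, sgl, nr_pages, DMA_FROM_DEVICE);
	if (nents == 0)
		return -EIO;

	/*
	 * Program sg_dma_address(&sgl[i]) / sg_dma_len(&sgl[i]) into the
	 * device, then dma_unmap_sg() and put_page() everything when the
	 * transfer completes.
	 */
	return nents;
}

The point is that dma_map_sg() produces addresses the device can
actually use, whereas kmap() plus a manual virtual-to-physical
conversion is only valid for directly-mapped lowmem; that could explain
DMA breaking exactly once the machine has memory above 896 MB.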

> 
> I "solved" the problem since I didn't have much time to do R&D, by
> booting with kernel command line option of mem=512M and the DMA went
> thru successfully.
> 
> This was on the linux-2.6.15 kernel at the time. Since the project was
> basically to test the DMA capability of the device, the actual address
> the data was DMA-ed to didn't matter, and I got paid for my work.
> However, this matter was always at the back of my mind.
> 
> What could have been the problem on 32-bit x86 PCI?
> 
> Thanks and regards
> 
> V. Radhakrishnan
> www.atr-labs.com
> 
> On Wed, 2008-07-30 at 13:21 -0600, Robert Hancock wrote:
>> V.Radhakrishnan wrote:
>>>>> I am testing this on an x86_64 machine with 4 GB of RAM. I am able
>>>>> to successfully DMA data into any memory (DMA) address >
>>>>> 0x0000_0001_0000_0000.
>>> How can you DMA "successfully" into an address which is > 4 GB when
>>> you have only 4 GB of RAM? Or am I missing something?
>> The MMIO and other reserved memory space at the top of the 32-bit memory 
>> space will cause the top part of memory to be relocated above 4GB.
> 
> 