Message-ID: <234ad71a39aa28e71c3fccd1f16fd1f3.squirrel@webmail.fl-eng.com>
Date:	Thu, 9 Sep 2010 21:38:39 -0400
From:	"steve spano" <steve@...eng.com>
To:	linux-kernel@...r.kernel.org
Subject: ### DMA_ALLOC_COHERENT question

Hi Folks,

I probably have a simple question and I hope someone can point me in the
right direction.

I have a 2.6.30.2 kernel running with a custom device driver for a
multi-channel sound card on PCI-Express. Interrupts, DMA, and everything
else are working fine.

We allocate a very small 64KB buffer for our DMA actions using
dma_alloc_coherent.
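
For reference, the allocation looks roughly like this (a sketch only;
the 64KB size matches what we do, but the function and variable names
here are illustrative, not our actual driver code):

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define SND_DMA_BUF_SIZE (64 * 1024)

static void *cpu_buf;          /* kernel virtual address for CPU access */
static dma_addr_t dma_handle;  /* bus address programmed into the card */

static int snd_alloc_dma(struct pci_dev *pdev)
{
	/* Coherent (non-cached / snooped) 64KB buffer shared with the card */
	cpu_buf = dma_alloc_coherent(&pdev->dev, SND_DMA_BUF_SIZE,
				     &dma_handle, GFP_KERNEL);
	if (!cpu_buf)
		return -ENOMEM;
	return 0;
}
```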

We tripped across an issue when testing with larger amounts of memory on
the motherboard. This is an x86 Atom motherboard.

At 256MB of RAM, dma_alloc_coherent gets us an address near the half-way
point, i.e. around the 128MB mark.

Then at 1GB, we again get an address near the half-way point (around 512MB).
Then at 2GB, again near the half-way point (around 1GB).

This causes a problem on the sound card, because to reach our tiny 64KB
DMA range sitting near the 1GB boundary we then have to map nearly all
2GB of address space through the PCI BARs.

It seems like we are doing something wrong here.

Ideally, we would get a non-cached block of RAM that we can DMA to and
that the processor can access. This block would ideally always sit near
the base of RAM (maybe right after the kernel?). That way we could open a
128MB window into system RAM from PCI-Express and always be sure of
reaching our DMA buffer.

Can someone provide some insight into how to do this?
Is there a different DMA allocation procedure we should use?
Should we just put an "unsigned char []" of 64KB in the driver? Can we
also make that non-cached?
Or do we have to hack the kernel allocation process?
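
One avenue we have been looking at, in case it helps frame the question:
shrinking the device's consistent DMA mask before allocating, so the
allocator is forced to return a low bus address. This is only a sketch
under assumptions; the 27-bit mask (first 128MB) is an illustrative value
chosen to match a 128MB BAR window, not something we have verified, and
on x86 a sub-32-bit mask may simply push the allocation into ZONE_DMA:

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int snd_setup_dma(struct pci_dev *pdev)
{
	int err;

	/* Tell the DMA layer this device can only address the first
	 * 2^27 bytes (128MB) for coherent allocations. The value 27
	 * is an assumption for illustration. */
	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(27));
	if (err)
		return err;

	/* Subsequent dma_alloc_coherent() calls should now return a
	 * bus address that fits within the first 128MB of RAM. */
	return 0;
}
```

Is this the intended mechanism for steering where the coherent buffer
lands, or is it an abuse of the mask?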

I'm sure there is something simple we can do.

Thanks again

Steve Spano
FLE

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
