Message-Id: <CCF15D66-9712-40DF-8B0F-6EB5CEEFCA20@gmail.com>
Date: Tue, 22 Nov 2011 14:50:32 -0500
From: Jean-Francois Dagenais <jeff.dagenais@...il.com>
To: "Hans J. Koch" <hjk@...sjkoch.de>, Matthew Wilcox <matthew@....cx>
Cc: "Michael S. Tsirkin" <mst@...hat.com>, Greg KH <gregkh@...e.de>,
tglx@...utronix.de, linux-pci@...r.kernel.org,
open list <linux-kernel@...r.kernel.org>
Subject: Re: extra large DMA buffer for PCI-E device under UIO
On Nov 22, 2011, at 13:52, Hans J. Koch wrote:
> On Tue, Nov 22, 2011 at 08:40:40PM +0200, Michael S. Tsirkin wrote:
>> On Tue, Nov 22, 2011 at 06:54:02PM +0100, Hans J. Koch wrote:
>>> On Tue, Nov 22, 2011 at 07:37:23PM +0200, Michael S. Tsirkin wrote:
>>> [...]
>>>>> Or am I better off with a UIO solution?
>>>>
>>>> You should probably write a proper kernel driver, not a UIO one.
>>>> your kernel driver would have to prevent the device fom DMA into memory
>>>> outside the allocated range, even if userspace is malicious.
>>>> That's why UIO is generally not recommended for PCI devices that do DMA.
>>>
>>> When UIO was designed, the main goal was the ability to handle interrupts
>>> from userspace. There was no requirement for DMA. In fact, in five years I
>>> didn't get one real world device on my desk that needed it. That doesn't
>>> mean there are no such devices. Adding DMA support to the UIO core was
>>> discussed several times but noone ever did it. Ideas are still welcome...
>>>
>>> If parts of the driver should be in userspace, you should really try
>>> to extend the UIO core instead of re-implementing UIO functionality in
>>> a "proper kernel driver".
>>>
>>> Thanks,
>>> Hans
>>
>> Right, I really meant put all of the driver in the kernel.
>> If parts are in userspace, and device can do DMA,
>> you are faced with the problem as userspace suddenly
>> can access arbitrary memory through the device.
>
> That's nothing UIO specific. You have the same problem with /dev/mem
> or graphic cards. If you're root, you can do lots of things that can
> compromise security or crash your system.
Exactly, and remember, this is a closed, embedded and controlled system, running a
"kiosk" application.
As a product supporter, if the end user decides to tamper with his unit and read/write
anywhere in the system, that would surely qualify as unsupported use of the system,
so my design need not account for it.
Since this FPGA PCIe device is the product's reason for being, it takes priority over
every other factor. I mean, as long as the system doesn't swap, we're happy.
Matthew Wilcox <matthew@....cx> wrote:
> Is it really key? If you supported, ohidon'tknow, 2MB pages, you'd
> need 64 entries in the FPGA to store the addresses of those 2MB pages,
> which doesn't sound like a huge burden.
This is an excellent idea, and thank you for bringing me back to earth. It is definitely
doable and would solve my 4MB cap problem. It would require work on
the FPGA side as well as mods to my uio driver to manage the memory and
override uio's page_fault implementation (if I am correct), but the rest of our code
base would be unaffected.
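To make sure I understand the suggestion, here is a rough sketch of what the kernel
side might look like: instead of one contiguous 128 MB buffer, allocate 64 chunks of
2 MB and write each chunk's bus address into one entry of the FPGA's address table.
This is untested pseudo-driver code; the register layout (TABLE_BASE) and struct
names are made up for illustration, and error unwinding is left to the caller.

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define CHUNK_SIZE	(2UL << 20)	/* 2 MB per chunk */
#define NUM_CHUNKS	64		/* 64 * 2 MB = 128 MB total */

struct fpga_dma {
	void		*virt[NUM_CHUNKS];
	dma_addr_t	bus[NUM_CHUNKS];
};

static int fpga_alloc_chunks(struct pci_dev *pdev, struct fpga_dma *d,
			     void __iomem *regs)
{
	int i;

	for (i = 0; i < NUM_CHUNKS; i++) {
		/* 2 MB allocations usually succeed where 128 MB fails */
		d->virt[i] = dma_alloc_coherent(&pdev->dev, CHUNK_SIZE,
						&d->bus[i], GFP_KERNEL);
		if (!d->virt[i])
			return -ENOMEM;	/* caller frees chunks 0..i-1 */

		/* hypothetical FPGA table entry, one per chunk */
		iowrite32(lower_32_bits(d->bus[i]),
			  regs + /* TABLE_BASE */ 0x100 + 8 * i);
	}
	return 0;
}
```

The uio fault handler would then map the right chunk by dividing the fault offset by
CHUNK_SIZE to pick the entry, which is presumably the page_fault override mentioned
above.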
What would be completely ideal for me, though, is cutting a chunk of physical RAM
out of the kernel's view, like those integrated graphics chips do.
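For what it's worth, one way to approximate what the integrated graphics chips do is
the memmap= boot parameter (see the kernel's kernel-parameters.txt), which marks a
physical range as reserved so the kernel never touches it. The address below is
purely illustrative; it would have to be a range that actually exists and is usable
on the platform:

```
# Reserve 128 MB starting at physical address 0x20000000 (illustrative).
# The driver can then ioremap()/remap this range and hand it to the device.
memmap=128M$0x20000000
```

One gotcha: the `$` often needs escaping in the bootloader config (e.g. `\$` under
GRUB), or the parameter is silently mangled.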
"Michael S. Tsirkin" <mst@...hat.com> wrote:
> You can use alloc_bootmem I guess.
That sounds like something I could use. Any idea how to do this elegantly? Meaning,
where is the earliest point at which a call to alloc_bootmem(128MB) will succeed?
How do I communicate the result cleanly to my PCI probe function so I can hand it to
UIO? Using plain old global exported symbols?
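In case it helps the discussion, here is the kind of thing I have in mind, as an
untested sketch: reserve the buffer with alloc_bootmem_pages() from an early init
hook (it has to run before mem_init() hands memory to the buddy allocator), and
publish the pointer through an exported global that the PCI probe picks up. The
function names fpga_dma_buf/fpga_reserve_dma are mine, not anything existing:

```c
#include <linux/bootmem.h>
#include <linux/export.h>

void *fpga_dma_buf;	/* picked up later by the PCI probe */
EXPORT_SYMBOL(fpga_dma_buf);

void __init fpga_reserve_dma(void)
{
	/* page-aligned, zeroed; must run before mem_init() */
	fpga_dma_buf = alloc_bootmem_pages(128UL << 20);
}
```

The probe would then use virt_to_phys(fpga_dma_buf) to program the device and to
fill in the UIO mem region, if I understand the API correctly.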
Would I encounter cache problems or the like? Or is snooping guaranteed on modern
Intel platforms (Atom E6XX)?
Super thanks again for all the help guys, making real progress here.
/jfd