Date:	Mon, 6 Jul 2015 20:24:14 +0300
From:	Dmitry Kalinkin <dmitry.kalinkin@...il.com>
To:	Martyn Welch <martyn.welch@...com>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org,
	Manohar Vanga <manohar.vanga@...il.com>,
	Igor Alekseev <igor.alekseev@...p.ru>
Subject: Re: [PATCHv3 08/16] staging: vme_user: provide DMA functionality

On Mon, Jul 6, 2015 at 5:48 PM, Martyn Welch <martyn.welch@...com> wrote:
>
>
> On 06/07/15 14:50, Dmitry Kalinkin wrote:
>>
>> On Mon, Jul 6, 2015 at 4:22 PM, Martyn Welch <martyn.welch@...com> wrote:
>>>
>>>
>>> Sorry about the *really* late reply, loads of emails somehow missed my
>>> periodic search of the mailing list.
>>>
>>> I'm happy with the addition of DMA, just not sure whether it's worth
>>> adding
>>> an extra device file just to handle DMA. Could the user space application
>>> not just use the control device?
>>
>> That would require an additional ioctl field for DMA channel id in case we
>> want
>> to support both DMA channels on tsi148.
>>
>
> Or just dynamically allocate and free a resource for the DMA operation?
That seems too high-level.
Also notice how vme_user_dma_ioctl currently gets by without any locks. Acquiring
a resource per operation would introduce at least one.
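For illustration, keeping everything on the control device would mean the DMA
ioctl argument grows a channel selector, roughly like this (field names here are
only a sketch, not the actual layout from this patch series):

struct vme_dma_op {
	__u64 vme_addr;   /* address on the VME bus */
	__u64 buf_vaddr;  /* user-space buffer */
	__u32 count;      /* transfer length in bytes */
	__u32 aspace;     /* address space */
	__u32 cycle;      /* cycle type */
	__u32 dwidth;     /* data width */
	__u32 channel;    /* extra field: which of the two tsi148 DMA channels */
};

With one device file per DMA channel, the channel is implied by the minor number
instead and no such field is needed.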
>
>> It would make sense to preserve that device minor if Documentation/devices.txt
>> were any good. But it has only 4 slave and 4 master windows, whereas we would
>> want module parameters for vme_user to configure these allocations, up to
>> 8 slaves and 8 masters.
>>
>
> The vme_user module was originally envisaged as a mechanism to provide
> support for applications that had been written to use the original driver at
> vmelinux.org.
That part I never understood. vmelinux.org's cvs has a very dated driver
with very limited capabilities.

This one looks like a grandpa of the one we have (both tsi148 and universe):
ftp://ftp.prosoft.ru/pub/Hardware/Fastwel/CPx/CPC600/Software/Drivers/Linux/tsi148.tar.gz

There is also VME4L driver by MEN (tsi148 only):
https://www.men.de/software/13z014-90/

Some other driver:
http://www.awa.tohoku.ac.jp/~sanshiro/kinoko-e/vmedrv/

Some other driver (universe only):
https://github.com/mgmarino/VMELinux/blob/master/driver/universe.c

Driver by CERN (dynamic window allocation):
https://github.com/cota/ht-drivers/tree/master/vmebridge/driver

The point is: there are many drivers of varying quality. All of them include
some sort of userspace interface and that, as you mention below, seems to work
well for many cases. All I'm trying to do is to make vme_user at least as useful
as the drivers above, without looking back at vmelinux.
> Some functionality was dropped as it was not good practice
> (such as receiving VME interrupts in user space, it's not really doable if
> the slave card is Release On Register Access rather than Release on
> Acknowledge),
I didn't know about RORA. I wonder how different this is from the
PCI bus case.
> so the interface became more of a debug mechanism for me.
> Others have clearly found it provides enough for them to allow drivers to be
> written in user space.
>
> I was thinking that the opposite might be better: no windows mapped at
> module load; windows could be allocated and mapped using the control device.
> This would ensure that unused resources were still available for kernel
> based drivers and would mean the driver wouldn't be pre-allocating a bunch
> of fairly substantially sized slave window buffers (the buffers could also
> be allocated to match the size of the slave window requested). What do you
> think?
I'm not a VME expert, but it seems that VME windows are a quite limited resource
no matter how you allocate them. In theory we could put up to 32 different boards
in a single crate, so there won't be enough windows for each driver to allocate
its own. That said, there is no way around this when putting together a really
heterogeneous VME system. To overcome such a problem, one could develop a
different kernel API that would not hand windows to the drivers, but would handle
reads and writes by reconfiguring windows on the fly, which in turn would
introduce more latency. Those who need such an API are welcome to develop it :)
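Just to sketch the idea (untested, and assuming the in-kernel API from
drivers/vme/vme.h: vme_master_request/vme_master_set/vme_master_read), such a
scheme would serialise all transfers through one shared master window and
reprogram it for every access:

/* Sketch only: one shared master window, retargeted per transfer.
 * shared_win would come from vme_master_request() at probe time. */
static DEFINE_MUTEX(win_lock);
static struct vme_resource *shared_win;

static ssize_t vme_shared_read(void *buf, size_t count,
			       unsigned long long vme_base,
			       u32 aspace, u32 cycle, u32 dwidth)
{
	ssize_t ret;

	mutex_lock(&win_lock);
	/* Point the window at the requested VME range... */
	ret = vme_master_set(shared_win, 1, vme_base, count,
			     aspace, cycle, dwidth);
	if (!ret)
		/* ...and do the transfer through it. */
		ret = vme_master_read(shared_win, buf, count, 0);
	mutex_unlock(&win_lock);
	return ret;
}

The lock and the window reprogramming on every access are exactly where the
extra latency would come from, and bridge windows typically have alignment and
granularity constraints on vme_base and size as well, so it is not a small
project.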

As for dynamic vme_user device allocation, I don't see the point of it.
The only existing kernel VME driver allocates its windows in advance; the user
just has to make sure to leave one window free if she wants to use that driver
alongside vme_user. A module parameter for the window count will be dynamic
enough to handle that.
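Something along these lines would do (a sketch; master_max/slave_max are made-up
names for illustration, not existing vme_user parameters):

/* Hypothetical parameters, defaulting to the 4+4 split from
 * Documentation/devices.txt. */
static unsigned int master_max = 4;
static unsigned int slave_max = 4;
module_param(master_max, uint, 0444);
MODULE_PARM_DESC(master_max, "Number of master windows claimed by vme_user (up to 8)");
module_param(slave_max, uint, 0444);
MODULE_PARM_DESC(slave_max, "Number of slave windows claimed by vme_user (up to 8)");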

Cheers,
Dmitry
