Message-ID: <51FECDA6.5070001@linuxtoys.org>
Date: Sun, 04 Aug 2013 14:54:46 -0700
From: Bob Smith <bsmith@...uxtoys.org>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
CC: Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 001/001] CHAR DRIVERS: a simple device to give daemons
a /sys-like interface
Greg,
I've added some white space and snipped some text to make
the questions more visible.
Greg Kroah-Hartman wrote:
> No signed-off-by:, or body of text here that explains what this is and
> why it should be accepted.
D'oh! I'll fix this and add Joe's changes before resubmitting
the patch.
BTW: Several people have said they don't understand how to use
this device, so I'll add a simple example program to
Documentation/proxy.txt that illustrates the two most common
use cases.
>> +Proxy has some unique features that make it ideal for providing
>> +a /sys-like interface. It has no internal buffering. This means
>> +the daemon cannot write until a client program is listening.
>> +Both named pipes and pseudo-ttys have internal buffers.
>
> So what is wrong with internal buffers? Named pipes have been around
> for a long time, they should be able to be used much like this, right?
Buffers are great for streaming data but are unneeded for
configuration and status information. Neither sysfs nor procfs
has internal buffers, because none are needed.
In a way the problem is not the buffer itself but that a write
into a named pipe, for example, will succeed even if no one is
at the other end to receive the data. I think you'd want an
open or a write on a device driver to fail if the driver is not
there and ready for the request.
>> +Proxy will succeed on a write of zero bytes. A zero byte write
>> +gives the client an EOF. (snip)
>> +No other IPC mechanism can close one
>> +side of a device and leave the other side open.
> No "direct" IPC, but you can always emulate this just fine with existing
> IPC mechanisms.
OK.
>> +Proxy works well with select(), an important feature for daemons.
>> +In contrast, the FUSE filesystem has some issues with select() on
>> +the client side.
> What are those issues? Why not just fix them?
When I resubmit this patch, I'll remove all references to FUSE.
There is nothing wrong with FUSE; it is just not the right tool
for this job. I shouldn't have to build and load a full file
system when all I want is a handful of device nodes.
> Adding a new IPC function to the kernel should not be burried down in
> drivers/char/. We have 10+ different IPC mechanisms already, some
> simple, some more complex. Are you _sure_ none of the existing ones
> will not work for you?
I'm convinced this has the fewest lines of new code and the
smallest impact on the rest of the system, but I could be wrong.
The minimum feature set I want is to emulate for my user-space
device driver what the kernel offers through procfs and sysfs.
That is:
echo 1 > /proc/sys/net/ipv4/ip_forward # procfs
echo 75 > /dev/motors/left/speed # proxy dev
> Maybe a simple userspace library that wraps the
> existing mechanisms would be better (no kernel changes needed, portable
> to any kernel release, etc.)?
Yes, this is the traditional model for approaching problems like
the one I have. It would involve opening a Unix socket, defining
a protocol for that socket, and then writing bindings for that
protocol in several languages. Wow, that is a LOT of work.
Luckily for us, the procfs and sysfs authors have given us a much
better model to use: ASCII characters terminated by a newline. My
Raspberry Pi customers expect to control an LED with a command like
this:
echo 1 > /sys/class/gpio/gpio25/value
So it is entirely reasonable on their part to want to control a
stepper motor with a command like this:
echo 300 > /dev/robot/stepper0/count
Again, I hope you don't mind the text snip and extra white space.
thanks
Bob Smith