[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <4BFB11CF-1835-4AFA-BDC6-F42288A9A6F4@wilsonet.com>
Date: Wed, 25 Nov 2009 12:44:26 -0500
From: Jarod Wilson <jarod@...sonet.com>
To: Krzysztof Halasa <khc@...waw.pl>
Cc: Andy Walls <awalls@...ix.net>,
Christoph Bartelmus <lirc@...telmus.de>,
dmitry.torokhov@...il.com, j@...nau.net, jarod@...hat.com,
linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-media@...r.kernel.org, mchehab@...hat.com, superm1@...ntu.com
Subject: Re: [RFC] Should we create a raw input interface for IR's ? - Was: Re: [PATCH 1/3 v2] lirc core device driver infrastructure
On Nov 25, 2009, at 11:53 AM, Krzysztof Halasa wrote:
> Jarod Wilson <jarod@...sonet.com> writes:
...
>> Now, I'm all for "improving" things and integrating better with the
>> input subsystem, but what I don't really want to do is break
>> compatibility with the existing setups on thousands (and thousands?)
>> of MythTV boxes around the globe. The lirc userspace can be pretty
>> nimble. If we can come up with a shiny new way that raw IR can be
>> passed out through an input device, I'm pretty sure lirc userspace can
>> be adapted to handle that.
>
> Lirc can already handle input layer. Since both ways require userspace
> changes, why not do it the right way the first time? Most of the code
> is already written.
There's obviously still some debate as to what "the right way" is. :)
And the matter of someone having the time to write the rest of the code that would be needed.
>> If a new input-layer-based transmit interface is developed, we can
>> take advantage of that too. But there's already a very mature lirc
>> interface for doing all of this. So why not start with adding things
>> more or less as they exist right now and evolve the drivers into an
>> idealized form? Getting *something* into the kernel in the first place
>> is a huge step in that direction.
>
> What I see as potentially problematic is breaking compatibility multiple
> times.
Ah, but the approach I'd take to converting to in-kernel decoding[*] would be this:
1) bring drivers in in their current state
- users keep using lirc as they always have
2) add in-kernel decoding infra that feeds input layer
3) add option to use in-kernel decoding to existing lirc drivers
- users can keep using lirc as they always have
- users can optionally try out in-kernel decoding via a modparam
4) switch the default mode from lirc decode to kernel decode for each lirc driver
- modparam can be used to continue using lirc interface instead
5) assuming users aren't coming at us with pitchforks, because things don't actually work reliably with in-kernel decoding, deprecate the lirc interface in driver
6) remove lirc interface from driver, its now a pure input device
This would all be on a per-lirc-driver basis, and if/when all decoding could be reliably done in-kernel, and/or there was a way other than the lirc interface to pass raw IR signals out to userspace, the lirc interface could be removed entirely.
And we still need to consider IR transmitters as well. Those are handled quite well through the lirc interface, and I've not seen any concrete code (or even fully fleshed out ideas) on how IR transmit could be handled in this in-kernel decoding world.
[*] assuming, of course, that it was actually agreed upon that in-kernel decoding was the right way, the only way, all others will be shot on sight. ;)
--
Jarod Wilson
jarod@...sonet.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists