Message-ID: <4B13F075.2090806@redhat.com>
Date:	Mon, 30 Nov 2009 14:19:01 -0200
From:	Mauro Carvalho Chehab <mchehab@...hat.com>
To:	Andy Walls <awalls@...ix.net>
CC:	Ray Lee <ray-lk@...rabbit.org>,
	Maxim Levitsky <maximlevitsky@...il.com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Jon Smirl <jonsmirl@...il.com>,
	Krzysztof Halasa <khc@...waw.pl>,
	Christoph Bartelmus <lirc@...telmus.de>,
	dmitry.torokhov@...il.com, j@...nau.net, jarod@...hat.com,
	jarod@...sonet.com, linux-input@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
	stefanr@...6.in-berlin.de, superm1@...ntu.com
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls wrote:

> Nonetheless I'd still rather debug a problem with a dead process in
> userspace than an oops or panic (not that an end user cares) and avoid
> the risk of filesystem corruption.

Considering my experience adding in-kernel support for IR devices, I'd say
that, in general, a driver does the following:

1) it polls the hardware or waits for IRQs signalling an IR event. On raw IR
   devices, the value read means a mark or a space;
2) it measures the time between pulses and the pulse/space durations;
3) it runs protocol decoding logic that, based on the pulse/space durations,
   produces a scancode;
4) it does a table lookup to convert the scancode into the corresponding keycode;
5) it generates an evdev event.

Steps 2 and 3 happen only when the device doesn't have hardware decoding capabilities.
For devices with hardware decoding, the polling/IRQ process already retrieves a scancode.
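
Just to make that concrete, here is a rough sketch of what is left to do once
a scancode is available (whether decoded in software or by the hardware).
The struct and all the names below are made up for illustration, not taken
from any existing driver:

#include <linux/input.h>

struct my_ir_dev {
	struct input_dev *input;
	u16 keytable[256];	/* scancode -> keycode, filled at probe time */
};

/* Steps (4) and (5): map the scancode and emit the evdev event */
static void my_ir_report(struct my_ir_dev *ir, u8 scancode)
{
	u16 keycode = ir->keytable[scancode];		/* step 4 */

	if (keycode == KEY_RESERVED)
		return;					/* unmapped key */

	input_report_key(ir->input, keycode, 1);	/* step 5: press... */
	input_sync(ir->input);
	input_report_key(ir->input, keycode, 0);	/* ...and release */
	input_sync(ir->input);
}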

Based on my experience, I can say that, of the steps above, the one where
you're most likely to generate an OOPS is the first one, where you need
proper memory barriers/locking, for example to avoid unregistering the IR
device while you're in the middle of IRQ or poll handling. In the IRQ case,
you'll also need to take care not to sleep, since you're running in
interrupt context.
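
The usual way to avoid that race is simply to stop the IRQ/polling sources
before tearing down the data they touch. Something like this (again, the
names are invented, and the struct is assumed to carry the obvious
irq/poll_timer/input fields):

static void my_ir_remove(struct my_ir_dev *ir)
{
	/* Stop the event sources first: free_irq() and del_timer_sync()
	 * both wait for any handler that is currently running. */
	free_irq(ir->irq, ir);
	del_timer_sync(&ir->poll_timer);

	/* Only now is it safe to drop the evdev device... */
	input_unregister_device(ir->input);

	/* ...and to free the per-device state. */
	kfree(ir);
}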

If you're outputting raw pulse/space data to userspace (a lirc-like raw
interface), you'll still need to do steps (1) and (2) in the kernel, plus
logic similar to (5) to deliver the event to userspace.

So, the basic difference is that you won't run the decoder (3) nor do a table lookup (4).

The logic for (4) is trivial (a simple table lookup). If you make a mistake
there, the bug will likely show up at development time. Also, if you're not
able to write proper code to fetch a value from a table, you shouldn't be
trying to write a driver anyway.

The logic for (3) is as simple as identifying the lengths of the pulses and
the spaces. Depending on the length, it produces a zero or a one. Pure
integer math. The only risk in such logic is a division by zero. Apart from
that, this type of code shouldn't cause any OOPS or panic.
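
As an example, for a protocol that encodes the bit value in the space length
(NEC-style), the decoding helper can be as dumb as this; the timings and the
tolerance below are invented for illustration:

#define MY_PULSE_US		560	/* nominal mark length */
#define MY_BIT0_SPACE_US	560	/* space length meaning '0' */
#define MY_BIT1_SPACE_US	1690	/* space length meaning '1' */
#define MY_TOLERANCE_US		250

/* Returns 0 or 1 for a valid bit, -1 for anything out of spec */
static int my_decode_bit(unsigned int pulse_us, unsigned int space_us)
{
	if (pulse_us < MY_PULSE_US - MY_TOLERANCE_US ||
	    pulse_us > MY_PULSE_US + MY_TOLERANCE_US)
		return -1;	/* malformed mark: noise, header, ... */

	if (space_us > MY_BIT0_SPACE_US - MY_TOLERANCE_US &&
	    space_us < MY_BIT0_SPACE_US + MY_TOLERANCE_US)
		return 0;

	if (space_us > MY_BIT1_SPACE_US - MY_TOLERANCE_US &&
	    space_us < MY_BIT1_SPACE_US + MY_TOLERANCE_US)
		return 1;

	return -1;		/* repeat code, trailer, noise, ... */
}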

Also, for (3) and (4), it is very easy to write the code first in userspace
(if you feel more comfortable doing so) and, after enough testing, move the
same code into kernelspace.
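
For instance, since the toy my_decode_bit() above is pure C with no kernel
dependencies, a trivial userspace harness can feed it recorded timings (the
sample values below are invented):

#include <stdio.h>

#include "my_decode_bit.c"	/* hypothetical file holding the same
				 * decoder source as the kernel sketch */

int main(void)
{
	/* {pulse, space} pairs in microseconds */
	static const unsigned int samples[][2] = {
		{ 560, 560 }, { 560, 1690 }, { 560, 1690 }, { 560, 560 },
	};
	unsigned int code = 0;
	size_t i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		int bit = my_decode_bit(samples[i][0], samples[i][1]);

		if (bit < 0) {
			fprintf(stderr, "sample %zu out of spec\n", i);
			continue;
		}
		code = (code << 1) | bit;
	}
	printf("decoded bits: 0x%x\n", code);
	return 0;
}

Once that behaves, the same decode function moves into the driver unchanged.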

Cheers,
Mauro.



