Message-ID: <ZZ/zR+GybLjR8Dcb@nixie71>
Date: Thu, 11 Jan 2024 07:55:19 -0600
From: Jeff LaBundy <jeff@...undy.com>
To: Javier Carrasco <javier.carrasco@...fvision.net>
Cc: Dmitry Torokhov <dmitry.torokhov@...il.com>,
	Rob Herring <robh+dt@...nel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
	Conor Dooley <conor+dt@...nel.org>,
	Henrik Rydberg <rydberg@...math.org>,
	Bastian Hecht <hechtb@...il.com>,
	Michael Riesch <michael.riesch@...fvision.net>,
	linux-kernel@...r.kernel.org, linux-input@...r.kernel.org,
	devicetree@...r.kernel.org
Subject: Re: [PATCH v6 2/4] Input: touch-overlay - Add touchscreen overlay
 handling

Hi Javier,

On Fri, Jan 05, 2024 at 08:21:25PM +0100, Javier Carrasco wrote:
> Hi Jeff,
> 
> On 30.12.23 21:29, Jeff LaBundy wrote:
> > Having reviewed this version in detail, it's clear how the implementation
> > imposes this restriction. However, it's not clear why we have to have this
> > restriction straight out of the gate; it also breaks the "square doughnut"
> > example we discussed in v5, where a button resides inside a touch surface
> > which is split into four discrete rectangles that report X/Y coordinates
> > relative to the same origin.
> > 
> > From my naive point of view, a driver should merely pass a contact's ID
> > (for HW that can track contacts), coordinates, and pressure. Your helper(s)
> > are then responsible for iterating over the list, determining the segment
> > in which the coordinates fall, and then reporting the event by way of
> > input_report_abs() or input_report_key() based on whether or not a keycode
> > is defined.
> > 
> > I think the problem with that approach is that touchscreen drivers only
> > report coordinates when the pressure is nonzero. The process of dropping
> > a contact, i.e. button release for some segments, happens inside input-mt
> > by virtue of the driver calling input_mt_sync_frame().
> > 
> > It makes sense now why you are duplicating the contact tracking to a degree
> > here. Therefore, it's starting to look more and more like the overlay segment
> > handling needs to move into the input-mt core, where much of the information
> > you need already exists.
> > 
> > If we look at input_mt_report_pointer_emulation(), the concept of button
> > release is already happening; all we really want to do here is gently
> > expand the core to understand that some ranges of coordinates are simply
> > quantized to a keycode with binary pressure (i.e. press/release).
> > 
> > In addition to removing duplicate code as well as the restriction of supporting
> > only one X/Y surface, moving overlay support into the input-mt core would
> > remove the need to modify each touchscreen driver one at a time with what
> > are largely the same nontrivial changes. If we think about it more, the
> > touchscreen controller itself is not changing, so the driver really shouldn't
> > have to change much either.
> > 
> > Stated another way, I think it's a better design pattern if we let drivers
> > continue to do their job of merely lobbing hardware state to the input
> > subsystem via input_mt_slot(), touchscreen_report_pos() and input_mt_sync_frame(),
> > then leave it to the input subsystem alone to iterate over the list and
> > determine whether some coordinates must be handled differently.
> > 
> > The main drawback to this approach is that the overlay buttons would need
> > to go back to being part of the touchscreen input device as in v1, as opposed
> > to giving the driver the flexibility of splitting the buttons and X/Y surfaces
> > into two separate input devices.
> > 
> > When we first discussed this with Peter, we agreed that splitting them into two
> > input devices grants the most flexibility, in case user space opts to inhibit
> > one but not the other, etc. However since the buttons and X/Y surfaces are all
> > part of the same physical substrate, it seems the chances of user space being
> > interested in one but not the other are low.
> > 
> > Furthermore, folding the buttons and X/Y surfaces back into the same input
> > device would remove the need for each touchscreen driver to preemptively
> > allocate a second input device, but then remove it later as in patch [4/4]
> > in case the helpers did not find any buttons.
> > 
> > What are your thoughts on evolving the approach in this way? It's obviously
> > another big change and carries some risk to the core, so I'm curious to hear
> > Dmitry's and others' thoughts as well. I appreciate that you've been iterating
> > on this for some time, and good is not the enemy of great; therefore, maybe
> > a compromise is to move forward with the current approach in support of the
> > hardware you have today, then work it into the input-mt core over time. But
> > it would be nice to avoid ripping up participating touchscreen drivers twice.
> > 
> > Thank you for your patience and continued effort. In the meantime, please note
> > some minor comments that are independent of this architectural decision.
> > 
> 
> Thanks again for your thorough reviews and proposals to improve the code.
> 
> I am basically open to any solution that improves the quality of the
> feature. If I get you right, moving everything to the input-mt core
> would hide the overlay stuff from the device drivers (which sounds good)
> and a bit of code could be simplified by using the existing infrastructure.

Yes, that is the idea.

> 
> On the other hand, adding this feature to the input-mt core right away
> increases the risk of breaking things that many users need. We have
> been using this feature in some prototypes since v1 without any issues
> so far, but it would be great if it could be tested under different
> circumstances (hardware, configurations, etc.) before it goes into the
> core, wouldn't it?

I agree with you. Thinking about this more, immediately introducing this
feature to the core is a relatively high risk that would be shared by all
users. I like your idea of introducing a preliminary version first before
making heavy-handed changes. That's the beauty of helper functions; they
only impact users who explicitly opt in.

> 
> I would also like to know what more experienced people think about it:
> whether we should go all out and add it to the input-mt core now or, as
> you also suggested, move forward with the current approach and let it
> "mature" first. The cost of that would be modifying device driver code
> twice, but I suppose that not many drivers will add this feature in
> the next few kernel releases... maybe you? It sounds like you might
> have a use case for it :)

I don't have an immediate use case, but I've been looking at this from
the perspective of a future customer of it. Maybe the right path forward
is as follows:

1. Stick with the same general architecture of v6 and its "limitations",
   which in practice are unlikely to be encountered. I imagine the overlay
   layout you have been using would be the most common use case.
2. Make the handful of small changes that have been suggested thus far.
3. Consider updating patch [4/4] to combine the touchscreen and buttons
   into the same input device as you had in v1. This sets a slightly simpler
   precedent for the first user of these helpers. If later these helpers
   do get absorbed into the core, thereby forcing a single input device,
   the st1232 would continue to appear the same to user space.

Does this seem reasonable?
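
Purely to illustrate point 3 (the helper name and the max_x/max_y/MAX_FINGERS/
segment_list placeholders below are made up, not taken from your series), the
driver would end up advertising the MT axes and the overlay keycodes on a
single device, roughly:

    /* one input device for both the touch surface and the overlay keys */
    input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_x, 0, 0);
    input_set_abs_params(input, ABS_MT_POSITION_Y, 0, max_y, 0, 0);

    error = input_mt_init_slots(input, MAX_FINGERS, INPUT_MT_DIRECT);
    if (error)
        return error;

    /*
     * Hypothetical helper: declare EV_KEY capabilities, via
     * input_set_capability(), for every overlay segment that carries
     * a "linux,code" property, on this same device.
     */
    touch_overlay_set_key_caps(input, &segment_list);

    return input_register_device(input);

so user space sees one device node either way, whether the segment handling
stays in the helpers or later moves into input-mt.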

> 
> >> +struct button {
> >> +	u32 key;
> >> +	bool pressed;
> >> +	int slot;
> >> +};
> >> +
> >> +struct touch_overlay_segment {
> >> +	struct list_head list;
> >> +	u32 x_origin;
> >> +	u32 y_origin;
> >> +	u32 x_size;
> >> +	u32 y_size;
> >> +	struct button *btn;
> > 
> > I think you can simply declare the struct itself here as opposed to a pointer to
> > one; this would avoid a second call to devm_kzalloc().
> > 
> 
> That was the other option I mentioned in my reply and I am fine with the
> modification you propose.
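
Thanks. Just to spell it out, the suggestion boils down to something like:

    struct touch_overlay_segment {
        struct list_head list;
        u32 x_origin;
        u32 y_origin;
        u32 x_size;
        u32 y_size;
        struct button btn;    /* embedded, so no second devm_kzalloc() */
    };

and a segment without a key could then simply be recognized by btn.key
staying at 0 (KEY_RESERVED), or however else you prefer to mark it.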
> 
> >> +	if (fwnode_property_present(segment_node, "linux,code")) {
> > 
> > Instead of walking the device tree twice by calling fwnode_property_present()
> > followed by fwnode_property_read_u32(), you can simply check whether the
> > latter returns -EINVAL, which indicates the optional property was absent.
> > 
> 
> Ack.
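
In case it helps, the pattern I had in mind looks roughly like this (sketch
only, not tested against your code):

    u32 key;
    int error;

    error = fwnode_property_read_u32(segment_node, "linux,code", &key);
    if (error == -EINVAL) {
        /* optional property absent: plain touch segment, no button */
        return 0;
    } else if (error) {
        /* property present but malformed */
        return error;
    }

    /* property present and valid: set up the button for this segment */

so the tree is only walked once per segment.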
> 
> >> +		segment->btn = devm_kzalloc(dev, sizeof(*segment->btn),
> >> +					    GFP_KERNEL);
> >> +		if (!segment->btn)
> >> +			return -ENOMEM;
> >> +
> 
> 
> >> +	fwnode_for_each_child_node(overlay, fw_segment) {
> >> +		segment = devm_kzalloc(dev, sizeof(*segment), GFP_KERNEL);
> >> +		if (!segment) {
> >> +			error = -ENOMEM;
> >> +			goto put_overlay;
> > 
> > For this and the below case where you exit the loop early in case of an
> > error, you must call fwnode_handle_put(fw_segment) manually. The reference
> > count is handled automatically only when the loop iterates and terminates
> > naturally.
> > 
> > Since nothing else happens between the loop and the 'put_overlay' label,
> > you can also replace the goto with a break and remove the label altogether.
> > 
> 
> Ack.
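
For the record, with both changes the loop would end up looking roughly like
this (sketch only):

    fwnode_for_each_child_node(overlay, fw_segment) {
        segment = devm_kzalloc(dev, sizeof(*segment), GFP_KERNEL);
        if (!segment) {
            /*
             * The loop only drops this reference when it advances
             * or terminates naturally, so drop it here ourselves.
             */
            fwnode_handle_put(fw_segment);
            error = -ENOMEM;
            break;
        }
        ...
    }

with the goto replaced by a break and the 'put_overlay' label dropped, since
nothing sits between the loop and that label anyway.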
> 
> > 
> > Kind regards,
> > Jeff LaBundy
> 
> 
> Thanks and best regards,
> Javier Carrasco

Kind regards,
Jeff LaBundy
