Message-ID: <4C21C1C3.5050708@euromail.se>
Date:	Wed, 23 Jun 2010 10:11:47 +0200
From:	Henrik Rydberg <rydberg@...omail.se>
To:	Dmitry Torokhov <dmitry.torokhov@...il.com>
CC:	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
	Jiri Kosina <jkosina@...e.cz>,
	Mika Kuoppala <mika.kuoppala@...ia.com>,
	Benjamin Tissoires <tissoire@...a.fr>,
	Rafi Rubin <rafi@...s.upenn.edu>
Subject: Re: [PATCH] input: evdev: Use multi-reader buffer to save space (rev5)

Dmitry Torokhov wrote:
> Hi Henrik,
> 
> On Sun, Jun 20, 2010 at 08:48:54PM +0200, Henrik Rydberg wrote:
>> @@ -149,11 +148,29 @@ static int evdev_grab(struct evdev *evdev, struct evdev_client *client)
>>  
>>  static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
>>  {
>> +	struct input_dev *dev = evdev->handle.dev;
>> +	int head, tail;
>> +
>>  	if (evdev->grab != client)
>>  		return  -EINVAL;
>>  
>> +	spin_lock_irq(&dev->event_lock);
>> +
>> +	head = client->head;
>> +	tail = client->tail;
>> +
>> +	rcu_read_lock();
>> +	list_for_each_entry_rcu(client, &evdev->client_list, node) {
>> +		client->head = head;
>> +		client->tail = tail;
>> +	}
>> +	rcu_read_unlock();
>> +
>>  	rcu_assign_pointer(evdev->grab, NULL);
>>  	synchronize_rcu();
> 
> You may not call synchronize_rcu() while holding spinlock and with
> interrupts off.

Thanks.
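For the record, one way out (a sketch only, based on the quoted rev5 patch; untested, and the list-walk locking would need checking against the rest of evdev) is to publish the NULL grab pointer while still under the lock, but delay the synchronize_rcu() until after the lock is dropped and interrupts are back on:

```c
static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
{
	struct input_dev *dev = evdev->handle.dev;
	struct evdev_client *c;
	int head, tail;

	if (evdev->grab != client)
		return -EINVAL;

	spin_lock_irq(&dev->event_lock);

	head = client->head;
	tail = client->tail;

	list_for_each_entry(c, &evdev->client_list, node) {
		c->head = head;
		c->tail = tail;
	}

	rcu_assign_pointer(evdev->grab, NULL);

	spin_unlock_irq(&dev->event_lock);

	/* synchronize_rcu() may sleep, so it must run only after the
	 * spinlock is released and interrupts are enabled again */
	synchronize_rcu();

	input_release_device(&evdev->handle);

	return 0;
}
```

Using a separate iterator `c` here also avoids clobbering `client`, which the quoted patch reuses as the loop cursor.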

>> +
>> +	spin_unlock_irq(&dev->event_lock);
>> +
>>  	input_release_device(&evdev->handle);
>>  
>>  	return 0;
>> @@ -162,6 +179,7 @@ static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
>>  static void evdev_attach_client(struct evdev *evdev,
>>  				struct evdev_client *client)
>>  {
>> +	client->head = client->tail = evdev->head;
>>  	spin_lock(&evdev->client_lock);
> 
> So what happens if we advance evdev->head here but before we have added
> the new client to the list? I guess we'll have to wait until the next
> event, and then the client will read both of them...

Right...
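A sketch of one way to close that window (assuming, as in the quoted patch, that evdev->head only advances under dev->event_lock; the lock nesting shown here is an assumption and would need checking against the rest of evdev):

```c
static void evdev_attach_client(struct evdev *evdev,
				struct evdev_client *client)
{
	struct input_dev *dev = evdev->handle.dev;

	spin_lock(&evdev->client_lock);
	spin_lock_irq(&dev->event_lock);
	/* snapshot and list insertion are now atomic w.r.t. the producer */
	client->head = client->tail = evdev->head;
	list_add_tail_rcu(&client->node, &evdev->client_list);
	spin_unlock_irq(&dev->event_lock);
	spin_unlock(&evdev->client_lock);
}
```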

> Overall I am starting getting concerned about proper isolation between
> clients. Right now, if one client stops reading events and another one
> issues grab then the first client will only get events that were
> accumulated before the grab took place. With the new shared buffer the
> first client may get "grabbed" events if it stops for long enough for
> the buffer to wrap around.

Having looked into it, the semantics of the grab ioctl have evidently been
discussed before, and I believe this points to another such issue. When grabbing
a device, do we guarantee that the device no longer sends events to other
clients, or do we guarantee that other clients can no longer read the device? If
the latter, clearing all client buffers in conjunction with a grab would be
appropriate, and would solve this issue.
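If it helps the discussion, here is a minimal user-space model of the shared buffer (all names — evdev_model, post_event, and so on — are hypothetical, not the kernel code) showing how per-client head/tail indices behave on wrap-around, and how the "clear buffers on grab" option amounts to snapping a client's tail to its head:

```c
#include <assert.h>

#define BUF_SIZE 8                 /* power of two, like the evdev buffer */
#define NEXT(i) (((i) + 1) & (BUF_SIZE - 1))

struct client {
	int head, tail;            /* this reader's window into the buffer */
};

struct evdev_model {
	int buffer[BUF_SIZE];
	int head;                  /* single producer position */
};

/* Producer: store one event and advance every client's head; a client
 * that has fallen a full buffer behind loses its oldest event. */
static void post_event(struct evdev_model *ev, struct client **clients,
		       int nclients, int value)
{
	ev->buffer[ev->head] = value;
	ev->head = NEXT(ev->head);
	for (int i = 0; i < nclients; i++) {
		clients[i]->head = ev->head;
		if (clients[i]->head == clients[i]->tail)  /* overrun */
			clients[i]->tail = NEXT(clients[i]->tail);
	}
}

/* Consumer: fetch one event; returns 0 when this client's queue is empty. */
static int fetch_event(struct evdev_model *ev, struct client *c, int *value)
{
	if (c->head == c->tail)
		return 0;
	*value = ev->buffer[c->tail];
	c->tail = NEXT(c->tail);
	return 1;
}

/* The "clear buffers on grab" option: discard everything this client
 * has not read yet by snapping its tail to its head. */
static void flush_client(struct client *c)
{
	c->tail = c->head;
}
```

With ten events posted into the eight-slot buffer, a non-reading client loses the oldest events — which is exactly how post-grab events could leak to a stalled reader — and flushing it at grab time empties its queue entirely.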

> Do we really save that much memory here? We normally do not have that
> many users connected to event devices at once...

Ok, let's scratch this. Although I think the idea of multi-reader buffers is
sound, it is obviously sufficiently incompatible with the current approach to
produce distastefully complex patches. I will return with a new set which only
fixes the buffer resize problem, and leaves the rest for later.

Thanks,
Henrik

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
