Open Source and information security mailing list archives
 
Message-ID: <20100623061939.GB11187@core.coreip.homeip.net>
Date:	Tue, 22 Jun 2010 23:19:39 -0700
From:	Dmitry Torokhov <dmitry.torokhov@...il.com>
To:	Henrik Rydberg <rydberg@...omail.se>
Cc:	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
	Jiri Kosina <jkosina@...e.cz>,
	Mika Kuoppala <mika.kuoppala@...ia.com>,
	Benjamin Tissoires <tissoire@...a.fr>,
	Rafi Rubin <rafi@...s.upenn.edu>
Subject: Re: [PATCH] input: evdev: Use multi-reader buffer to save space
 (rev5)

Hi Henrik,

On Sun, Jun 20, 2010 at 08:48:54PM +0200, Henrik Rydberg wrote:
> @@ -149,11 +148,29 @@ static int evdev_grab(struct evdev *evdev, struct evdev_client *client)
>  
>  static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
>  {
> +	struct input_dev *dev = evdev->handle.dev;
> +	int head, tail;
> +
>  	if (evdev->grab != client)
>  		return  -EINVAL;
>  
> +	spin_lock_irq(&dev->event_lock);
> +
> +	head = client->head;
> +	tail = client->tail;
> +
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(client, &evdev->client_list, node) {
> +		client->head = head;
> +		client->tail = tail;
> +	}
> +	rcu_read_unlock();
> +
>  	rcu_assign_pointer(evdev->grab, NULL);
>  	synchronize_rcu();

You may not call synchronize_rcu() while holding a spinlock and with
interrupts off.
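One way to restructure this (a sketch only, not the actual patch: the
head/tail copy loop is elided) would be to publish the new state under
the lock but defer the grace period until after it is released, since
synchronize_rcu() may sleep:

```c
	spin_lock_irq(&dev->event_lock);
	/* ... copy head/tail to all clients under the lock ... */
	rcu_assign_pointer(evdev->grab, NULL);
	spin_unlock_irq(&dev->event_lock);

	/* synchronize_rcu() blocks waiting for a grace period, so it
	 * must run with the spinlock dropped and interrupts enabled. */
	synchronize_rcu();
```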

> +
> +	spin_unlock_irq(&dev->event_lock);
> +
>  	input_release_device(&evdev->handle);
>  
>  	return 0;
> @@ -162,6 +179,7 @@ static int evdev_ungrab(struct evdev *evdev, struct evdev_client *client)
>  static void evdev_attach_client(struct evdev *evdev,
>  				struct evdev_client *client)
>  {
> +	client->head = client->tail = evdev->head;
>  	spin_lock(&evdev->client_lock);

So what happens if we advance evdev->head here, before the new client
has been added to the list? I guess we'll have to wait until the next
event, and then the client will read both of them...

Overall I am starting to get concerned about proper isolation between
clients. Right now, if one client stops reading events and another one
issues a grab, then the first client will only get events that were
accumulated before the grab took place. With the new shared buffer the
first client may get "grabbed" events if it stops for long enough for
the buffer to wrap around.

Do we really save that much memory here? We normally do not have that
many users connected to event devices at once...

Thanks.

-- 
Dmitry
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
