Message-Id: <3371e349eded0cec5cd00bc5145ce10633369686.1292604060.git.lasaine@lvk.cs.msu.su>
Date:	Fri, 17 Dec 2010 22:54:27 +0300
From:	Alexander Gordeev <lasaine@....cs.msu.su>
To:	linux-kernel@...r.kernel.org
Cc:	"Nikita V\. Youshchenko" <yoush@...msu.su>,
	linuxpps@...enneenne.com, Rodolfo Giometti <giometti@...eenne.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Gordeev <lasaine@....cs.msu.su>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCHv6 03/16] pps: fix race in PPS_FETCH handler

There was a race in the PPS_FETCH ioctl handler when several processes
tried to obtain PPS data simultaneously using a blocking PPS_FETCH.
They all spend most of their time sleeping in the system call.
With the old approach, when the first process waiting on the pps queue
is woken up, it immediately makes a new system call and zeroes pps->go,
so the other processes continue to sleep and miss the event. This is a
clear race condition caused by the shared 'go' variable.
With the new approach, pps->last_ev holds a counter that is incremented
on each PPS event. The PPS_FETCH ioctl handler saves the current value
to a local variable at the very beginning, so it can safely detect a
new event by simply comparing the two values.
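
The same idiom can be sketched outside the kernel in plain C with POSIX
threads (a sketch only: the names ev_signal()/ev_wait() are made up,
and the kernel itself uses wait queues, not condition variables). Every
waiter snapshots the counter before blocking and sleeps until the
counter differs from its snapshot, so a waiter consuming one event can
no longer hide it from the others:

/* Sketch of the event-counter pattern using POSIX threads.
 * All names here are hypothetical, for illustration only. */
#include <pthread.h>

static pthread_mutex_t ev_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ev_cond = PTHREAD_COND_INITIALIZER;
static unsigned int ev_count;		/* analogous to pps->last_ev */

/* Producer side, analogous to pps_event(). */
void ev_signal(void)
{
	pthread_mutex_lock(&ev_lock);
	ev_count++;		/* unsigned wraparound is harmless: only
				 * (in)equality is ever tested */
	pthread_cond_broadcast(&ev_cond);	/* wake *all* waiters */
	pthread_mutex_unlock(&ev_lock);
}

/* Consumer side, analogous to the PPS_FETCH handler. */
void ev_wait(void)
{
	unsigned int ev;

	pthread_mutex_lock(&ev_lock);
	ev = ev_count;			/* like "ev = pps->last_ev" */
	while (ev == ev_count)		/* a wakeup taken by another
					 * waiter is not lost */
		pthread_cond_wait(&ev_cond, &ev_lock);
	pthread_mutex_unlock(&ev_lock);
}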

Acked-by: Rodolfo Giometti <giometti@...ux.it>
Signed-off-by: Alexander Gordeev <lasaine@....cs.msu.su>
---
 drivers/pps/kapi.c         |    4 ++--
 drivers/pps/pps.c          |   10 +++++++---
 include/linux/pps_kernel.h |    2 +-
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c
index 55f3961..3f89f5eb 100644
--- a/drivers/pps/kapi.c
+++ b/drivers/pps/kapi.c
@@ -326,8 +326,8 @@ void pps_event(int source, struct pps_ktime *ts, int event, void *data)
 
 	/* Wake up if captured something */
 	if (captured) {
-		pps->go = ~0;
-		wake_up_interruptible(&pps->queue);
+		pps->last_ev++;
+		wake_up_interruptible_all(&pps->queue);
 
 		kill_fasync(&pps->async_queue, SIGIO, POLL_IN);
 	}
diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
index c76afb9..dc7e66c 100644
--- a/drivers/pps/pps.c
+++ b/drivers/pps/pps.c
@@ -136,6 +136,7 @@ static long pps_cdev_ioctl(struct file *file,
 
 	case PPS_FETCH: {
 		struct pps_fdata fdata;
+		unsigned int ev;
 
 		pr_debug("PPS_FETCH: source %d\n", pps->id);
 
@@ -143,11 +144,12 @@ static long pps_cdev_ioctl(struct file *file,
 		if (err)
 			return -EFAULT;
 
-		pps->go = 0;
+		ev = pps->last_ev;
 
 		/* Manage the timeout */
 		if (fdata.timeout.flags & PPS_TIME_INVALID)
-			err = wait_event_interruptible(pps->queue, pps->go);
+			err = wait_event_interruptible(pps->queue,
+					ev != pps->last_ev);
 		else {
 			unsigned long ticks;
 
@@ -159,7 +161,9 @@ static long pps_cdev_ioctl(struct file *file,
 
 			if (ticks != 0) {
 				err = wait_event_interruptible_timeout(
-						pps->queue, pps->go, ticks);
+						pps->queue,
+						ev != pps->last_ev,
+						ticks);
 				if (err == 0)
 					return -ETIMEDOUT;
 			}
diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
index c930d11..c3aed4b 100644
--- a/include/linux/pps_kernel.h
+++ b/include/linux/pps_kernel.h
@@ -55,7 +55,7 @@ struct pps_device {
 	struct pps_ktime clear_tu;
 	int current_mode;			/* PPS mode at event time */
 
-	int go;					/* PPS event is arrived? */
+	unsigned int last_ev;			/* last PPS event id */
 	wait_queue_head_t queue;		/* PPS event queue */
 
 	unsigned int id;			/* PPS source unique ID */
-- 
1.7.2.3
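
For context, a minimal userspace consumer of this interface could look
like the sketch below (assuming a PPS source registered as /dev/pps0;
error handling trimmed). With the fix above, any number of such
processes may block in PPS_FETCH concurrently and each one sees every
pulse:

/* Blocking PPS_FETCH consumer; the /dev/pps0 path is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/pps.h>

int main(void)
{
	struct pps_fdata fdata;
	int fd = open("/dev/pps0", O_RDONLY);

	if (fd < 0)
		return 1;
	for (;;) {
		memset(&fdata, 0, sizeof(fdata));
		fdata.timeout.flags = PPS_TIME_INVALID;	/* wait forever */
		if (ioctl(fd, PPS_FETCH, &fdata) < 0)	/* sleeps here */
			break;
		printf("assert event #%u at %lld.%09d\n",
		       fdata.info.assert_sequence,
		       (long long)fdata.info.assert_tu.sec,
		       fdata.info.assert_tu.nsec);
	}
	return 0;
}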

