Message-ID: <alpine.DEB.2.00.1101151137590.6884@davide-lnx1>
Date: Sat, 15 Jan 2011 11:43:45 -0800 (PST)
From: Davide Libenzi <davidel@...ilserver.org>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Shawn Bohrer <shawn.bohrer@...il.com>
Subject: [patch 2/2] fix compiler warning and optimize the non-blocking
path
This patch adds a comment to ep_poll(), renames the goto labels so they
read more clearly, fixes a gcc warning that 'slack' may be used
uninitialized, and slightly optimizes the non-blocking path.
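For context (this is not part of the patch), a minimal userspace sketch of
the call that benefits from the optimization: epoll_wait() with a zero
timeout is the non-blocking case that now skips the wait-queue loop
entirely. The fd being watched here is purely illustrative.

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/epoll.h>

  int main(void)
  {
          struct epoll_event ev, ready[8];
          int epfd = epoll_create1(0);
          int watch_fd = STDIN_FILENO;   /* illustrative fd to monitor */
          int n;

          ev.events = EPOLLIN;
          ev.data.fd = watch_fd;
          epoll_ctl(epfd, EPOLL_CTL_ADD, watch_fd, &ev);

          /* timeout == 0: return immediately, even with no events ready */
          n = epoll_wait(epfd, ready, 8, 0);
          printf("non-blocking poll: %d event(s) ready\n", n);

          /* timeout < 0 would instead block until at least one event */

          close(epfd);
          return 0;
  }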
From: Shawn Bohrer <shawn.bohrer@...il.com>
Hinted-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Davide Libenzi <davidel@...ilserver.org>
- Davide
---
fs/eventpoll.c | 32 ++++++++++++++++++++++++++++----
1 file changed, 28 insertions(+), 4 deletions(-)
Index: linux-2.6.mod/fs/eventpoll.c
===================================================================
--- linux-2.6.mod.orig/fs/eventpoll.c 2011-01-15 10:50:16.000000000 -0800
+++ linux-2.6.mod/fs/eventpoll.c 2011-01-15 11:01:33.000000000 -0800
@@ -1127,27 +1127,50 @@
return ep_scan_ready_list(ep, ep_send_events_proc, &esed);
}
+/**
+ * ep_poll - Retrieves ready events and delivers them to the caller-supplied
+ * event buffer.
+ *
+ * @ep: Pointer to the eventpoll context.
+ * @events: Pointer to the userspace buffer where the ready events should be
+ * stored.
+ * @maxevents: Size (in terms of number of events) of the caller event buffer.
+ * @timeout: Maximum timeout for the ready events fetch operation, in
+ * milliseconds. If the @timeout is zero, the function will not block,
+ * while if the @timeout is less than zero, the function will block
+ * until at least one event has been retrieved (or an error
+ * occurred).
+ *
+ * Returns: The number of ready events which have been fetched, or an
+ * error code, in case of error.
+ */
static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
int maxevents, long timeout)
{
int res, eavail, timed_out = 0;
unsigned long flags;
- long slack;
+ long slack = 0;
wait_queue_t wait;
struct timespec end_time;
ktime_t expires, *to = NULL;
if (timeout > 0) {
ktime_get_ts(&end_time);
- timespec_add_ns(&end_time, (u64)timeout * NSEC_PER_MSEC);
+ timespec_add_ns(&end_time, (u64) timeout * NSEC_PER_MSEC);
slack = select_estimate_accuracy(&end_time);
to = &expires;
*to = timespec_to_ktime(end_time);
} else if (timeout == 0) {
+ /*
+ * Avoid the unnecessary trip to the wait queue loop if the
+ * caller specified a non-blocking operation.
+ */
timed_out = 1;
+ spin_lock_irqsave(&ep->lock, flags);
+ goto check_events;
}
-retry:
+fetch_events:
spin_lock_irqsave(&ep->lock, flags);
res = 0;
@@ -1184,6 +1207,7 @@
set_current_state(TASK_RUNNING);
}
+check_events:
/* Is it worth to try to dig for events ? */
eavail = ep_events_available(ep);
@@ -1196,7 +1220,7 @@
*/
if (!res && eavail &&
!(res = ep_send_events(ep, events, maxevents)) && !timed_out)
- goto retry;
+ goto fetch_events;
return res;
}
--