Message-ID: <20090429102734.GC2373@elte.hu>
Date: Wed, 29 Apr 2009 12:27:34 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Eric Dumazet <dada1@...mosbay.com>
Cc: linux kernel <linux-kernel@...r.kernel.org>,
Andi Kleen <andi@...stfloor.org>,
David Miller <davem@...emloft.net>, cl@...ux.com,
jesse.brandeburg@...el.com, netdev@...r.kernel.org,
haoki@...hat.com, mchan@...adcom.com, davidel@...ilserver.org
Subject: Re: [PATCH] poll: Avoid extra wakeups in select/poll
* Eric Dumazet <dada1@...mosbay.com> wrote:
> Ingo Molnar wrote:
> > * Eric Dumazet <dada1@...mosbay.com> wrote:
> >
> >> @@ -418,8 +429,16 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
> >>  				if (file) {
> >>  					f_op = file->f_op;
> >>  					mask = DEFAULT_POLLMASK;
> >> -					if (f_op && f_op->poll)
> >> +					if (f_op && f_op->poll) {
> >> +						if (wait) {
> >> +							wait->key = POLLEX_SET;
> >> +							if (in & bit)
> >> +								wait->key |= POLLIN_SET;
> >> +							if (out & bit)
> >> +								wait->key |= POLLOUT_SET;
> >> +						}
> >>  						mask = (*f_op->poll)(file, retval ? NULL : wait);
> >> +					}
> >>  					fput_light(file, fput_needed);
> >>  					if ((mask & POLLIN_SET) && (in & bit)) {
> >>  						res_in |= bit;
> >
> > Please factor this whole 'if (file)' branch out into a helper.
> > Typical indentation levels go from 1 to 3 tabs - 4 should be avoided
> > if possible and 5 is pretty excessive already. This goes to eight.
> >
>
> Thanks Ingo,
>
> Here is v3 of the patch, with your Acked-by included :)
>
> This is IMHO clearer, since the helper immediately follows the POLLIN_SET /
> POLLOUT_SET / POLLEX_SET defines.
>
> [PATCH] poll: Avoid extra wakeups in select/poll
>
> After the introduction of the keyed wakeups Davide Libenzi did for
> epoll, we can avoid spurious wakeups in the poll()/select() code too.
>
> For example, a typical use of poll()/select() is to wait for incoming
> network frames on many sockets. But TX completion for UDP/TCP
> frames calls sock_wfree(), which in turn wakes up the thread.
>
> Once woken, the thread does a full scan of all polled fds and
> may go back to sleep because nothing is actually available. If the
> number of fds is large, this causes significant load.
>
> This patch makes select()/poll() aware of keyed wakeups so that
> useless wakeups are avoided. This reduces the number of context
> switches by about 50% on some setups, as well as the work performed
> by softirq handlers.
>
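[ Aside for readers following the mechanism: the key check that
pollwake() gains below only filters anything when the waker passes an
event mask as the wake key. A minimal sketch of the waker side,
assuming the keyed wake_up_interruptible_sync_poll() helper from
Davide's wakeup series - the real socket call sites live in
net/core/sock.c and may differ in detail:

	/*
	 * Sketch only, not part of this patch: a socket write-space
	 * callback passes POLLOUT-type events as the wake key, so a
	 * select()/poll() that asked only for POLLIN on this socket
	 * is not woken by TX completions at all.
	 */
	static void sock_write_space_sketch(struct sock *sk)
	{
		read_lock(&sk->sk_callback_lock);
		if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
			wake_up_interruptible_sync_poll(sk->sk_sleep,
					POLLOUT | POLLWRNORM | POLLWRBAND);
		read_unlock(&sk->sk_callback_lock);
	}
]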
> Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
> Acked-by: David S. Miller <davem@...emloft.net>
> Acked-by: Andi Kleen <ak@...ux.intel.com>
> Acked-by: Ingo Molnar <mingo@...e.hu>
> ---
> fs/select.c | 40 ++++++++++++++++++++++++++++++++++++----
> include/linux/poll.h | 3 +++
> 2 files changed, 39 insertions(+), 4 deletions(-)
>
> diff --git a/fs/select.c b/fs/select.c
> index 0fe0e14..ba068ad 100644
> --- a/fs/select.c
> +++ b/fs/select.c
> @@ -168,7 +168,7 @@ static struct poll_table_entry *poll_get_entry(struct poll_wqueues *p)
>  	return table->entry++;
>  }
>
> -static int pollwake(wait_queue_t *wait, unsigned mode, int sync, void *key)
> +static int __pollwake(wait_queue_t *wait, unsigned mode, int sync, void *key)
>  {
>  	struct poll_wqueues *pwq = wait->private;
>  	DECLARE_WAITQUEUE(dummy_wait, pwq->polling_task);
> @@ -194,6 +194,16 @@ static int pollwake(wait_queue_t *wait, unsigned mode, int sync, void *key)
>  	return default_wake_function(&dummy_wait, mode, sync, key);
>  }
>
> +static int pollwake(wait_queue_t *wait, unsigned mode, int sync, void *key)
> +{
> +	struct poll_table_entry *entry;
> +
> +	entry = container_of(wait, struct poll_table_entry, wait);
> +	if (key && !((unsigned long)key & entry->key))
> +		return 0;
> +	return __pollwake(wait, mode, sync, key);
> +}
> +
>  /* Add a new entry */
>  static void __pollwait(struct file *filp, wait_queue_head_t *wait_address,
>  				poll_table *p)
> @@ -205,6 +215,7 @@ static void __pollwait(struct file *filp, wait_queue_head_t *wait_address,
>  	get_file(filp);
>  	entry->filp = filp;
>  	entry->wait_address = wait_address;
> +	entry->key = p->key;
>  	init_waitqueue_func_entry(&entry->wait, pollwake);
>  	entry->wait.private = pwq;
>  	add_wait_queue(wait_address, &entry->wait);
> @@ -362,6 +373,18 @@ get_max:
>  #define POLLOUT_SET (POLLWRBAND | POLLWRNORM | POLLOUT | POLLERR)
>  #define POLLEX_SET (POLLPRI)
>
> +static void wait_key_set(poll_table *wait, unsigned long in,
> +			 unsigned long out, unsigned long bit)
> +{
> +	if (wait) {
> +		wait->key = POLLEX_SET;
> +		if (in & bit)
> +			wait->key |= POLLIN_SET;
> +		if (out & bit)
> +			wait->key |= POLLOUT_SET;
> +	}
> +}
Should this helper be inline, perhaps?
> +
>  int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
>  {
>  	ktime_t expire, *to = NULL;
> @@ -418,20 +441,25 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
>  				if (file) {
>  					f_op = file->f_op;
>  					mask = DEFAULT_POLLMASK;
> -					if (f_op && f_op->poll)
> -						mask = (*f_op->poll)(file, retval ? NULL : wait);
> +					if (f_op && f_op->poll) {
> +						wait_key_set(wait, in, out, bit);
> +						mask = (*f_op->poll)(file, wait);
> +					}
>  					fput_light(file, fput_needed);
>  					if ((mask & POLLIN_SET) && (in & bit)) {
>  						res_in |= bit;
>  						retval++;
> +						wait = NULL;
>  					}
>  					if ((mask & POLLOUT_SET) && (out & bit)) {
>  						res_out |= bit;
>  						retval++;
> +						wait = NULL;
>  					}
>  					if ((mask & POLLEX_SET) && (ex & bit)) {
>  						res_ex |= bit;
>  						retval++;
> +						wait = NULL;
>  					}
>  				}
>  			}
Looks much nicer now! [ I'd still suggest factoring out the guts of
do_select(), as its nesting is excessive enough to hurt its
reviewability quite a bit - but your patch no longer makes the
situation any worse. ]
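[ Something like this hypothetical helper - name made up, untested
sketch only - would pull the poll call out of the deepest branch:

	static unsigned int select_poll_one(struct file *file, poll_table *wait,
					    unsigned long in, unsigned long out,
					    unsigned long bit)
	{
		const struct file_operations *f_op = file->f_op;
		unsigned int mask = DEFAULT_POLLMASK;

		/* set the wakeup key to the requested events before ->poll() */
		if (f_op && f_op->poll) {
			wait_key_set(wait, in, out, bit);
			mask = (*f_op->poll)(file, wait);
		}
		return mask;
	}

  leaving the 'if (file)' branch in do_select() with just:

	if (file) {
		mask = select_poll_one(file, wait, in, out, bit);
		fput_light(file, fput_needed);
		...
	}
]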
Even-More-Acked-by: Ingo Molnar <mingo@...e.hu>
Ingo