Message-ID: <20090122161132.GA27250@redhat.com>
Date: Thu, 22 Jan 2009 17:11:32 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: Ingo Molnar <mingo@...e.hu>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] workqueue: don't alloc_percpu for single workqueue
On 01/22, Lai Jiangshan wrote:
>
> Oleg Nesterov wrote:
> > On 01/21, Lai Jiangshan wrote:
> >> @@ -906,6 +907,13 @@ void destroy_workqueue(struct workqueue_struct *wq)
> >> const struct cpumask *cpu_map = wq_cpu_map(wq);
> >> int cpu;
> >>
> >> + if (is_wq_single_threaded(wq)) {
> >> + cleanup_workqueue_thread(wq->cpu_wq);
> >> + kfree(wq->cpu_wq);
> >> + kfree(wq);
> >> + return;
> >> + }
> >
> > again, not sure I understand why this change is needed. Afaics we
> > only need to use kfree(wq->cpu_wq) instead of free_percpu() if
> > it is single-threaded.
> >
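To illustrate that alternative: a minimal sketch of destroy_workqueue() where
only the final free differs for the single-threaded case. This is reconstructed
from memory of the 2.6.29-era kernel/workqueue.c and assumes the rest of the
patch already routes cpu_wq accesses through a helper (shown here as
wq_per_cpu()) that knows about the kmalloc'd single-threaded case; names and
details are approximate, not the actual patch.

void destroy_workqueue(struct workqueue_struct *wq)
{
	const struct cpumask *cpu_map = wq_cpu_map(wq);
	int cpu;

	/* generic teardown, shared with the multi-threaded case */
	cpu_maps_update_begin();
	spin_lock(&workqueue_lock);
	list_del(&wq->list);
	spin_unlock(&workqueue_lock);

	for_each_cpu_mask_nr(cpu, *cpu_map)
		cleanup_workqueue_thread(wq_per_cpu(wq, cpu));
	cpu_maps_update_done();

	/* the only single-threaded difference: how cpu_wq was allocated */
	if (is_wq_single_threaded(wq))
		kfree(wq->cpu_wq);
	else
		free_percpu(wq->cpu_wq);
	kfree(wq);
}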
>
> I think this change is needed.
> In the single-threaded case, we don't need
> 1) cpu_maps_update_begin() --> it requires cpu_add_remove_lock
> 2) to remove the workqueue from the list (we did not insert it)
>
> It is true that nothing bad happens when we do these things for a
> single-threaded workqueue. But I think destroying should not do
> more than creating did.
I disagree.
First, this path is rare and not time-critical; it is better to save
a couple of bytes of .text.
But mostly I dislike the fact that we add another special case
for the single-threaded wqs which is not strictly needed.
Following your logic, we can also change flush_workqueue(); it
doesn't need for_each_cpu_mask_nr() when single-threaded.
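For comparison, the generic flush path already copes with single-threaded
workqueues, because wq_cpu_map() returns a one-CPU mask for them and the loop
degenerates to a single iteration. A rough sketch, again from memory of that
era's kernel/workqueue.c and with approximate names (same hypothetical
wq_per_cpu() accessor as above), not the exact code:

void flush_workqueue(struct workqueue_struct *wq)
{
	const struct cpumask *cpu_map = wq_cpu_map(wq);
	int cpu;

	might_sleep();
	/* for a single-threaded wq, cpu_map holds only singlethread_cpu */
	for_each_cpu_mask_nr(cpu, *cpu_map)
		flush_cpu_workqueue(wq_per_cpu(wq, cpu));
}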
That said, I agree this is a matter of taste; I won't persist.
Oleg.