Message-ID: <28f2fcbc0911240924r708202cdx8bc7b465d473f283@mail.gmail.com>
Date: Tue, 24 Nov 2009 09:24:26 -0800
From: Jason Garrett-Glaser <darkshikari@...il.com>
To: Nick Piggin <npiggin@...e.de>
Cc: Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: newidle balancing in NUMA domain?
> Quite a few being one test case, and on a program with a horrible
> parallelism design (rapid heavy weight forks to distribute small
> units of work).
> If x264 is declared dainbramaged, that's fine with me too.
We did multiple benchmarks using a thread pool and it did not help.
If you want to declare our app "braindamaged", feel free, but pooling
threads to avoid re-creation gave no benefit whatsoever. If you think
the parallelism methodology is wrong as a whole, you're basically
saying that Linux shouldn't be used for video compression, because
this is the exact same threading model used by almost every single
video encoder ever made. There are a few that use slice-based
threading, but those are actually even worse from your perspective,
because slice-based threading spawns multiple threads PER FRAME
instead of one per frame.
Because of the inter-frame dependencies in video coding it is
impossible to efficiently get a granularity of more than one thread
per frame. Pooling threads doesn't change the fact that you are
conceptually creating a thread for each frame--it just eliminates the
pthread_create call. In theory you could use one thread per group of
frames, but that is completely unrealistic for real-time encoding
(e.g. streaming), requires a catastrophically large amount of memory,
makes it impossible to track the bit buffer, and causes all sorts of
other problems.
Jason Garrett-Glaser
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/