Message-ID: <20110104083637.GB8066@mtj.dyndns.org>
Date: Tue, 4 Jan 2011 09:36:37 +0100
From: Tejun Heo <tj@...nel.org>
To: Andy Walls <awalls@...metrocast.net>
Cc: linux-kernel@...r.kernel.org, linux-media@...r.kernel.org
Subject: Re: [PATCH 11/32] v4l/cx18: update workqueue usage
Hello,
On Mon, Jan 03, 2011 at 07:54:56PM -0500, Andy Walls wrote:
> 2. To keep work items handled by keventd/n from being delayed too
> long, as the deferred work in question can involve a fair bit of
> sleeping due to contention, the workload of the CX23418's MPEG
> encoding engine, and the number of CX23418 devices in the system.
>
> With all the sleeping that can happen, is the move to a system wq
> under cmwq going to have adverse effects on the processing of other
> work items in the system?
>
> I get the feeling it won't be a problem with cmwq, but I haven't paid
> enough attention to be sure.
It won't be a problem. system_wq now supports parallel execution of
multiple work items and manages concurrency automatically. Work items
can sleep as necessary without holding up other work items.
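As a minimal sketch (the function and symbol names below are made up
for illustration, not taken from the cx18 driver), a sleeping work
item queued on system_wq looks like this:

#include <linux/workqueue.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);	/* hypothetical shared resource */

static void demo_fn(struct work_struct *work)
{
	/*
	 * Sleeping here is fine: when this worker blocks, the cmwq
	 * pool wakes or creates another worker, so other queued work
	 * items on the same CPU keep executing.
	 */
	mutex_lock(&demo_lock);
	/* ... talk to the hardware ... */
	mutex_unlock(&demo_lock);
}

static DECLARE_WORK(demo_work, demo_fn);

static void demo_start(void)
{
	schedule_work(&demo_work);	/* queues demo_work on system_wq */
}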
...
> It is not unusual for scheduled TV recording software to start nearly
> simultaneous DTV TS, MPEG, and VBI or MPEG Index streams on multiple
> cards. So 3 CX23418 cards with 3 streams each. Let's nominally
> estimate the timing of the CX18_CPU_DE_SET_MDL commands per stream at
> the PAL frame rate of 25 Hz; or 1 CX18_CPU_DE_SET_MDL mailbox command
> per stream per 40 milliseconds.
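> (That works out to roughly 9 streams * 25 commands/sec = 225
> CX18_CPU_DE_SET_MDL mailbox commands per second system-wide.)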
IIUC, if they spend any significant amount of time executing, they'll
be doing so by waiting for events (mutex, IRQ...), right? If so,
there's nothing to worry about. If the work were going to burn a lot
of CPU cycles, we'd need a workqueue marked WQ_CPU_INTENSIVE, but I
don't think that's the case here.
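For completeness, creating such a queue is a one-liner with
alloc_workqueue(); the queue name and wrapper function here are
hypothetical:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_cpu_wq;	/* hypothetical */

static int demo_init(void)
{
	/*
	 * WQ_CPU_INTENSIVE work items don't count toward the per-CPU
	 * concurrency limit, so a CPU hog won't stall other work
	 * items queued on the same CPU.
	 */
	demo_cpu_wq = alloc_workqueue("demo_cpu", WQ_CPU_INTENSIVE, 0);
	if (!demo_cpu_wq)
		return -ENOMEM;
	return 0;
}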
Thank you.
--
tejun