Message-ID: <20070601110058.GA83@tv-sign.ru>
Date: Fri, 1 Jun 2007 15:00:58 +0400
From: Oleg Nesterov <oleg@...sign.ru>
To: Mark Hounschell <markh@...pro.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>
Subject: Re: floppy.c soft lockup
I hope Ingo will correct me if I am wrong,
On 05/31, Mark Hounschell wrote:
>
> Oleg Nesterov wrote:
> >
> > So, the main question is: is it possible that one of the RT processes/threads pins itself
> > to some CPU and eats 100% CPU power?
> >
>
> The main process is pinned to processor 2, with all _non-kernel_ processes/threads forced over to processor 1.
> Any already-affinitized processes or kernel threads are left as is. Only userland stuff is moved. The main process
> is for sure _not_ relinquishing its processor (2) intentionally.
This means that a non-RT kernel thread bound to CPU 2 can't run; in particular,
events/2 can't. So the problem is not directly connected to floppy.c: no
flush_scheduled_work() (or schedule_on_each_cpu()) can succeed.
You can change irq/X/smp_affinity, but smp_apic_timer_interrupt() can still queue
a work_struct on CPU 2 (for example, mm/slab.c uses a per-cpu reap_work). Since
events/2 is blocked by the main RT thread, such a work_struct can't be executed,
so flush_scheduled_work() hangs.
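
To illustrate the pattern (a minimal sketch, simplified from what mm/slab.c
does; the function body, names, and the HZ interval here are illustrative,
not the exact slab code):

	#include <linux/workqueue.h>
	#include <linux/percpu.h>

	/* One delayed work item per CPU, like mm/slab.c's reap_work. */
	static DEFINE_PER_CPU(struct delayed_work, reap_work);

	static void cache_reap(struct work_struct *unused)
	{
		/* ... per-CPU housekeeping, runs in that CPU's events/N ... */

		/* Re-arm on the local CPU.  If events/2 never gets CPU
		 * time, this item stays pending there forever. */
		schedule_delayed_work(&__get_cpu_var(reap_work), HZ);
	}

	static void start_cpu_timer(int cpu)
	{
		INIT_DELAYED_WORK(&per_cpu(reap_work, cpu), cache_reap);
		/* Queued on that CPU's keventd, no matter what the irq
		 * affinity says. */
		schedule_delayed_work_on(cpu, &per_cpu(reap_work, cpu), HZ);
	}

Any flush_scheduled_work() caller then has to wait for events/2 to run the
pending item, which never happens while the RT thread monopolizes CPU 2.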
> All the I/O threads, floppy included, are running
> on the other processor (1). During this failure only 1 or 2 of the I/O threads are actually doing anything.
> I assume that whatever is going on in the kernel/floppy driver on behalf of the floppy thread is being done on processor 1?
> Processor 1 has lots of CPU time available.
Yes, but see above: flush_scheduled_work() needs cooperation from events/2,
which is bound to CPU 2.
If you changed irq/X/smp_affinity, the patch I sent should help, because
floppy_work can't then be scheduled on CPU 2. But I still don't think it is
right to run a 100% cpu-bound RT process.
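
For reference, the problematic setup boils down to something like the
hypothetical userspace snippet below (the CPU number and priority are
assumptions, not taken from Mark's setup):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		cpu_set_t set;
		struct sched_param sp = { .sched_priority = 50 };

		/* Pin ourselves to CPU 2 ... */
		CPU_ZERO(&set);
		CPU_SET(2, &set);
		if (sched_setaffinity(0, sizeof(set), &set))
			perror("sched_setaffinity");

		/* ... switch to SCHED_FIFO (needs root) ... */
		if (sched_setscheduler(0, SCHED_FIFO, &sp))
			perror("sched_setscheduler");

		/* ... and never sleep or yield.  events/2 runs at
		 * SCHED_NORMAL and is starved from this point on. */
		for (;;)
			;
	}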
Oleg.