Message-Id: <1205969039.6437.44.camel@lappy>
Date: Thu, 20 Mar 2008 00:23:59 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: "Yu, Fenghua" <fenghua.yu@...el.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
"Luck, Tony" <tony.luck@...el.com>,
LKML <linux-kernel@...r.kernel.org>, linux-ia64@...r.kernel.org,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: RE: [2.6.25-rc5-mm1][regression] ia64: hackbench doesn't finish >12 hour
On Tue, 2008-03-18 at 17:14 -0700, Yu, Fenghua wrote:
> > This parameter means using all physical memory and about 1GB of swap
> > space. Could you expand the swap space?
>
> We can now reproduce the soft lockup issue and have root-caused it as
> well.
>
> The ptc.g patch uses the semaphore ptcg_sem to serialize multiple
> ptc.g instructions in ia64_global_tlb_purge(), which requires that the
> code path be safe to sleep in down(). But the code path cannot sleep
> during swap-out because it holds spinlocks (e.g. the anon_vma lock);
> sleeping there is what ultimately causes the soft lockup.
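>
> For illustration, the problematic pattern looks roughly like this (a
> minimal sketch with signatures abbreviated; only ptcg_sem,
> ia64_global_tlb_purge() and the anon_vma lock come from the code in
> question, the rest is illustrative):
>
> 	#include <linux/semaphore.h>
> 	#include <linux/spinlock.h>
>
> 	static struct semaphore ptcg_sem;	/* sema_init(&ptcg_sem, 1) */
>
> 	void ia64_global_tlb_purge(/* ... */)
> 	{
> 		down(&ptcg_sem);		/* may sleep */
> 		/* issue the serialized ptc.g broadcast */
> 		up(&ptcg_sem);
> 	}
>
> 	/* swap-out path, greatly simplified: */
> 	spin_lock(&anon_vma->lock);		/* atomic context from here */
> 	/* ... unmap the page, flush the TLB ... */
> 	ia64_global_tlb_purge(/* ... */);	/* down() sleeps while a
> 						 * spinlock is held: bug */
> 	spin_unlock(&anon_vma->lock);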
>
> Actually, we thought of this issue before releasing the ptcg patch and
> wrote some non-sleeping versions of it. But since we never saw the
> sleeping issue during our testing, we didn't release a non-sleeping
> ptcg patch. Replacing the ptcg patch in the -mm1 tree with one of our
> non-sleeping ptcg patches makes the issue go away.
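>
> For illustration, one obvious non-sleeping variant serializes with a
> spinlock instead of a semaphore (a sketch under that assumption only,
> not necessarily what our replacement patches do):
>
> 	#include <linux/spinlock.h>
>
> 	static DEFINE_SPINLOCK(ptcg_lock);
>
> 	void ia64_global_tlb_purge(/* ... */)
> 	{
> 		spin_lock(&ptcg_lock);	/* spins, never sleeps, so it is
> 					 * safe in atomic context */
> 		/* issue the serialized ptc.g broadcast */
> 		spin_unlock(&ptcg_lock);
> 	}
>
> The trade-off is that waiters now burn CPU until the ptc.g broadcast
> completes instead of yielding it.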
>
> Tony and I are working on releasing a final ptcg patch to solve the
> issue.
Which makes me wonder: why did you use a semaphore here at all? Looking
at the code, it's a straightforward mutex. And had you used a mutex,
lockdep would have warned about this.
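
For example, the conversion would look roughly like this (a minimal
sketch, not the actual patch; the point is that mutex_lock() contains a
might_sleep() check, so the sleep-in-atomic debug code complains as soon
as it is called under a spinlock):

	#include <linux/mutex.h>

	static DEFINE_MUTEX(ptcg_mutex);

	void ia64_global_tlb_purge(/* ... */)
	{
		mutex_lock(&ptcg_mutex);	/* might_sleep(): warns when
						 * called in atomic context,
						 * and lockdep tracks it */
		/* issue the serialized ptc.g broadcast */
		mutex_unlock(&ptcg_mutex);
	}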
There is hardly ever a good reason to use semaphores in new code; we're
trying very hard to get rid of them.
Hmm, then again, does ia64 have lockdep?