Message-ID: <20130201021628.GC12678@yliu-dev.sh.intel.com>
Date: Fri, 1 Feb 2013 10:16:28 +0800
From: Yuanhan Liu <yuanhan.liu@...ux.intel.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Michel Lespinasse <walken@...gle.com>,
linux-kernel@...r.kernel.org, David Howells <dhowells@...hat.com>
Subject: Re: [PATCH] rwsem-spinlock: let rwsem write lock stealable
On Thu, Jan 31, 2013 at 10:18:18PM +0100, Ingo Molnar wrote:
>
> * Yuanhan Liu <yuanhan.liu@...ux.intel.com> wrote:
>
> > On Thu, Jan 31, 2013 at 02:12:28PM +0100, Ingo Molnar wrote:
> > >
> > > * Yuanhan Liu <yuanhan.liu@...ux.intel.com> wrote:
> > >
> > > > BTW, mind sharing a good test case for mmap_sem?
> > >
> > > this one was write-hitting on mmap_sem pretty hard, last I
> > > checked:
> > >
> > > http://people.redhat.com/mingo/threaded-mmap-stresstest/
> >
> > Thanks!
> >
> > Is there any pass condition? I tested it for a while; at least I
> > found no oops or any noise in the dmesg output. Is that OK?
>
> Yeah, not crashing and not hanging is the expected behavior.
Good to know.
>
> > Well, sometimes it will quit peacefully, and sometimes it will
> > not. ps -eo 'pid,state,wchan,comm' shows that it is sleeping
> > in futex_wait_queue_me().
> >
> > NOTE: this happens both with and without this patch. Thus it may
> > not be an issue introduced by this patch?
>
> hm, that's unexpected - it's expected to loop infinitely.
Really sorry about that. My bad. I modified the code a bit: I removed the
two //'s, so that each thread exits after count > 1000000.
Sorry again :(
--yliu
> I have
> a newer version (attached) - is that exiting too?
>
> Maybe this triggers spuriously:
>
> if (!info->si_addr)
> raise(SIGABRT); /* Allow GDB backtrace */
>
> although then you should see the SIGABRT as an irregular exit
> IIRC.
>
> Thanks,
>
> Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/