Message-ID: <20130712175028.GA11036@redhat.com>
Date: Fri, 12 Jul 2013 13:50:28 -0400
From: Dave Jones <davej@...hat.com>
To: David Ahern <dsahern@...il.com>
Cc: Dave Hansen <dave.hansen@...el.com>,
Ingo Molnar <mingo@...nel.org>,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Peter Anvin <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: Yet more softlockups.
On Fri, Jul 12, 2013 at 11:40:06AM -0600, David Ahern wrote:
> On 7/12/13 11:18 AM, Dave Jones wrote:
> > On Fri, Jul 12, 2013 at 11:12:13AM -0600, David Ahern wrote:
> > > On 7/12/13 9:45 AM, Dave Jones wrote:
> > > > Here's a fun trick:
> > > >
> > > > trinity -c perf_event_open -C4 -q -l off
> > > >
>
> In _get_address, case 8 must be happening a lot and I don't see a free
> when that address comes from malloc. Perhaps all of the rand() calls are
> breaking down in the VM. If I change that case from malloc to something
> static - like page_rand - memory stays flat.
Oh right, yeah, that's a known bug. I forgot about the ARG_ADDRESS case.
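For anyone following along, the pattern being described is roughly this. It is
a simplified sketch only: the function name and sizes are made up, and only
page_rand comes from the real code.

	#include <stdlib.h>

	static char page_rand[4096];	/* stand-in for trinity's static scratch page */

	static void *get_random_address(void)
	{
		switch (rand() % 2) {
		case 0:
			return page_rand;	/* static storage, nothing to free */
		case 1:
			return malloc(4096);	/* leaked: the caller never free()s it */
		}
		return NULL;
	}

Every hit on the malloc case leaks an allocation that nothing ever frees,
which is why RSS keeps climbing until the process exits.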
Another hacky workaround is to bound the trinity runs with -N1000000 or something,
and run that in a shell loop so the leaked memory gets freed up on each exit.
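Something like this (untested, just the invocation from above with -N added):

	while :; do trinity -c perf_event_open -C4 -q -l off -N1000000 ; done

Each run exits once it hits the -N bound, so whatever it leaked is handed back
before the loop starts the next one.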
I need to find the time to add proper callbacks to release allocated memory.
Given that you can run trinity long enough to hit this, however, it makes me
think you won't be able to trigger the bug I'm talking about.
Perhaps virtualised perf counters are somehow immune to this problem, because
on bare metal it literally takes seconds.
Dave