Message-ID: <alpine.LFD.2.00.0811261711130.3325@localhost.localdomain>
Date: Wed, 26 Nov 2008 17:15:46 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: eranian@...il.com
cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
mingo@...e.hu, x86@...nel.org, andi@...stfloor.org,
sfr@...b.auug.org.au
Subject: Re: [patch 06/24] perfmon: generic x86 definitions (x86)

Stephane,

On Wed, 26 Nov 2008, stephane eranian wrote:
> > There is no harm, when the context is kept around, right ?
> >
>
> Well, there are possibly PMU interrupts. If the monitored thread is
> active on the CPU by the time the tool dies, then it will keep on
> running with monitoring on, until it is context switched out or dies.

If the interrupt detects that the context is dead, then it can disable
the counters and be done with it. And when the thread is switched in
again, it simply does not enable the counters for a dead context.
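
Something like the following, where all the names are invented for
illustration and none of this is actual perfmon code:

enum pfm_ctx_state { PFM_CTX_LOADED, PFM_CTX_ZOMBIE };

struct pfm_context {
        enum pfm_ctx_state      state;
        /* counter configuration, buffers, ... */
};

static void pmu_stop_counters(void);    /* arch hardware disable */
static void pmu_start_counters(void);   /* arch hardware enable */

/* PMU overflow interrupt */
static void pfm_overflow_handler(struct pfm_context *ctx)
{
        if (ctx->state == PFM_CTX_ZOMBIE) {
                /* tool is gone: silence the PMU and be done with it */
                pmu_stop_counters();
                return;
        }
        /* normal overflow processing ... */
}

/* called when the monitored thread is switched in */
static void pfm_ctxswin(struct pfm_context *ctx)
{
        /* dead context: leave the counters disabled */
        if (ctx->state == PFM_CTX_ZOMBIE)
                return;
        pmu_start_counters();
}

Two cheap state checks in paths we run anyway, no extra hotpath work.
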
> With the approach currently implemented, the TIF bit will be set and
> as soon as the thread leaves the kernel for any reason, it will
> execute the cleanup function which will stop monitoring and free the
> context.

Well, this does not guarantee that no PMU interrupts happen before the
thread can process the TIF bit.
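
To spell out the window (again with invented names): between the tool
dying and the monitored thread reaching the exit-to-user path, the
counters are still live and can overflow, which is exactly why the
interrupt handler above has to check for the zombie state anyway:

/* called when the monitoring tool exits without detaching */
static void pfm_release(struct pfm_context *ctx, struct task_struct *task)
{
        ctx->state = PFM_CTX_ZOMBIE;
        set_tsk_thread_flag(task, TIF_PERFMON_WORK);
        /*
         * Window: until @task leaves the kernel and runs
         * pfm_handle_work(), the PMU keeps counting and may raise
         * overflow interrupts.
         */
}

/* run from the exit-to-user path when the TIF bit is set */
void pfm_handle_work(struct pfm_context *ctx)
{
        pmu_stop_counters();
        kfree(ctx);
}

(TIF_PERFMON_WORK and the helpers are placeholders, not the real
patch.)
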
> >> Another possible solution (which is not implemented):
> >> - just leave the context attached and run the thread to completion.
> >>   If another tool wants to attach to the same thread, it will detect
> >>   there is already a context attached and that it is marked ZOMBIE,
> >>   so it will clean it up. This is a lazy cleanup approach.
> >
> > Looks like ctx is a couple of hundred bytes, so just keep it around
> > until thread exit time or until the other tool does the cleanup,
> > possibly by recycling the context.
> >
> That's true except for the caveat described above.

Which is fine.
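
The lazy variant is then a few lines on the attach path (same
disclaimer, all names invented, including the pfm_context pointer in
task_struct):

/* a new tool attaching to @task finds and recycles a dead context */
static struct pfm_context *pfm_attach(struct task_struct *task)
{
        struct pfm_context *ctx = task->pfm_context;

        if (ctx) {
                if (ctx->state != PFM_CTX_ZOMBIE)
                        return ERR_PTR(-EBUSY); /* really monitored */
                /* previous tool died without detaching: recycle */
                pfm_reset_context(ctx);
        } else {
                ctx = pfm_alloc_context();
                task->pfm_context = ctx;
        }
        ctx->state = PFM_CTX_LOADED;
        return ctx;
}
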
Thanks,
tglx