Message-ID: <20100119085811.GA5145@nowhere>
Date:	Tue, 19 Jan 2010 09:58:15 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Xiao Guangrong <xiaoguangrong@...fujitsu.com>
Cc:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Paul Mackerras <paulus@...ba.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/3] perf_event: fix race in
	perf_swevent_get_recursion_context()

On Tue, Jan 19, 2010 at 09:19:43AM +0800, Xiao Guangrong wrote:
> 
> 
> Frederic Weisbecker wrote:
> > On Mon, Jan 18, 2010 at 09:42:34PM +0800, Xiao Guangrong wrote:
> >> It only disables preemption in perf_swevent_get_recursion_context();
> >> that can't avoid the race between hard-irq and NMI.
> >>
> >> In this patch, we use an atomic operation to avoid the race and to
> >> reduce the size of cpu_ctx->recursion; it also means the patch no
> >> longer needs to disable preemption.
> >>
> >> Signed-off-by: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
> > 
> > 
> > 
> > I don't understand what is racy in what we have currently.
> > 
> 
> It's because hard-irqs (an interrupt handler can itself be interrupted
> while interrupts are enabled) and NMIs can nest, for example:
> 
> int perf_swevent_get_recursion_context(void)
> {
> 	......
> 	if (cpuctx->recursion[rctx]) {
> 		put_cpu_var(perf_cpu_context);
> 		return -1;
> 	}
> 
> 	/*
> 	 * Another interrupt handler or NMI re-entering here would make
> 	 * the recursion value chaotic
> 	 */
> 	cpuctx->recursion[rctx]++;
> 	......

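A minimal sketch of the atomic-bit scheme described above might look like
the following; the per-cpu variable name and the exact context
classification are assumptions for illustration, not the actual patch:

#include <linux/percpu.h>
#include <linux/bitops.h>
#include <linux/hardirq.h>

static DEFINE_PER_CPU(unsigned long, perf_recursion_bits);

int perf_swevent_get_recursion_context(void)
{
        int rctx;

        /* Classify the context (nmi/irq/softirq/task). */
        if (in_nmi())
                rctx = 3;
        else if (in_irq())
                rctx = 2;
        else if (in_softirq())
                rctx = 1;
        else
                rctx = 0;

        /*
         * test_and_set_bit() is atomic: a nested hardirq/NMI that
         * re-enters here either finds the bit already set and bails
         * out, or sets it itself.  There is no window between the
         * check and the update.
         */
        if (test_and_set_bit(rctx, &__get_cpu_var(perf_recursion_bits)))
                return -1;

        return rctx;
}

void perf_swevent_put_recursion_context(int rctx)
{
        clear_bit(rctx, &__get_cpu_var(perf_recursion_bits));
}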



I still don't understand the problem.

It's not a fight between different cpus; it's a local, per-cpu
fight.

NMIs can't nest inside other NMIs, but a hardirq can nest inside
another hardirq; we don't care much about those cases here, though.
So let's imagine the following sequence, a fight between nested
hardirqs:

cpuctx->recursion[rctx] is initially 0

Interrupt (level 0):

        if (cpuctx->recursion[rctx]) {
                put_cpu_var(perf_cpu_context);
                return -1;
        }

        Interrupt (level 1):

                cpuctx->recursion[rctx]++; // = 1

                ...
                do something
                ...

                cpuctx->recursion[rctx]--; // = 0

        End Interrupt (level 1)

        cpuctx->recursion[rctx]++; // = 1

        ...
        do something
        ...

        cpuctx->recursion[rctx]--; // = 0

End interrupt (level 0)

Another sequence could be: interrupt level 0 has already incremented
recursion and we are interrupted by irq level 1, which then won't be
able to get the recursion context. But that's not a big deal, I think.
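
To make that window concrete, here is the check-then-increment region
quoted above, with comments (mine) marking where a nested hardirq can
land:

        if (cpuctx->recursion[rctx]) {
                put_cpu_var(perf_cpu_context);
                return -1;
        }
        /*
         * A nested hardirq entering between the check above and the
         * increment below also reads 0, takes the context, and restores
         * the counter to 0 before returning, so the outer level is fine.
         * If it instead arrives after the increment, it reads 1 and
         * returns -1: its event is simply dropped ("not a big deal").
         */
        cpuctx->recursion[rctx]++;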


> > This looks broken. We don't call back perf_swevent_put_recursion_context
> > in the fail case, so the bit won't ever be cleared once we recurse.
> > 
> 
> Um, I think we can't clear the bit in this fail case; consider the
> sequence below:
> 
>  path A:                                path B:
> 
>  atomic set bit
>     |                          try to set bit, find it already set
>     V                                           |
>  handle SW event                                V
>     |                               exit and do not clear the bit
>     V
>  atomic clear bit
> 
> After A and B, the bit is zero again
> 
> Right? :-)


Ah indeed, it will be cleared by the interrupted path.
I still don't understand what this patch brings us though.
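
For completeness, the usual caller discipline around these helpers is
roughly the sketch below (the event handling itself is elided): only a
successful get is paired with a put, which is why the failure path can
leave the bit for the interrupted path to clear.

        int rctx;

        rctx = perf_swevent_get_recursion_context();
        if (rctx < 0)
                return;         /* recursion detected: bail out, clear nothing */

        /* ... handle the software event ... */

        perf_swevent_put_recursion_context(rctx);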

