Message-ID: <54D7A3A4.1030609@linux.vnet.ibm.com>
Date:	Sun, 08 Feb 2015 23:27:56 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	Sasha Levin <sasha.levin@...cle.com>
CC:	tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
	peterz@...radead.org, torvalds@...ux-foundation.org,
	konrad.wilk@...cle.com, pbonzini@...hat.com,
	paulmck@...ux.vnet.ibm.com, waiman.long@...com, davej@...hat.com,
	oleg@...hat.com, x86@...nel.org, jeremy@...p.org,
	paul.gortmaker@...driver.com, ak@...ux.intel.com,
	jasowang@...hat.com, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
	xen-devel@...ts.xenproject.org, riel@...hat.com,
	borntraeger@...ibm.com, akpm@...ux-foundation.org,
	a.ryabinin@...sung.com
Subject: Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions

On 02/07/2015 12:27 AM, Sasha Levin wrote:
> On 02/06/2015 09:49 AM, Raghavendra K T wrote:
>> The paravirt spinlock code clears the slowpath flag after doing the
>> unlock. As explained by Linus, it currently does:
>>                  prev = *lock;
>>                  add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>>
>>                  /* add_smp() is a full mb() */
>>
>>                  if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
>>                          __ticket_unlock_slowpath(lock, prev);
>>
>>
>> which is *exactly* the kind of thing you cannot do with spinlocks,
>> because after you've done the "add_smp()" and released the spinlock
>> for the fast-path, you can't access the spinlock any more.  Exactly
>> because a fast-path lock might come in, and release the whole data
>> structure.
>>
>> Linus suggested that we should not do any writes to the lock after
>> unlock(), and that we can instead move the slowpath clearing to the
>> fastpath lock.
>>
>> However, this brings an additional case to handle: the slowpath flag
>> could still be set when somebody does arch_trylock. Handle that too by
>> ignoring the slowpath flag during the lock-availability check.
>>
>> Reported-by: Sasha Levin <sasha.levin@...cle.com>
>> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
>> Signed-off-by: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
>
> With this patch, my VMs lock up quickly after boot with:

I tried to reproduce the hang myself, and there still seems to be a
barrier issue (or some logic I am missing).
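
To recap the idea of the patch before digging in, here is a minimal
userspace model of it (a sketch, not the actual kernel diff; the
ticketlock layout, names, and constants below are assumptions taken
from the quoted code):

  #include <stdint.h>
  #include <stdatomic.h>
  #include <stdbool.h>

  #define TICKET_SLOWPATH_FLAG ((uint16_t)1)
  #define TICKET_LOCK_INC      ((uint16_t)2) /* tickets step by 2; low bit is the flag */

  struct ticketlock {
          _Atomic uint32_t head_tail;        /* low 16 bits: head, high 16 bits: tail */
  };

  /* trylock side: ignore the slowpath flag when checking availability */
  static bool ticket_trylock(struct ticketlock *lock)
  {
          uint32_t old = atomic_load(&lock->head_tail);
          uint16_t head = old & 0xffff;
          uint16_t tail = old >> 16;

          if (head != (tail & ~TICKET_SLOWPATH_FLAG))
                  return false;              /* held; the flag alone must not block us */

          /* take the next ticket; a set flag is left for the new holder to clear */
          uint32_t new = old + ((uint32_t)TICKET_LOCK_INC << 16);
          return atomic_compare_exchange_strong(&lock->head_tail, &old, new);
  }

  /* lock fastpath side: the *holder* clears the flag, so there is never
   * a write to the lock word after unlock; one CAS attempt suffices for
   * this sketch */
  static void ticket_clear_slowpath(struct ticketlock *lock)
  {
          uint32_t old = atomic_load(&lock->head_tail);
          uint32_t flag = (uint32_t)TICKET_SLOWPATH_FLAG << 16;

          if (old & flag)
                  atomic_compare_exchange_strong(&lock->head_tail, &old,
                                                 old & ~flag);
  }

The point is that every write to the lock word happens by (or on behalf
of) a CPU that legitimately owns the lock, which avoids the
use-after-unlock Linus described.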

Looking closely at the backtraces below, the unlock_kick got missed
even though we can see that TICKET_SLOWPATH_FLAG is still set:

/me goes back to look closely

(gdb) bt
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81037c27 in halt () at ./arch/x86/include/asm/paravirt.h:116
#2  kvm_lock_spinning (lock=0xffff88023ffe8240, want=52504) at arch/x86/kernel/kvm.c:786
#3  0xffffffff81037251 in __raw_callee_save_kvm_lock_spinning ()
#4  0xffff88023fc0edb0 in ?? ()
#5  0x0000000000000000 in ?? ()

(gdb) p *(arch_spinlock_t *)0xffff88023ffe8240
$1 = {{head_tail = 3441806612, tickets = {head = 52500, tail = 52517}}}
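
That head_tail value decodes cleanly (a quick userspace sketch; it
assumes the 16-bit ticket layout on little-endian x86 with
TICKET_LOCK_INC == 2, so the low bit of tail is TICKET_SLOWPATH_FLAG):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t head_tail = 3441806612u;   /* from the gdb dump above */
          uint16_t head = head_tail & 0xffff; /* low half on little-endian */
          uint16_t tail = head_tail >> 16;

          /* prints: head=52500 tail=52517 slowpath=1 */
          printf("head=%u tail=%u slowpath=%u\n", head, tail, tail & 1);
          return 0;
  }

So tail = 52516 | TICKET_SLOWPATH_FLAG: the flag really is still set.
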
(gdb) t 2
[Switching to thread 2 (Thread 2)]
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
55	}
(gdb) bt
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81037c27 in halt () at ./arch/x86/include/asm/paravirt.h:116
#2  kvm_lock_spinning (lock=0xffff88023ffe8240, want=52502) at arch/x86/kernel/kvm.c:786
#3  0xffffffff81037251 in __raw_callee_save_kvm_lock_spinning ()
#4  0x0000000000000246 in irq_stack_union ()
#5  0x0000000000080750 in ?? ()
#6  0x0000000000020000 in ?? ()
#7  0x0000000000000004 in irq_stack_union ()
#8  0x000000000000cd16 in nmi_print_seq ()
Cannot access memory at address 0xbfc0
(gdb) t 3
[Switching to thread 3 (Thread 3)]
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
55	}
(gdb) bt
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81037c27 in halt () at ./arch/x86/include/asm/paravirt.h:116
#2  kvm_lock_spinning (lock=0xffff88023ffe8240, want=52512) at arch/x86/kernel/kvm.c:786
#3  0xffffffff81037251 in __raw_callee_save_kvm_lock_spinning ()
#4  0xffff88023fc8edb0 in ?? ()
#5  0x0000000000000000 in ?? ()

[...] //other threads with similar output

(gdb) t 8
[Switching to thread 8 (Thread 8)]
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
55	}
(gdb) bt
#0  native_halt () at ./arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81037c27 in halt () at ./arch/x86/include/asm/paravirt.h:116
#2  kvm_lock_spinning (lock=0xffff88023ffe8240, want=52500) at arch/x86/kernel/kvm.c:786
#3  0xffffffff81037251 in __raw_callee_save_kvm_lock_spinning ()
#4  0xffff88023fdcedb0 in ?? ()
#5  0x0000000000000000 in ?? ()
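
Putting the numbers together (still assuming TICKET_LOCK_INC == 2):

  head = 52500
  tail = 52517 = 52516 | TICKET_SLOWPATH_FLAG
  outstanding tickets: (52516 - 52500) / 2 = 8, i.e. 52500, 52502, ..., 52514

which lines up with the halted threads above (the want values 52500,
52502, 52504, 52512 all fall in that range). Note that thread 8 is
halted with want == head == 52500: the very CPU whose turn it is never
got the unlock kick, even though the slowpath flag is set.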
