Message-ID: <c4e36d110802230240l23322628t3b437e62f5561820@mail.gmail.com>
Date:	Sat, 23 Feb 2008 11:40:42 +0100
From:	"Zdenek Kabelac" <zdenek.kabelac@...il.com>
To:	linux-kernel@...r.kernel.org
Subject: copy_user_generic_string and trace_hardirqs_on_thunk loop in qemu ??

Hi

I'm looking for some help with this problem: while doing some other
work, my test script seems to get stuck in a loop in a really weird place.

Everything happens with the kernel running in qemu.

When I run dmsetup status, occasionally dmsetup starts to take
100% CPU and cannot be killed. This happens only when there is high disk
load generated by some other code; that code has some bugs in its locks
and semaphores. A sketch of what the status loop amounts to is below.
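For reference, a "dmsetup status <dev>" invocation ends up as a
DM_TABLE_STATUS ioctl on /dev/mapper/control, which is the
ctl_ioctl -> copy_params path visible in the traces below (via the
compat path, since dmsetup here is 32-bit). The following is only a
rough, hypothetical C sketch of such a status loop, not the actual
test script; the device name "testdev" and the 16KB buffer size are
placeholders:

/*
 * Hypothetical sketch, not the real test script: roughly what a
 * "dmsetup status <dev>" loop does, i.e. repeated DM_TABLE_STATUS
 * ioctls on /dev/mapper/control.  "testdev" and the 16KB buffer
 * are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dm-ioctl.h>

int main(void)
{
	const size_t size = 16 * 1024;		/* 16KB parameter buffer */
	struct dm_ioctl *dmi = calloc(1, size);
	int fd = open("/dev/mapper/control", O_RDWR);

	if (fd < 0 || !dmi) {
		perror("setup");
		return 1;
	}

	for (;;) {				/* status loop, as in the test script */
		memset(dmi, 0, size);
		dmi->version[0] = DM_VERSION_MAJOR;
		dmi->version[1] = DM_VERSION_MINOR;
		dmi->version[2] = DM_VERSION_PATCHLEVEL;
		dmi->data_size = size;
		dmi->data_start = sizeof(*dmi);
		strncpy(dmi->name, "testdev", sizeof(dmi->name) - 1);

		if (ioctl(fd, DM_TABLE_STATUS, dmi) < 0) {
			perror("DM_TABLE_STATUS");
			break;
		}
	}

	close(fd);
	free(dmi);
	return 0;
}

Run with heavy disk I/O in the background (the buggy code mentioned
above) to reproduce the hang.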

But from the task trace I always get this call trace for the running dmsetup task:

[  140.106102] Call Trace:
[  140.106102]  [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[  140.106102]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[  140.106102]  [<ffffffff8100cdfc>] ? restore_args+0x0/0x30
[  140.106102]  [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[  140.106102]  [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[  140.106102]  [<ffffffff81047e21>] ? __capable+0x11/0x30
[  140.106102]  [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[  140.106102]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[  140.106102]  [<ffffffff8804481d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20

[  141.324472] Call Trace:
[  141.324472]  [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[  141.324472]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[  141.324472]  [<ffffffff880433e0>] ? :dm_mod:table_status+0x0/0x90
[  141.324472]  [<ffffffff8100cdfc>] ? restore_args+0x0/0x30
[  141.324472]  [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[  141.324472]  [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[  141.324472]  [<ffffffff81047e21>] ? __capable+0x11/0x30
[  141.324472]  [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[  141.324472]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[  141.324472]  [<ffffffff8804481d>] ? :dm_mod:dm_compat_ctl_ioctl+0xd/0x20
[  141.324472]  [<ffffffff810f63b2>] ? compat_sys_ioctl+0x182/0x3d0

[  417.993881] Call Trace:
[  417.993881]  [<ffffffff812eda0e>] ? thread_return+0x8a/0x50c
[  417.993881]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a
[  417.993881]  [<ffffffff812f1859>] ? error_exit+0x0/0xb8
[  417.993881]  [<ffffffff8117b677>] ? copy_user_generic_string+0x17/0x40
[  417.993881]  [<ffffffff88044534>] ? :dm_mod:copy_params+0x94/0xf0
[  417.993881]  [<ffffffff81047e21>] ? __capable+0x11/0x30
[  417.993881]  [<ffffffff880446f9>] ? :dm_mod:ctl_ioctl+0x169/0x260
[  417.993881]  [<ffffffff812f0561>] ? trace_hardirqs_on_thunk+0x35/0x3a

When I put a printk before the call to copy_params in dm-ioctl.c to
print the passed parameters (the copy is 16KB) and another printk
after the call finishes, it really is the copy_from_user call that
gets stuck between these two printks.
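In other words, the hang sits in the 16KB copy_from_user() inside
copy_params(). Below is a sketch from memory of the 2.6.2x-era
copy_params() in drivers/md/dm-ioctl.c (the exact code may differ in
detail), with the two debug printks placed directly around that copy
rather than around the copy_params() call itself:

/* Sketch from memory, 2.6.2x drivers/md/dm-ioctl.c; needs
 * <linux/vmalloc.h> and <asm/uaccess.h> in that context. */
static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl **param)
{
	struct dm_ioctl tmp, *dmi;

	/* copy just the fixed-size header first to learn data_size */
	if (copy_from_user(&tmp, user, sizeof(tmp)))
		return -EFAULT;

	if (tmp.data_size < sizeof(tmp))
		return -EINVAL;

	dmi = vmalloc(tmp.data_size);
	if (!dmi)
		return -ENOMEM;

	printk(KERN_DEBUG "dm: copying %u bytes from user\n", tmp.data_size);

	/* this is the copy that never returns while dmsetup spins at 100% CPU */
	if (copy_from_user(dmi, user, tmp.data_size)) {
		vfree(dmi);
		return -EFAULT;
	}

	printk(KERN_DEBUG "dm: copy done\n");

	*param = dmi;
	return 0;
}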

When I run a second dmsetup command in parallel, the first copy
finishes, I can see 'done' printed, and I can start running the
dmsetup status loop again.

The machine is 64-bit and the qemu kernel is 64-bit; the dmsetup
application is 32-bit (but I also checked with a 64-bit build, no difference).

The kernel is compiled with:
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PREEMPT_RCU=y
CONFIG_PREEMPT=y
CONFIG_DEBUG_PREEMPT=y

Does anyone have an explanation for this weird behaviour?
Is the problem in qemu-kvm, kvm, or the Linux kernel?

Zdenek