lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 07 Sep 2009 19:21:44 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Patrick McHardy <kaber@...sh.net>
CC: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: net_sched 00/07: classful multiqueue dummy scheduler

Patrick McHardy wrote:
> Patrick McHardy wrote:
>> Eric Dumazet wrote:
>>> Had very little time to test this, but got problems very fast if a
>>> rate estimator is configured.
>>
>> I didn't test that, but I'll look into it.
>>
>>> qdisc mq 1: root
>>> Sent 528702 bytes 3491 pkt (dropped 0, overlimits 0 requeues 0)
>>> rate 177925Kbit 49pps backlog 0b 0p requeues 0
>>> qdisc pfifo 8001: parent 1:1 limit 1000p
>>> Sent 528702 bytes 3491 pkt (dropped 0, overlimits 0 requeues 0)
>>> rate 25400bit 21pps backlog 0b 0p requeues 0
>>>
>>> <<<crash>>>
>>
>> Did you capture the crash?

No, in fact it was a freeze.

>>> (On another term I had a "ping -i 0.1 192.168.20.120" that gave :
>>>
>>> 2009/08/07 14:53:42.498 64 bytes from 192.168.20.120: icmp_seq=1982 ttl=64 time=0.126 ms
>>> 2009/08/07 14:53:42.598 64 bytes from 192.168.20.120: icmp_seq=1983 ttl=64 time=0.118 ms
>>> 2009/08/07 14:53:42.698 64 bytes from 192.168.20.120: icmp_seq=1984 ttl=64 time=0.114 ms
>>> 2009/08/07 14:53:42.798 64 bytes from 192.168.20.120: icmp_seq=1985 ttl=64 time=0.123 ms
>>> 2009/08/07 14:53:42.898 64 bytes from 192.168.20.120: icmp_seq=1986 ttl=64 time=0.126 ms
>>> 2009/08/07 14:53:42.998 64 bytes from 192.168.20.120: icmp_seq=1987 ttl=64 time=0.119 ms
>>> 2009/08/07 14:53:43.098 64 bytes from 192.168.20.120: icmp_seq=1988 ttl=64 time=0.122 ms
>>> 2009/08/07 14:53:43.198 64 bytes from 192.168.20.120: icmp_seq=1989 ttl=64 time=0.119 ms
>>> 2009/08/07 14:53:43.298 64 bytes from 192.168.20.120: icmp_seq=1990 ttl=64 time=0.117 ms
>>> 2009/08/07 14:53:43.398 64 bytes from 192.168.20.120: icmp_seq=1991 ttl=64 time=0.117 ms
>>> ping: sendmsg: No buffer space available
>>
>> Was this also with rate estimators? "No buffer space available"
>> indicates that some class/qdisc isn't dequeued or the packets
>> are leaking, so the output of tc -s -d qdisc show ... might be
>> helpful.
>
> I figured out the bug, which is likely responsible for both
> problems. When grafting a mq class and creating a rate estimator,
> the new qdisc is not attached to the device queue yet and also
> doesn't have TC_H_ROOT as parent, so qdisc_create() selects
> qdisc_root_sleeping_lock() for the estimator, which belongs to
> the qdisc that is getting replaced.
>
> This is a patch I used for testing, but I'll come up with
> something more elegant (I hope) as a final fix :)

Yes, this was the problem, and your patch fixed it.

Now adding CONFIG_SLUB_DEBUG_ON=y for next tries :)

Sep 7 16:37:55 erd kernel: [ 217.056813] =============================================================================
Sep 7 16:37:55 erd kernel: [ 217.056865] BUG kmalloc-256: Poison overwritten
Sep 7 16:37:55 erd kernel: [ 217.056910] -----------------------------------------------------------------------------
Sep 7 16:37:55 erd kernel: [ 217.056911]
Sep 7 16:37:55 erd kernel: [ 217.056990] INFO: 0xf6e622bc-0xf6e622bd.
First byte 0x76 instead of 0x6b
Sep 7 16:37:55 erd kernel: [ 217.057049] INFO: Allocated in qdisc_alloc+0x1b/0x80 age=154593 cpu=2 pid=5165
Sep 7 16:37:55 erd kernel: [ 217.057094] INFO: Freed in qdisc_destroy+0x88/0xa0 age=139186 cpu=4 pid=5173
Sep 7 16:37:55 erd kernel: [ 217.057139] INFO: Slab 0xc16ddc40 objects=26 used=6 fp=0xf6e62260 flags=0x28040c3
Sep 7 16:37:55 erd kernel: [ 217.057184] INFO: Object 0xf6e62260 @offset=608 fp=0xf6e62850
Sep 7 16:37:55 erd kernel: [ 217.057184]
Sep 7 16:37:55 erd kernel: [ 217.057259] Bytes b4 0xf6e62250: d9 04 00 00 fc 6f fb ff 5a 5a 5a 5a 5a 5a 5a 5a Ù...üoûÿZZZZZZZZ
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62260: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62270: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62280: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62290: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622a0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622b0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 76 76 6b 6b kkkkkkkkkkkkvvkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622c0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622d0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622e0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e622f0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62300: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62310: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62320: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62330: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62340: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Sep 7 16:37:55 erd kernel: [ 217.057771] Object 0xf6e62350: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk¥
Sep 7 16:37:55 erd kernel: [ 217.057771] Redzone 0xf6e62360: bb bb bb bb »»»»
Sep 7 16:37:55 erd kernel: [ 217.057771] Padding 0xf6e62388: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
Sep 7 16:37:55 erd kernel: [ 217.057771] Pid: 5334, comm: bash Not tainted 2.6.31-rc5-04006-gedfbc1d-dirty #188
Sep 7 16:37:55 erd kernel: [ 217.057771] Call Trace:
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02a6d5f>] print_trailer+0xcf/0x120
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02a6e69>] check_bytes_and_report+0xb9/0xe0
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02a7097>] check_object+0x1b7/0x200
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02a89b6>] __slab_alloc+0x3d6/0x5a0
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02a9602>] __kmalloc+0x172/0x180
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02e4c02>] ? load_elf_binary+0x122/0x1550
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02e4c02>] load_elf_binary+0x122/0x1550
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c035655e>] ? strrchr+0xe/0x30
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02e2644>] ? load_misc_binary+0x64/0x420
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c029190f>] ? page_address+0xcf/0xf0
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c0291aac>] ? kmap_high+0x1c/0x1e0
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c029190f>] ? page_address+0xcf/0xf0
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c029194a>] ? kunmap_high+0x1a/0x90
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02b20d7>] search_binary_handler+0xa7/0x240
Sep 7 16:37:55 erd kernel: [ 217.057771]  [<c02b3686>] do_execve+0x2e6/0x3c0
Sep 7 16:37:56 erd kernel: [ 217.057771]  [<c0201638>] sys_execve+0x28/0x60
Sep 7 16:37:56 erd kernel: [ 217.057771]  [<c0202d08>] sysenter_do_call+0x12/0x26
Sep 7 16:37:56 erd kernel: [ 217.057771] FIX kmalloc-256: Restoring 0xf6e622bc-0xf6e622bd=0x6b
Sep 7 16:37:56 erd kernel: [ 217.057771]
Sep 7 16:37:56 erd kernel: [ 217.057771] FIX kmalloc-256: Marking all objects used
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html