Date: Mon, 15 Apr 2024 12:56:52 +0200
From: Mike Galbraith <efault@....de>
To: K Prateek Nayak <kprateek.nayak@....com>, Peter Zijlstra
 <peterz@...radead.org>, mingo@...hat.com, juri.lelli@...hat.com, 
 vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org, 
 bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
 vschneid@...hat.com,  linux-kernel@...r.kernel.org
Cc: wuyun.abel@...edance.com, tglx@...utronix.de
Subject: Re: [RFC][PATCH 08/10] sched/fair: Implement delayed dequeue

On Fri, 2024-04-12 at 16:12 +0530, K Prateek Nayak wrote:
>
> I ran into a few issues when testing the series on top of tip:sched/core
> at commit 4475cd8bfd9b ("sched/balancing: Simplify the sg_status bitmask
> and use separate ->overloaded and ->overutilized flags"). All of these
> splats surfaced when running unixbench with Delayed Dequeue (echoing
> NO_DELAY_DEQUEUE to /sys/kernel/debug/sched/features seems to make the
> system stable when running Unixbench spawn)
>
> Unixbench (https://github.com/kdlucas/byte-unixbench.git) command:
>
>         ./Run spawn -c 512

That plus a hackbench loop works a treat.

>
> Splats appear soon into the run. Following are the splats and their
> corresponding code blocks from my 3rd Generation EPYC system
> (2 x 64C/128T):

Seems a big box is not required. With a low-fat sched config (no group
sched), starting ./Run spawn -c 16 (cpus*2) along with a hackbench loop
reliably blows my old i7-4790 box out of the water nearly instantly.
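
For reference, the "hackbench loop" is nothing more elaborate than roughly
the below, running alongside the spawn test (hackbench defaults; treat the
exact invocation as illustrative, not gospel):

    while :; do hackbench; done &
    ./Run spawn -c 16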

    DUMPFILE: vmcore
        CPUS: 8
        DATE: Mon Apr 15 07:20:29 CEST 2024
      UPTIME: 00:07:23
LOAD AVERAGE: 1632.20, 684.99, 291.84
       TASKS: 1401
    NODENAME: homer
     RELEASE: 6.9.0.g0bbac3f-master
     VERSION: #7 SMP Mon Apr 15 06:40:05 CEST 2024
     MACHINE: x86_64  (3591 Mhz)
      MEMORY: 16 GB
       PANIC: "Oops: 0000 [#1] SMP NOPTI" (check log for details)
         PID: 22664
     COMMAND: "hackbench"
        TASK: ffff888100acbf00  [THREAD_INFO: ffff888100acbf00]
         CPU: 5
       STATE: TASK_WAKING (PANIC)

crash> bt -sx
PID: 22664    TASK: ffff888100acbf00  CPU: 5    COMMAND: "hackbench"
 #0 [ffff88817cc17920] machine_kexec+0x156 at ffffffff810642d6
 #1 [ffff88817cc17970] __crash_kexec+0xd7 at ffffffff81153147
 #2 [ffff88817cc17a28] crash_kexec+0x23 at ffffffff811535f3
 #3 [ffff88817cc17a38] oops_end+0xbe at ffffffff810329be
 #4 [ffff88817cc17a58] page_fault_oops+0x81 at ffffffff81071951
 #5 [ffff88817cc17ab8] exc_page_fault+0x62 at ffffffff8194f6f2
 #6 [ffff88817cc17ae0] asm_exc_page_fault+0x22 at ffffffff81a00ba2
    [exception RIP: pick_task_fair+71]
    RIP: ffffffff810d5b57  RSP: ffff88817cc17b90  RFLAGS: 00010046
    RAX: 0000000000000000  RBX: ffff88840ed70ec0  RCX: 00000001d7ec138c
    RDX: ffffffffe7a7f400  RSI: 0000000000000000  RDI: 0000000000000000
    RBP: ffff88840ed70ec0   R8: 0000000000000c00   R9: 000000675402f79e
    R10: ffff88817cc17b30  R11: 00000000000000bb  R12: ffff88840ed70f40
    R13: ffffffff81f64f16  R14: ffff888100acc560  R15: ffff888100acbf00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #7 [ffff88817cc17bb0] pick_next_task_fair+0x42 at ffffffff810d92c2
 #8 [ffff88817cc17be0] __schedule+0x10d at ffffffff8195936d
 #9 [ffff88817cc17c50] schedule+0x1c at ffffffff81959ddc
#10 [ffff88817cc17c60] schedule_timeout+0x18c at ffffffff8195fc4c
#11 [ffff88817cc17cc8] unix_stream_read_generic+0x2b7 at ffffffff81869917
#12 [ffff88817cc17da8] unix_stream_recvmsg+0x68 at ffffffff8186a2d8
#13 [ffff88817cc17de0] sock_read_iter+0x159 at ffffffff8170bd69
#14 [ffff88817cc17e70] vfs_read+0x2ce at ffffffff812f195e
#15 [ffff88817cc17ef8] ksys_read+0x40 at ffffffff812f21d0
#16 [ffff88817cc17f30] do_syscall_64+0x57 at ffffffff8194b947
#17 [ffff88817cc17f50] entry_SYSCALL_64_after_hwframe+0x76 at ffffffff81a0012b
    RIP: 00007f625660871e  RSP: 00007ffc75d48188  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007ffc75d48200  RCX: 00007f625660871e
    RDX: 0000000000000064  RSI: 00007ffc75d48190  RDI: 0000000000000007
    RBP: 00007ffc75d48260   R8: 00007ffc75d48140   R9: 00007f6256612010
    R10: 00007f62565f5070  R11: 0000000000000246  R12: 0000000000000064
    R13: 0000000000000000  R14: 0000000000000064  R15: 0000000000000000
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b
crash> dis pick_task_fair+71
0xffffffff810d5b57 <pick_task_fair+71>:	cmpb   $0x0,0x4c(%rax)
crash> gdb list *pick_task_fair+71
0xffffffff810d5b57 is in pick_task_fair (kernel/sched/fair.c:5498).
5493			SCHED_WARN_ON(cfs_rq->next->sched_delayed);
5494			return cfs_rq->next;
5495		}
5496
5497		struct sched_entity *se = pick_eevdf(cfs_rq);
5498		if (se->sched_delayed) {
5499			dequeue_entities(rq, se, DEQUEUE_SLEEP | DEQUEUE_DELAYED);
5500			SCHED_WARN_ON(se->sched_delayed);
5501			SCHED_WARN_ON(se->on_rq);
5502			if (sched_feat(DELAY_ZERO) && se->vlag > 0)
crash> struct -ox sched_entity
struct sched_entity {
    [0x0] struct load_weight load;
   [0x10] struct rb_node run_node;
   [0x28] u64 deadline;
   [0x30] u64 min_vruntime;
   [0x38] struct list_head group_node;
   [0x48] unsigned int on_rq;
   [0x4c] unsigned char sched_delayed;
   [0x4d] unsigned char custom_slice;
   [0x50] u64 exec_start;
   [0x58] u64 sum_exec_runtime;
   [0x60] u64 prev_sum_exec_runtime;
   [0x68] u64 vruntime;
   [0x70] s64 vlag;
   [0x78] u64 slice;
   [0x80] u64 nr_migrations;
   [0xc0] struct sched_avg avg;
}
SIZE: 0x100
crash>
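
FWIW, decoding the above (my reading of the dump): RAX is the se handed back
by pick_eevdf() and it is 0, while offset 0x4c is ->sched_delayed per the
struct layout, so fair.c:5498 is poking at a NULL se:

    struct sched_entity *se = pick_eevdf(cfs_rq);  /* RAX == 0, i.e. se == NULL */
    if (se->sched_delayed) {                       /* cmpb $0x0,0x4c(%rax) => oops */
            ...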

