Message-ID: <20250411105134.1f316982@fangorn>
Date: Fri, 11 Apr 2025 10:51:34 -0400
From: Rik van Riel <riel@...riel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Pat Cody <pat@...cody.io>, mingo@...hat.com, juri.lelli@...hat.com,
 vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org,
 bsegall@...gle.com, mgorman@...e.de, vschneid@...hat.com,
 linux-kernel@...r.kernel.org, patcody@...a.com, kernel-team@...a.com,
 stable@...r.kernel.org, Breno Leitao <leitao@...ian.org>
Subject: Re: [PATCH] sched/fair: Add null pointer check to
 pick_next_entity()

On Wed, 9 Apr 2025 17:27:03 +0200
Peter Zijlstra <peterz@...radead.org> wrote:
> On Wed, Apr 09, 2025 at 10:29:43AM -0400, Rik van Riel wrote:
> > Our troublesome workload still makes the scheduler
> > crash with this patch.
> > 
> > I'll go put the debugging patch on our kernel.
> > 
> > Should I try to get debugging data with this patch
> > part of the mix, or with the debugging patch just
> > on top of what's in 6.13 already?  
> 
> Whatever is more convenient I suppose.
> 
> If you can dump the full tree that would be useful. Typically the
> se::{vruntime,weight} and cfs_rq::{zero_vruntime,avg_vruntime,avg_load}
> such that we can do full manual validation of the numbers.

Here is a dump of the scheduler tree of the crashing CPU.

Unfortunately the CPU crashed in pick_next_entity, and not in your
debugging code. I'll add two more calls to avg_vruntime_validate(),
one from avg_vruntime_update(), and one from __update_min_vruntime()
when we skip the call to avg_vruntime_update(). The line numbers in
the backtrace could be a clue.

I have edited the cgroup names to make things more readable, but everything
else is untouched.

One thing that stands out to me is that the vruntime of each of the
entities on the CPU's cfs_rq is a really large number, which is
negative when interpreted as a signed 64-bit value.

vruntime = 18429030910682621789 equals 0xffc111f8d9ee675d

I do not know how those se->vruntime numbers got to that point,
but they look like a likely cause of the overflow.
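For reference, a quick sketch (plain Python, not kernel code) of reinterpreting that u64 vruntime as a signed 64-bit value, which is effectively what the signed key arithmetic in the scheduler does:

```python
def as_s64(v: int) -> int:
    """Reinterpret an unsigned 64-bit value as signed (two's complement)."""
    return v - 2**64 if v >= 2**63 else v

v = 18429030910682621789
print(hex(v))      # 0xffc111f8d9ee675d, matching the dump
print(as_s64(v))   # -17713163026929827, i.e. about -17.7 ms * 2^10 short of wrapping
```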

I'll go comb through the se->vruntime updating code to see how those
large numbers could end up as the vruntime for these sched entities.
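As a rough sketch of the manual validation Peter asked for, here is the weighted-key sum for the top-level cfs_rq, assuming avg_vruntime is maintained as sum(weight_i * (vruntime_i - min_vruntime)) in wrapping signed 64-bit arithmetic. Note the /A entity's vruntime was not printed in the dump below, so this can only account for the /B and /C contributions, not reproduce the full avg_vruntime value:

```python
def s64(v: int) -> int:
    """Two's complement reinterpretation of a (possibly wrapped) 64-bit value."""
    v &= (1 << 64) - 1
    return v - (1 << 64) if v >= (1 << 63) else v

min_vruntime = 107772371139014
# (cgroup, vruntime, scaled weight) from the dump; /A's vruntime is missing.
entities = [
    ("/B", 18445226958208703357, 311),
    ("/C", 18445539757376619550, 466),
]

partial = sum(w * s64(v - min_vruntime) for _, v, w in entities)
print(partial)  # -1116773464285165183
# The dump shows avg_vruntime = -1277161882867784752; if this accounting
# assumption holds, the gap would be /A's (unprinted) contribution.
```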


nr_running = 3
min_vruntime = 107772371139014
avg_vruntime = -1277161882867784752
avg_load = 786
tasks_timeline = [
  {
    cgroup /A
    weight = 10230 => 9
    rq = {
      nr_running = 0
      min_vruntime = 458975898004
      avg_vruntime = 0
      avg_load = 0
      tasks_timeline = [
      ]
    }
  },
  {
    cgroup /B
    vruntime = 18445226958208703357
    weight = 319394 => 311
    rq = {
      nr_running = 2
      min_vruntime = 27468255210769
      avg_vruntime = 0
      avg_load = 93
      tasks_timeline = [
        {
          cgroup /B/a
          vruntime = 27468255210769
          weight = 51569 => 50
          rq = {
            nr_running = 1
            min_vruntime = 820449693961
            avg_vruntime = 0
            avg_load = 15
            tasks_timeline = [
              {
                task = 3653382 (fc0)
                vruntime = 820449693961
                weight = 15360 => 15
              },
            ]
          }
        },
        {
          cgroup /B/b
          vruntime = 27468255210769
          weight = 44057 => 43
          rq = {
            nr_running = 1
            min_vruntime = 563178567930
            avg_vruntime = 0
            avg_load = 15
            tasks_timeline = [
              {
                task = 3706454 (fc0)
                vruntime = 563178567930
                weight = 15360 => 15
              },
            ]
          }
        },
      ]
    }
  },
  {
    cgroup /C
    vruntime = 18445539757376619550
    weight = 477855 => 466
    rq = {
      nr_running = 0
      min_vruntime = 17163581720739
      avg_vruntime = 0
      avg_load = 0
      tasks_timeline = [
      ]
    }
  },
]

