Message-ID: <CAE40pddfowQWus-6u9HEYawtUCOycK--3xBHb=AGaOj-tiMekg@mail.gmail.com>
Date: Sat, 23 Apr 2016 18:38:25 -0700
From: Brendan Gregg <brendan.d.gregg@...il.com>
To: Jeff Merkey <linux.mdb@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] The Linux Scheduler: a Decade of Wasted Cores Report
On Sat, Apr 23, 2016 at 11:20 AM, Jeff Merkey <linux.mdb@...il.com> wrote:
>
> Interesting read.
>
> http://www.ece.ubc.ca/~sasha/papers/eurosys16-final29.pdf
>
> "... The Linux kernel scheduler has deficiencies that prevent a
> multicore system from making proper use of all cores for heavily
> multithreaded loads, according to a lecture and paper delivered
> earlier this month at the EuroSys '16 conference in London, ..."
>
> Any plans to incorporate these fixes?
While this paper analyzes and proposes fixes for four bugs, it has
been getting a lot of attention for broader claims about Linux being
fundamentally broken:
"As a central part of resource management, the OS thread scheduler
must maintain the following, simple, invariant: make sure that ready
threads are scheduled on available cores. As simple as it may seem, we
found that this invariant is often broken in Linux. Cores may stay
idle for seconds while ready threads are waiting in runqueues."
The paper then states that the problems they found in the Linux
scheduler cause degradations of "13-24% for typical Linux workloads".
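A crude way to watch for that specific symptom (idle cores while
threads wait to run) is to sample /proc/stat and /proc/loadavg. The
following is only a sketch, and a hit is just a hint to dig further
with tracing, since loadavg's runnable count is an instantaneous
sample that includes currently-running threads:

/* idlecheck.c: crude hint detector for "cores idle while threads are
 * runnable". Samples per-CPU idle ticks from /proc/stat and the
 * running/runnable count from /proc/loadavg once per second.
 * Build: gcc -O2 -o idlecheck idlecheck.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_CPUS 1024

/* Fill idle[] with per-CPU idle tick counts; return number of CPUs. */
static int read_idle(unsigned long long idle[])
{
	FILE *f = fopen("/proc/stat", "r");
	char line[512];
	int n = 0;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f) && n < MAX_CPUS) {
		unsigned long long user, nice, sys, idl;

		/* skip the aggregate "cpu " line; match only "cpuN" */
		if (strncmp(line, "cpu", 3) == 0 &&
		    line[3] >= '0' && line[3] <= '9' &&
		    sscanf(line + 3, "%*d %llu %llu %llu %llu",
			   &user, &nice, &sys, &idl) == 4)
			idle[n++] = idl;
	}
	fclose(f);
	return n;
}

/* Return the instantaneous runnable count from /proc/loadavg. */
static int read_runnable(void)
{
	FILE *f = fopen("/proc/loadavg", "r");
	double a, b, c;
	int running = 0, total = 0;

	if (f) {
		if (fscanf(f, "%lf %lf %lf %d/%d",
			   &a, &b, &c, &running, &total) != 5)
			running = 0;
		fclose(f);
	}
	return running;
}

int main(void)
{
	unsigned long long before[MAX_CPUS], after[MAX_CPUS];
	int ncpus = read_idle(before);

	for (;;) {
		sleep(1);
		int runnable = read_runnable();
		int idle_cpus = 0;

		read_idle(after);
		for (int i = 0; i < ncpus; i++)
			if (after[i] > before[i])	/* idled this interval */
				idle_cpus++;
		if (idle_cpus > 0 && runnable > ncpus - idle_cpus)
			printf("hint: %d CPUs idled while ~%d threads were runnable\n",
			       idle_cpus, runnable);
		memcpy(before, after, sizeof(before));
	}
	return 0;
}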
Their proof of concept patches are online[1]. I tested them and saw 0%
improvement on my systems, for some simple workloads[2]. I tested
1- and 2-node NUMA, as that is typical for my employer (Netflix, and
our tens of thousands of Linux instances in the AWS/EC2 cloud), even
though I wasn't expecting any difference on 1 node. I've used
synthetic workloads so far.
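For illustration only (this isn't the exact workload from [2]), the
kind of trivial synthetic test I mean is just oversubscribing the box
with CPU-bound threads and watching whether they spread across cores:

/* spinners.c: spawn N busy-spinning threads (default 2x online CPUs),
 * run for 30 seconds, then exit. A trivial scheduler workload sketch.
 * Build: gcc -O2 -pthread -o spinners spinners.c
 * Usage: ./spinners [nthreads]
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static volatile unsigned long sink;	/* defeat optimizing the loop away */

static void *spin(void *arg)
{
	(void)arg;
	for (;;)
		sink++;
	return NULL;
}

int main(int argc, char *argv[])
{
	int nthreads = (argc > 1) ? atoi(argv[1])
				  : 2 * (int)sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t tid;

	for (int i = 0; i < nthreads; i++)
		pthread_create(&tid, NULL, spin, NULL);
	sleep(30);	/* exiting main kills the spinners */
	return 0;
}

While that runs, watching per-CPU utilization (e.g., mpstat -P ALL 1)
shows whether any core sits idle with threads left waiting.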
I should note that I do check run queue latency, having hit scheduler
bugs in the past (especially on other kernels), and I haven't noticed
the issues they describe on our systems, for various workloads. I've
also written a new tool for this (runqlat, using bcc/BPF[3]) to print
run queue latency as a histogram.
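For systems where BPF isn't available, a crude userspace approximation
is to measure wakeup overshoot: sleep for a fixed interval and report
how late the thread actually ran. That isn't run queue latency proper
(it also includes timer slack), but it's a quick sanity check:

/* wakelat.c: crude wakeup-latency check. Requests 1 ms sleeps and
 * reports how far past the deadline the thread actually ran.
 * Build: gcc -O2 -o wakelat wakelat.c
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t ts_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	const long interval_ns = 1000000;	/* 1 ms */
	const int n = 1000;
	uint64_t sum = 0, max = 0;

	for (int i = 0; i < n; i++) {
		struct timespec req = { 0, interval_ns };
		uint64_t t0 = ts_ns();

		nanosleep(&req, NULL);
		uint64_t over = ts_ns() - t0 - interval_ns;

		sum += over;
		if (over > max)
			max = over;
	}
	printf("wakeup overshoot: avg %.1f us, max %.1f us (%d sleeps)\n",
	       sum / 1000.0 / n, max / 1000.0, n);
	return 0;
}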
The bugs they found seem real, and their analysis is great (although
using visualizations to find and fix scheduler bugs isn't new); it
would be good to see these fixed. However, it would also be useful to
double-check how widespread these issues really are. I suspect many on
this list can test these patches in different environments.
Have we really had a decade of wasted cores, losing 13-24% for typical
Linux workloads? I don't think it's that widespread, but I'm only
testing one environment.
Brendan
[1] https://github.com/jplozi/wastedcores
[2] https://gist.github.com/brendangregg/588b1d29bcb952141d50ccc0e005fcf8
[3] https://github.com/iovisor/bcc/blob/master/tools/runqlat_example.txt