Message-ID: <20170608075936.426zbwb63drsemgh@gmail.com>
Date: Thu, 8 Jun 2017 09:59:36 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Nicolas Pitre <nicolas.pitre@...aro.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 0/8] scheduler tinification

Also, let me make it clear at the outset that we do care about RAM footprint
all the time, and I've applied countless data-structure and .text-reducing
patches to the kernel. But there's a cost/benefit analysis to be made, and this
series fails that test in my view, because it increases the complexity of an
already complex code base.

* Nicolas Pitre <nicolas.pitre@...aro.org> wrote:
> Most IOT targets are so small that people are rewriting new operating systems
> from scratch for them. Lots of fragmentation already exists.

Let me offer a speculative, if somewhat cynical, prediction: 90% of those
ghastly IoT hardware hacks won't survive the market. The remaining 10% will be
financially successful despite being ghastly hardware hacks, and will
eventually, in the next iteration or so, get a proper OS.

As users ask for more features, hardware capabilities will increase
dramatically, and home-grown, microcontroller-derived code plus minimal OSes
will be replaced by a 'real' OS, because both developers and users will demand
IPv6 compatibility, Bluetooth connectivity, storage support, or any random
range of features we have in the Linux kernel.

With the stroke of a pen from the CFO ("yes, we can spend more on our next
hardware design!") the problem goes away overnight, and nobody will look back
at the hardware hack that had only 1 MB of RAM.

> [...] We're talking about systems with less than one megabyte of RAM, sometimes
> much less.

Two data points:

Firstly, by the time any Linux kernel change I commit today reaches a typical
distro, at least 0.5-1 years have passed; it takes 2 years for it to get widely
used by hardware shops, and 5 years to get used by enterprises. There's even
more latency in more conservative places.

Secondly, I don't see Moore's Law reversing:
http://nerdfever.com/wp-content/uploads/2015/06/2015-06_Moravec_MIPS.png

If you combine those two time frames, the consequence is this:

Even taking the 1 MB size at face value (which I don't: a networking-enabled
system can probably not function very well with just 1 MB of RAM), the
RAM-starved 1 MB system of today will effectively be a 2 MB system in 2 years.

And yes, I know Moore's law won't go on forever and I'm oversimplifying: maybe
things are slowing down and it will only be 1.5 MB, but the point remains: the
importance of your 20k of .text savings will shrink to the equivalent of 10-15k
of .text savings in just 2 years. In 10 years, today's 1 MB system will be a
32 MB system if that trend holds up.
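
To make the arithmetic explicit, here's a trivial userspace sketch of that
extrapolation; the clean 2-year doubling and the flat 20 KiB savings are
illustrative assumptions, not measurements:

/* Illustrative only: RAM doubles every 2 years while the .text
 * savings stay a flat 20 KiB, so the savings shrink in relative
 * terms from ~2% of RAM today to ~0.06% in year 10. */
#include <stdio.h>

int main(void)
{
	unsigned int ram_kib = 1024;		/* 1 MB system today */
	const unsigned int savings_kib = 20;	/* fixed .text savings */
	int year;

	for (year = 0; year <= 10; year += 2) {
		printf("year %2d: %5u KiB RAM, savings = %.2f%% of RAM\n",
		       year, ram_kib, 100.0 * savings_kib / ram_kib);
		ram_kib *= 2;
	}
	return 0;
}
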
You can already fit a mostly full Linux system into 32 MB just fine, i.e. the
problem has solved itself just by waiting a bit or by increasing the hardware
capabilities a bit.

But the kernel complexity you introduce with this series stays with us! It will
be an additional cost added to many scheduler commits going forward, and an
added cost for all the other use cases.

Also, it's not like 20k of .text savings will magically enable Linux to fit
into 1 MB of RAM - it won't. The smallest still-practical, more or less generic
Linux system in existence today is around 16 MB. You can shrink it further, but
the effort increases exponentially once you go below a natural minimum size.

> [...] Still, those things are being connected to the internet. [...]

So while I believe small size has its value, I think it's far more important to
be able to _trust_ those devices than to squeeze the last kilobyte out of the
kernel. In that sense these qualities:
- reducing complexity,
- reducing actual line count,
- increasing testability,
- increasing reviewability,
- offering behavioral and ABI uniformity
are more important than 1% of RAM on a very, very RAM-starved system which
likely won't use Linux to begin with...

So while the "complexity vs. kernel size" trade-off will obviously always be a
judgement call, for the scheduler it's not really an open question what we need
to do at this stage: we need to reduce complexity and #ifdef variants, not
increase them.
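
To illustrate what #ifdef variants cost, here's a hypothetical, self-contained
example (none of these CONFIG_ options or function names are from the series
itself): two independent options already give four build combinations, each of
which has to be reviewed, built and tested separately:

/* Hypothetical illustration: with N independent CONFIG_* options, a
 * code path can have up to 2^N build variants. */
#include <stdio.h>

/* Imagine these coming from the kernel .config: */
/* #define CONFIG_SCHED_TINY 1 */
/* #define CONFIG_SCHED_SMP  1 */

static const char *sched_variant(void)
{
#if defined(CONFIG_SCHED_TINY) && defined(CONFIG_SCHED_SMP)
	return "tiny, SMP";
#elif defined(CONFIG_SCHED_TINY)
	return "tiny, uniprocessor";
#elif defined(CONFIG_SCHED_SMP)
	return "full, SMP";
#else
	return "full, uniprocessor";
#endif
}

int main(void)
{
	printf("scheduler variant built: %s\n", sched_variant());
	return 0;
}

Every new option multiplies that review and test matrix.
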
Thanks,
Ingo