Message-ID: <CA+55aFwJdn8Kz9UByuRfGNtf9Hkv-=8xB+WRd47uHZU1YMagZw@mail.gmail.com>
Date: Thu, 25 Oct 2012 13:53:05 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Rik van Riel <riel@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 26/31] sched, numa, mm: Add fault driven placement and
migration policy
On Thu, Oct 25, 2012 at 5:16 AM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> + /*
> + * Using runtime rather than walltime has the dual advantage that
> + * we (mostly) drive the selection from busy threads and that the
> + * task needs to have done some actual work before we bother with
> + * NUMA placement.
> + */
That explanation makes sense..
> + now = curr->se.sum_exec_runtime;
> + period = (u64)curr->numa_scan_period * NSEC_PER_MSEC;
> +
> + if (now - curr->node_stamp > period) {
> + curr->node_stamp = now;
> +
> + if (!time_before(jiffies, curr->mm->numa_next_scan)) {
.. but then the whole "numa_next_scan" thing ends up being about
real-time anyway?
So 'numa_scan_period' is in CPU time (msec, converted to nsec at
runtime rather than when setting it), but 'numa_next_scan' is in
wallclock time (jiffies)?
But *both* of them are based on the same 'numa_scan_period' thing that
the user sets in ms.
So numa_scan_period is interpreted as both wallclock *and* as runtime?
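To make the mismatch explicit, the two gates reduce to roughly this
(a sketch, not the patch itself - the rearming of numa_next_scan is my
assumption about what the rest of the series does with the same knob):

	/* gate 1: CPU time the task has actually consumed */
	u64 now    = curr->se.sum_exec_runtime;	/* nsec of runtime */
	u64 period = (u64)curr->numa_scan_period * NSEC_PER_MSEC;

	if (now - curr->node_stamp > period) {
		curr->node_stamp = now;

		/* gate 2: wallclock jiffies, fed by the *same* ms knob */
		if (!time_before(jiffies, curr->mm->numa_next_scan)) {
			/* assumed rearm, done elsewhere in the series: */
			curr->mm->numa_next_scan = jiffies +
				msecs_to_jiffies(curr->numa_scan_period);
			/* ... kick off the scan/placement work ... */
		}
	}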
Maybe this works, but it doesn't really make much sense. And what is
the impact of this on machines that run lots of loads with delays
(whether due to IO or timers)?
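To put numbers on that last question (made-up numbers): with a
numa_scan_period of 100ms, a thread that is only 10% busy because it
sleeps on IO or timers needs ~1s of walltime to accumulate 100ms of
runtime, so by the time gate 1 trips, the 100ms jiffies deadline has
long since expired and gate 2 is effectively always open. A fully busy
thread trips both gates together every 100ms. If I read the types
right, the wallclock deadline then only actually throttles anything
when several threads share the same mm, since numa_next_scan lives in
the mm while node_stamp lives in the task.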
Linus