Message-ID: <563B7C2D.90008@surriel.com>
Date: Thu, 5 Nov 2015 10:56:29 -0500
From: Rik van Riel <riel@...riel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, mgorman@...e.de,
jstancek@...hat.com
Subject: Re: [PATCH] sched,numa cap pte scanning overhead to 3% of run time
On 11/05/2015 10:34 AM, Peter Zijlstra wrote:
> On Wed, Nov 04, 2015 at 01:25:15PM -0500, Rik van Riel wrote:
>> +++ b/kernel/sched/fair.c
>> @@ -2155,6 +2155,7 @@ void task_numa_work(struct callback_head *work)
>> unsigned long migrate, next_scan, now = jiffies;
>> struct task_struct *p = current;
>> struct mm_struct *mm = p->mm;
>> + u64 runtime = p->se.sum_exec_runtime;
>> struct vm_area_struct *vma;
>> unsigned long start, end;
>> unsigned long nr_pte_updates = 0;
>> @@ -2277,6 +2278,20 @@ void task_numa_work(struct callback_head *work)
>> else
>> reset_ptenuma_scan(p);
>> up_read(&mm->mmap_sem);
>> +
>> + /*
>> + * There is a fundamental mismatch between the runtime based
>> + * NUMA scanning at the task level, and the wall clock time
>> + * NUMA scanning at the mm level. On a severely overloaded
>> + * system, with very large processes, this mismatch can cause
>> + * the system to spend all of its time in change_prot_numa().
>> + * Limit NUMA PTE scanning to 3% of the task's run time, if
>> + * we spent so much time scanning we got rescheduled.
>> + */
>> + if (unlikely(p->se.sum_exec_runtime != runtime)) {
>> + u64 diff = p->se.sum_exec_runtime - runtime;
>> + p->node_stamp += 32 * diff;
>> + }
>
> I don't actually see how this does what it says it does
If we got rescheduled between the assignment of runtime
above and this point, the scheduler will have updated
the p->se.sum_exec_runtime statistic, since update_curr
is called from both dequeue_entity and enqueue_entity
in fair.c.
Advancing node_stamp by 32x the amount of time the
task consumed between entering task_numa_work and this
point ensures task_numa_work does not get queued again
until the task has spent 32x as much time doing
something else. That caps the CPU time spent in
task_numa_work at roughly 1/33, or 3%, of the task's
run time.
What am I missing?
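
For what it's worth, here is a minimal user-space sketch
of that arithmetic (toy struct and function names, not
the kernel code):

#include <stdio.h>
#include <stdint.h>

struct task_sim {
	uint64_t sum_exec_runtime;	/* total run time, in ns */
	uint64_t node_stamp;		/* runtime stamp of next scan */
};

static void numa_scan_sim(struct task_sim *p, uint64_t scan_cost)
{
	uint64_t runtime = p->sum_exec_runtime;

	/* the PTE scan itself would run here; charge its cost */
	p->sum_exec_runtime += scan_cost;

	/* defer the next scan by 32x the time just spent scanning */
	if (p->sum_exec_runtime != runtime) {
		uint64_t diff = p->sum_exec_runtime - runtime;
		p->node_stamp += 32 * diff;
	}
}

int main(void)
{
	struct task_sim t = { 0, 0 };

	numa_scan_sim(&t, 1000000);	/* scan that ate 1ms of runtime */
	/* 1ms of scanning defers the next scan by 32ms: ~3% overhead */
	printf("node_stamp advanced by %llu ms\n",
	       (unsigned long long)t.node_stamp / 1000000);
	return 0;
}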
>> @@ -2302,7 +2317,7 @@ void task_tick_numa(struct rq *rq, struct task_struct *curr)
>> now = curr->se.sum_exec_runtime;
>> period = (u64)curr->numa_scan_period * NSEC_PER_MSEC;
>>
>> - if (now - curr->node_stamp > period) {
>> + if (now > curr->node_stamp + period) {
>> if (!curr->node_stamp)
>> curr->numa_scan_period = task_scan_min(curr);
>> curr->node_stamp += period;
>
> And this really should be an independent patch. Although the fix I had
> in mind looked like:
>
> if ((s64)(now - curr->node_stamp) > period)
>
> But I suppose this works too.
I can resend this as a separate patch if you prefer.
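
In case it helps, a toy user-space comparison of the
three forms, with invented values for the case where
node_stamp has run ahead of now:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t now = 1000;		/* toy values, in ns */
	uint64_t node_stamp = 5000;	/* already pushed past now */
	uint64_t period = 100;

	/* old form: u64 subtraction underflows, spuriously true */
	printf("now - stamp > period:        %d\n",
	       now - node_stamp > period);

	/* this patch: no subtraction, correctly false */
	printf("now > stamp + period:        %d\n",
	       now > node_stamp + period);

	/* your variant: wrapped diff goes negative, correctly false */
	printf("(s64)(now - stamp) > period: %d\n",
	       (int64_t)(now - node_stamp) > (int64_t)period);
	return 0;
}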