Message-ID: <50BD2BB9.7010808@redhat.com>
Date: Mon, 03 Dec 2012 17:46:17 -0500
From: Rik van Riel <riel@...hat.com>
To: Ingo Molnar <mingo@...nel.org>
CC: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Turner <pjt@...gle.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Christoph Lameter <cl@...ux.com>, Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH 32/52] sched: Track groups of shared tasks
On 12/02/2012 01:43 PM, Ingo Molnar wrote:
> This is not entirely correct as this task might have scheduled or
> migrate ther - but statistically there will be correlation to the
          ^^^^ there?
> tasks that we share memory with, and correlation is all we need.
>
> We map out the relation itself by filtering out the highest address
> ask that is below our own task address, per working set scan
  ^^^ task?
> iteration.
> @@ -906,23 +945,122 @@ out_backoff:
> }
>
> /*
> + * Track our "memory buddies" the tasks we actively share memory with.
> + *
> + * Firstly we establish the identity of some other task that we are
> + * sharing memory with by looking at rq[page::last_cpu].curr - i.e.
> + * we check the task that is running on that CPU right now.
> + *
> + * This is not entirely correct as this task might have scheduled or
> + * migrate ther - but statistically there will be correlation to the
              ^^^^ there
> + * tasks that we share memory with, and correlation is all we need.
> + *
> + * We map out the relation itself by filtering out the highest address
> + * ask that is below our own task address, per working set scan
      ^^^ task?
If that word is "task", the comment makes sense. If it is
something else, I'm back to square one on what the code does :)
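For reference, here is how I read that lookup. A sketch only; cpu_rq()
and ->curr are the real scheduler names, but the rest is my
pseudo-code, not the actual patch:

        /*
         * Guess at a "memory buddy": whoever is running right now
         * on the CPU that last touched the page. That task may have
         * scheduled away or migrated since the fault, so this is a
         * statistical hint, not an exact identity.
         */
        struct rq *rq = cpu_rq(last_cpu);
        struct task_struct *buddy = rq->curr;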
> void task_numa_fault(int node, int last_cpu, int pages)
> {
>         struct task_struct *p = current;
>         int priv = (task_cpu(p) == last_cpu);
>         int idx = 2*node + priv;
>
>         if (unlikely(!p->numa_faults)) {
> -               int size = sizeof(*p->numa_faults) * 2 * nr_node_ids;
> +               int entries = 2*nr_node_ids;
> +               int size = sizeof(*p->numa_faults) * entries;
>
> -               p->numa_faults = kzalloc(size, GFP_KERNEL);
> +               p->numa_faults = kzalloc(2*size, GFP_KERNEL);
So we multiply nr_node_ids by 2. Twice.
That kind of magic deserves a comment explaining how
and why. How about:
                /*
                 * We track two arrays with private and shared faults
                 * for each NUMA node. The p->numa_faults_curr array
                 * is allocated at the same time as the p->numa_faults
                 * array.
                 */
                int size = sizeof(*p->numa_faults) * 4 * nr_node_ids;
>                 if (!p->numa_faults)
>                         return;
> +               /*
> +                * For efficiency reasons we allocate ->numa_faults[]
> +                * and ->numa_faults_curr[] at once and split the
> +                * buffer we get. They are separate otherwise.
> +                */
> +               p->numa_faults_curr = p->numa_faults + entries;
>         }
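To convince myself the 2x2 layout works out, here is a little
userspace toy (my own approximation, with nr_node_ids hardcoded to 4;
obviously not the patch code) that mimics the single allocation, the
split into ->numa_faults and ->numa_faults_curr, and the idx
arithmetic:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        int nr_node_ids = 4;            /* pretend four NUMA nodes */
        int entries = 2 * nr_node_ids;  /* private+shared slot per node */
        /* one buffer, twice the size: first half is numa_faults,
         * second half is numa_faults_curr */
        unsigned long *numa_faults = calloc(2 * entries, sizeof(*numa_faults));
        unsigned long *numa_faults_curr;
        int node = 1, priv = 1, idx;

        if (!numa_faults)
                return 1;
        numa_faults_curr = numa_faults + entries;

        idx = 2 * node + priv;          /* slot for (node 1, private) */
        numa_faults_curr[idx]++;

        printf("per-array entries: %d, total slots: %d, idx: %d\n",
               entries, 2 * entries, idx);
        free(numa_faults);
        return 0;
}

With four nodes that is 8 entries per array and 16 slots in the
buffer, exactly the 4 * nr_node_ids my suggested comment spells out.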