Message-ID: <20121011171519.GQ1818@redhat.com>
Date: Thu, 11 Oct 2012 19:15:20 +0200
From: Andrea Arcangeli <aarcange@...hat.com>
To: Mel Gorman <mel@....ul.ie>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <pzijlstr@...hat.com>,
Ingo Molnar <mingo@...e.hu>, Hugh Dickins <hughd@...gle.com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Hillf Danton <dhillf@...il.com>,
Andrew Jones <drjones@...hat.com>,
Dan Smith <danms@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Paul Turner <pjt@...gle.com>, Christoph Lameter <cl@...ux.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 07/33] autonuma: mm_autonuma and task_autonuma data
structures
On Thu, Oct 11, 2012 at 01:28:27PM +0100, Mel Gorman wrote:
> s/togehter/together/
Fixed.
>
> > + * knumad_scan structure.
> > + */
> > +struct mm_autonuma {
>
> Nit but this is very similar in principle to mm_slot for transparent
> huge pages. It might be worth renaming both to mm_thp_slot and
> mm_autonuma_slot to set the expectation they are very similar in nature.
> Could potentially be made generic but probably overkill.
Agreed. A plain rename to mm_autonuma_slot has the only downside of
making some code spill over 80 columns ;).
> > + /* link for knuma_scand's list of mm structures to scan */
> > + struct list_head mm_node;
> > + /* Pointer to associated mm structure */
> > + struct mm_struct *mm;
> > +
> > + /*
> > + * Zeroed from here during allocation, check
> > + * mm_autonuma_reset() if you alter the below.
> > + */
> > +
> > + /*
> > +	 * Pass counter for this mm. This exists only to be able to
> > + * tell when it's time to apply the exponential backoff on the
> > + * task_autonuma statistics.
> > + */
> > + unsigned long mm_numa_fault_pass;
> > + /* Total number of pages that will trigger NUMA faults for this mm */
> > + unsigned long mm_numa_fault_tot;
> > + /* Number of pages that will trigger NUMA faults for each [nid] */
> > + unsigned long mm_numa_fault[0];
> > + /* do not add more variables here, the above array size is dynamic */
> > +};
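The pass-driven exponential backoff that the comments above describe can be
sketched in userspace C. This is purely illustrative — the struct and helper
names mirror the patch, but the decay logic (halving on each new pass) and the
fixed node count are my assumptions, not the kernel code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical userspace sketch, not the patch's code. */
#define NR_NODES 4

struct mm_autonuma_sketch {
	unsigned long mm_numa_fault_pass;      /* last pass we decayed at */
	unsigned long mm_numa_fault_tot;       /* sum over all nodes */
	unsigned long mm_numa_fault[NR_NODES]; /* per-node fault counts */
};

/*
 * On the first fault of a new scan pass, halve all statistics so old
 * NUMA placement information decays exponentially instead of
 * accumulating forever.
 */
static void maybe_decay(struct mm_autonuma_sketch *ma, unsigned long cur_pass)
{
	int nid;

	if (ma->mm_numa_fault_pass == cur_pass)
		return;
	ma->mm_numa_fault_pass = cur_pass;
	ma->mm_numa_fault_tot = 0;
	for (nid = 0; nid < NR_NODES; nid++) {
		ma->mm_numa_fault[nid] >>= 1;
		ma->mm_numa_fault_tot += ma->mm_numa_fault[nid];
	}
}

static void record_fault(struct mm_autonuma_sketch *ma,
			 unsigned long cur_pass, int nid)
{
	maybe_decay(ma, cur_pass);
	ma->mm_numa_fault[nid]++;
	ma->mm_numa_fault_tot++;
}
```

With this scheme a node that stops taking faults sees its weight halve on
every pass, so recent migration activity dominates the statistics.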
>
> How cache hot is this structure? nodes are sharing counters in the same
> cache lines so if updates are frequent this will bounce like a mad yoke.
> Profiles will tell for sure but it's possible that some sort of per-cpu
> hilarity will be necessary here in the future.
On autonuma27 this is only written by knuma_scand, so there is no risk
of it bouncing.
On autonuma28, however, it's updated locklessly by the NUMA hinting
page fault, so your concern is very real and the cacheline bounces
will materialize. It will also cause more interconnect traffic before
the workload converges. I thought about that, but I wanted
mm_autonuma updated in real time as migration happens; otherwise
convergence is slower, because we have to wait until the next pass to
bring the mm_autonuma statistics in sync with the migration
activity. Converging more slowly looked worse than paying for more
cacheline bounces.
It's a tradeoff. If it turns out not to be a good one, we can go back
to the autonuma27 stat-gathering method and converge more slowly, but
without any cacheline bouncing in the NUMA hinting page faults. At
least the current update is lockless.
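For reference, the kind of layout change that would sidestep the bouncing (at
the cost of memory) can be sketched in userspace C. The 64-byte line size and
the pad-per-node layout are my assumptions for illustration — the patch keeps
the counters packed back to back:

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Hypothetical sketch of the concern above: packed per-node counters
 * share cachelines, so lockless increments from faults on different
 * nodes ping-pong the same line.  Padding each counter to a (here
 * assumed 64-byte) line keeps each node's updates local.
 */
#define CACHELINE 64
#define NR_NODES 4

struct padded_counter {
	atomic_ulong count;
	char pad[CACHELINE - sizeof(atomic_ulong)];
};

static struct padded_counter numa_fault[NR_NODES];

/* Lockless update from a (simulated) NUMA hinting fault on @nid. */
static void numa_fault_inc(int nid)
{
	atomic_fetch_add_explicit(&numa_fault[nid].count, 1,
				  memory_order_relaxed);
}
```

In-kernel the equivalent trick would be per-CPU or cacheline-aligned
counters; the downside is that the array grows from one word per node to one
cacheline per node, per mm.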
> > + unsigned long task_numa_fault_pass;
> > + /* Total number of eligible pages that triggered NUMA faults */
> > + unsigned long task_numa_fault_tot;
> > + /* Number of pages that triggered NUMA faults for each [nid] */
> > + unsigned long task_numa_fault[0];
> > + /* do not add more variables here, the above array size is dynamic */
> > +};
> > +
>
> Same question about cache hotness.
Here it's per-thread, so there is no risk of accesses being
interleaved by different CPUs.
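The per-thread property can be modeled in userspace with thread-local
storage. This is an illustration of why no synchronization is needed, not the
patch's implementation (task_autonuma hangs off the task struct, not TLS):

```c
#include <assert.h>
#include <pthread.h>

/*
 * Hypothetical sketch: task_autonuma is per-task, so each thread only
 * ever touches its own counters and no cross-CPU sharing can occur.
 * _Thread_local storage models the same property in userspace.
 */
#define NR_NODES 4

static _Thread_local unsigned long task_numa_fault[NR_NODES];
static _Thread_local unsigned long task_numa_fault_tot;

static void task_fault(int nid)
{
	/*
	 * Plain, unsynchronized increments are safe: no other thread
	 * can reach this thread's copy of the counters.
	 */
	task_numa_fault[nid]++;
	task_numa_fault_tot++;
}

static void *worker(void *arg)
{
	for (int i = 0; i < 1000; i++)
		task_fault((int)(long)arg % NR_NODES);
	/* Each worker sees exactly its own 1000 faults. */
	return (void *)task_numa_fault_tot;
}
```

Because every thread owns its counters outright, there is no cacheline to
bounce regardless of how hot the fault path is.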
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/