Message-ID: <53AC4182.3020504@redhat.com>
Date: Thu, 26 Jun 2014 11:51:30 -0400
From: Rik van Riel <riel@...hat.com>
To: Luiz Capitulino <lcapitulino@...hat.com>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
isimatu.yasuaki@...fujitsu.com, yinghai@...nel.org,
andi@...stfloor.org, akpm@...ux-foundation.org, rientjes@...gle.com
Subject: Re: [PATCH] x86: numa: setup_node_data(): drop dead code and rename
function
On 06/26/2014 11:05 AM, Luiz Capitulino wrote:
> On Thu, 26 Jun 2014 10:51:11 -0400
> Rik van Riel <riel@...hat.com> wrote:
>
>> On 06/19/2014 10:20 PM, Luiz Capitulino wrote:
>>
>>>> @@ -523,8 +508,17 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
>>>>  			end = max(mi->blk[i].end, end);
>>>>  		}
>>>>
>>>> -		if (start < end)
>>>> -			setup_node_data(nid, start, end);
>>>> +		if (start >= end)
>>>> +			continue;
>>>> +
>>>> +		/*
>>>> +		 * Don't confuse VM with a node that doesn't have the
>>>> +		 * minimum amount of memory:
>>>> +		 */
>>>> +		if (end && (end - start) < NODE_MIN_SIZE)
>>>> +			continue;
>>>> +
>>>> +		alloc_node_data(nid);
>>>>  	}
>>
>> Minor nit. If we skip a too-small node, should we remember that we
>> did so, and add its memory to another node, assuming it is physically
>> contiguous memory?
>
> Interesting point. Honest question, please disregard if this doesn't
> make sense: but won't this affect automatic NUMA performance? Because
> the kernel won't know that that extra memory actually pertains to another
> node, and hence that extra memory will have a different distance to the
> node that's making use of it.
If there is so little memory that the kernel is unwilling to turn
it into its own zone or node, it should not be enough to affect
placement policy at all.

Whether or not we use that last little bit of memory is probably
not very important, either :)
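
To make the skip-or-merge idea from the nit above concrete, here is a
minimal user-space sketch of folding an undersized range into a
physically contiguous neighbor. All the names here (struct range,
fold_small_range) are invented for illustration; this is not the
kernel's numa_meminfo code, it only mirrors the contiguity check the
nit describes:

/*
 * Toy, user-space sketch of the "fold a too-small node into a
 * physically contiguous neighbor" idea.  All names are invented
 * for illustration; this is not the kernel's numa_meminfo code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NODE_MIN_SIZE	(4UL << 20)	/* 4 MiB, matching x86 */

struct range {
	unsigned long start, end;	/* physical span: [start, end) */
	int nid;			/* owning NUMA node id */
};

/* If *r is undersized, donate it to a range it physically touches. */
static bool fold_small_range(struct range *r, struct range *ranges, int n)
{
	if (r->end - r->start >= NODE_MIN_SIZE)
		return false;			/* big enough to stand alone */

	for (int i = 0; i < n; i++) {
		struct range *o = &ranges[i];

		if (o == r || o->nid == r->nid)
			continue;
		if (o->end == r->start || r->end == o->start) {
			r->nid = o->nid;	/* contiguous: donate it */
			return true;
		}
	}
	return false;				/* isolated: stays unused */
}

int main(void)
{
	struct range ranges[] = {
		{ 0,          64UL << 20, 0 },	/* 64 MiB on node 0      */
		{ 64UL << 20, 66UL << 20, 1 },	/* 2 MiB node: undersized */
	};

	if (fold_small_range(&ranges[1], ranges, 2))
		printf("folded into node %d\n", ranges[1].nid);
	return 0;
}

An isolated undersized range has no donor and simply stays unused,
which matches the behavior after the patch.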
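
On the distance question: the kernel exports the ACPI SLIT matrix to
user space, and libnuma's numa_distance() reads it back. A small demo
(assuming libnuma is installed; build with cc slit_demo.c -lnuma)
shows the per-node-pair distances that placement decisions would see:

/*
 * Demo (assumes libnuma): print the SLIT distance between every
 * pair of nodes.  10 means local; larger means farther away.
 * Build: cc slit_demo.c -lnuma
 */
#include <stdio.h>
#include <numa.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "NUMA is not available on this system\n");
		return 1;
	}

	int max = numa_max_node();

	for (int from = 0; from <= max; from++)
		for (int to = 0; to <= max; to++)
			printf("node %d -> node %d: distance %d\n",
			       from, to, numa_distance(from, to));
	return 0;
}

For scale, NODE_MIN_SIZE on x86 is 4 MiB (arch/x86/include/asm/numa.h),
which supports the point above that a remnant below that threshold is
too small to matter for placement.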