Message-ID: <1386256309.1791.253.camel@misato.fc.hp.com>
Date: Thu, 05 Dec 2013 08:11:49 -0700
From: Toshi Kani <toshi.kani@...com>
To: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
Cc: akpm@...ux-foundation.org, mingo@...nel.org, hpa@...or.com,
tglx@...utronix.de, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH] mm, x86: Skip NUMA_NO_NODE while parsing SLIT
On Thu, 2013-12-05 at 19:25 +0900, Yasuaki Ishimatsu wrote:
> (2013/12/05 6:09), Toshi Kani wrote:
> > When ACPI SLIT table has an I/O locality (i.e. a locality unique
> > to an I/O device), numa_set_distance() emits the warning message
> > below.
> >
> >   NUMA: Warning: node ids are out of bound, from=-1 to=-1 distance=10
> >
> > acpi_numa_slit_init() calls numa_set_distance() with pxm_to_node(),
> > which assumes that all localities have been parsed from SRAT previously.
> > SRAT does not list I/O localities, whereas SLIT lists all localities,
> > including I/Os. Hence, pxm_to_node() returns NUMA_NO_NODE (-1) for
> > an I/O locality. I/O localities are not supported and are ignored
> > today, but emitting such a warning message leads to unnecessary
> > confusion.
>
> In this case, the warning message should not be shown. But if the SLIT
> table is really broken, the message should be shown. Your patch does not
> seem to handle that second case.
In the second case, I assume you are worried about a SLIT table with bad
locality numbers. Since SLIT is a matrix over the number of localities,
that is only possible by making the table bigger than necessary. Such
excessive localities are safe to ignore (as they are ignored today), and
regular users have no reason to be concerned about them. The warning
message in this case might help platform vendors test their firmware, but
they have plenty of other ways to verify their SLIT table.
Thanks,
-Toshi
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/