Message-ID: <20080531075242.GC108600@sgi.com>
Date: Sat, 31 May 2008 00:52:42 -0700
From: Jeremy Higdon <jeremy@....com>
To: Jes Sorensen <jes@....com>
Cc: Roland Dreier <rdreier@...co.com>, benh@...nel.crashing.org,
Arjan van de Ven <arjan@...radead.org>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
tpiepho@...escale.com, linuxppc-dev@...abs.org,
scottwood@...escale.com, torvalds@...ux-foundation.org,
David Miller <davem@...emloft.net>, alan@...rguk.ukuu.org.uk
Subject: Re: MMIO and gcc re-ordering issue
On Thu, May 29, 2008 at 10:47:18AM -0400, Jes Sorensen wrote:
> That's not going to solve the problem on Altix. On Altix the issue is
> that there can be multiple paths through the NUMA fabric from cpuX to
> PCI bridge Y.
>
> Consider this uber-cool<tm> ASCII art - NR is my abbreviation for NUMA
> router:
>
> -------          -------
> |cpu X|          |cpu Y|
> -------          -------
>    |  \____ ____/   |
>    |       \/       |
>    |   ____/\____   |
>    |  /          \  |
>  ------         ------
>  |NR 1|         |NR 2|
>  ------         ------
>       \         /
>        \       /
>         -------
>         | PCI |
>         -------
>
> The problem is that your two writel()s - even though, in your example,
> the spin lock guarantees they are both issued on cpu X - can end up
> with the first one going through NR 1 and the second one going through
> NR 2. If there's contention on NR 1, the write going via NR 2 may hit
> the PCI bridge before the one going via NR 1.
We don't actually have that problem on the Altix. All writes issued
by CPU X will be ordered with respect to each other. But writes by
CPU X and CPU Y will not be, unless an mmiowb() is done by the
original CPU before the second CPU writes. I.e.
CPU X writel
CPU X writel
CPU X mmiowb
CPU Y writel
...
Note that this implies some sort of locking. Also note that if, in
the above, CPU Y did the mmiowb(), that would not work.
jeremy