Message-Id: <20080604.113317.213189757.davem@davemloft.net>
Date: Wed, 04 Jun 2008 11:33:17 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: kkeil@...e.de
Cc: akpm@...ux-foundation.org, netdev@...r.kernel.org,
bugme-daemon@...zilla.kernel.org, f0h3a-kernel@...oo.com
Subject: Re: [Bugme-new] [Bug 10790] New: driver "sunhme" experiences
corrupt packets if machine has more than 2GB of memory
From: Karsten Keil <kkeil@...e.de>
Date: Wed, 4 Jun 2008 10:21:21 +0200
> Yes, it is x86_64, and I think that if you put more than 2G in this machine,
> some physical addresses are beyond the 4 GB boundary.
If that is the case then I think there might be a bug in the
dma_sync_single_for_cpu() implementation of whatever IOMMU is in
use on this x86_64 system.
sunhme has two cases:
1) If the packet is greater than 256 bytes the original packet
is unmapped. So, if soft-iommu uses a bounce buffer,
for example, this should do the necessary copy from
the bounce buffer to the actual SKB data area.
2) If the packet is less than or equal to 256 bytes, the
packet is DMA sync'd using pci_dma_sync_single_for_cpu(),
which should cause soft-iommu or similar to copy from any
bounce buffer to the real buffer, and then we copy the
SKB data into a new freshly allocated SKB and reuse the original
one after calling pci_dma_sync_single_for_device().
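The two cases above are the classic "copy-break" receive pattern. Here is a
minimal user-space sketch of that logic; the names (rx_one, sync_for_cpu,
RX_COPY_THRESHOLD, struct skb) are illustrative stand-ins for the kernel's
pci_* DMA API and sk_buff, not actual sunhme code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define RX_COPY_THRESHOLD 256

/* Toy sk_buff: just a buffer and a length. */
struct skb {
	unsigned char data[2048];
	size_t len;
};

/* Simulated swiotlb bounce page the "device" DMA'd into. */
static unsigned char bounce_buf[2048];

/*
 * Stand-in for both pci_unmap_single() (case 1) and
 * pci_dma_sync_single_for_cpu() (case 2): each must make the
 * device's writes in the bounce buffer visible in the CPU buffer.
 */
static void sync_for_cpu(unsigned char *cpu_buf, size_t len)
{
	memcpy(cpu_buf, bounce_buf, len);
}

/*
 * Case 1 (len > threshold): unmap and hand the originally mapped
 * skb straight up the stack; a fresh skb would replace it in the ring.
 * Case 2 (len <= threshold): sync, copy into a freshly allocated skb,
 * and reuse the original mapped skb in the RX ring.
 */
static struct skb *rx_one(struct skb *mapped, size_t len)
{
	sync_for_cpu(mapped->data, len);	/* unmap or sync_for_cpu */

	if (len > RX_COPY_THRESHOLD) {
		mapped->len = len;
		return mapped;			/* pass original skb up */
	}

	struct skb *copy = malloc(sizeof(*copy));
	memcpy(copy->data, mapped->data, len);
	copy->len = len;
	/* ...then sync_for_device(mapped) and put it back in the ring. */
	return copy;
}
```

If the IOMMU's sync/unmap fails to copy back from the bounce buffer, both
paths above deliver stale data, which would look exactly like corrupt packets.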
I've validated that the DMA addresses and lengths used by those
calls in the sunhme driver are correct, so really my top suspect
would be the IOMMU implementation being used on this system for
these specific kinds of cases.