Message-ID: <BYAPR12MB3269C5766F553438ECFF2C9BD3C60@BYAPR12MB3269.namprd12.prod.outlook.com>
Date: Wed, 24 Jul 2019 10:04:07 +0000
From: Jose Abreu <Jose.Abreu@...opsys.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Jose Abreu <Jose.Abreu@...opsys.com>
CC: David Miller <davem@...emloft.net>,
"jonathanh@...dia.com" <jonathanh@...dia.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"lists@...h.nu" <lists@...h.nu>,
"Joao.Pinto@...opsys.com" <Joao.Pinto@...opsys.com>,
"alexandre.torgue@...com" <alexandre.torgue@...com>,
"maxime.ripard@...tlin.com" <maxime.ripard@...tlin.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-stm32@...md-mailman.stormreply.com"
<linux-stm32@...md-mailman.stormreply.com>,
"wens@...e.org" <wens@...e.org>,
"mcoquelin.stm32@...il.com" <mcoquelin.stm32@...il.com>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"peppe.cavallaro@...com" <peppe.cavallaro@...com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: RE: [PATCH net-next 3/3] net: stmmac: Introducing support for Page
Pool
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Date: Jul/24/2019, 10:53:10 (UTC+00:00)
> Jose,
> > From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> > Date: Jul/24/2019, 09:54:27 (UTC+00:00)
> >
> > > Hi David,
> > >
> > > > From: Jon Hunter <jonathanh@...dia.com>
> > > > Date: Tue, 23 Jul 2019 13:09:00 +0100
> > > >
> > > > > Setting "iommu.passthrough=1" works for me. However, I am not sure where
> > > > > to go from here, so any ideas you have would be great.
> > > >
> > > > Then definitely we are accessing outside of a valid IOMMU mapping due
> > > > to the page pool support changes.
> > >
> > > Yes. On the netsec driver I did test with and without SMMU to make sure I am not
> > > breaking anything.
> > > Since we map the whole page in the API, I think some offset in the driver causes
> > > that. In any case I'll have another look at page_pool to make sure we are not
> > > missing anything.
> >
> > Ilias, can it be due to this:
> >
> > stmmac_main.c:
> > pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
> >
> > page_pool.c:
> > dma = dma_map_page_attrs(pool->p.dev, page, 0,
> > (PAGE_SIZE << pool->p.order),
> > pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> >
> > "order" will be at least 1, and then mapping the whole page can cause an
> > overlap?
>
> well the API is calling the map with the correct page, page offset (0) and size
> right? I don't see any overlapping here. Aren't we mapping what we allocate?
>
> Why do you need higher order pages? Jumbo frames? Can we do a quick test with
> the order being 0?
Yes, it's for Jumbo frames, which can be as large as 16k.
From Jon's logs it can be seen that buffers are 8k but frames are 1500 max,
so it is using order = 1.
Jon, I was able to replicate (to some extent) your setup:
# dmesg | grep -i arm-smmu
[ 1.337322] arm-smmu 70040000.iommu: probing hardware configuration...
[ 1.337330] arm-smmu 70040000.iommu: SMMUv2 with:
[ 1.337338] arm-smmu 70040000.iommu: stage 1 translation
[ 1.337346] arm-smmu 70040000.iommu: stage 2 translation
[ 1.337354] arm-smmu 70040000.iommu: nested translation
[ 1.337363] arm-smmu 70040000.iommu: stream matching with 128 register groups
[ 1.337374] arm-smmu 70040000.iommu: 1 context banks (0 stage-2 only)
[ 1.337383] arm-smmu 70040000.iommu: Supported page sizes: 0x61311000
[ 1.337393] arm-smmu 70040000.iommu: Stage-1: 48-bit VA -> 48-bit IPA
[ 1.337402] arm-smmu 70040000.iommu: Stage-2: 48-bit IPA -> 48-bit PA
# dmesg | grep -i stmmac
[ 1.344106] stmmaceth 70000000.ethernet: Adding to iommu group 0
[ 1.344233] stmmaceth 70000000.ethernet: no reset control found
[ 1.348276] stmmaceth 70000000.ethernet: User ID: 0x10, Synopsys ID: 0x51
[ 1.348285] stmmaceth 70000000.ethernet: DWMAC4/5
[ 1.348293] stmmaceth 70000000.ethernet: DMA HW capability register supported
[ 1.348302] stmmaceth 70000000.ethernet: RX Checksum Offload Engine supported
[ 1.348311] stmmaceth 70000000.ethernet: TX Checksum insertion supported
[ 1.348320] stmmaceth 70000000.ethernet: TSO supported
[ 1.348328] stmmaceth 70000000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[ 1.348337] stmmaceth 70000000.ethernet: TSO feature enabled
[ 1.348409] libphy: stmmac: probed
[ 4159.140990] stmmaceth 70000000.ethernet eth0: PHY [stmmac-0:01] driver [Generic PHY]
[ 4159.141005] stmmaceth 70000000.ethernet eth0: phy: setting supported 00,00000000,000062ff advertising 00,00000000,000062ff
[ 4159.142359] stmmaceth 70000000.ethernet eth0: No Safety Features support found
[ 4159.142369] stmmaceth 70000000.ethernet eth0: IEEE 1588-2008 Advanced Timestamp supported
[ 4159.142429] stmmaceth 70000000.ethernet eth0: registered PTP clock
[ 4159.142439] stmmaceth 70000000.ethernet eth0: configuring for phy/gmii link mode
[ 4159.142452] stmmaceth 70000000.ethernet eth0: phylink_mac_config: mode=phy/gmii/Unknown/Unknown adv=00,00000000,000062ff pause=10 link=0 an=1
[ 4159.142466] stmmaceth 70000000.ethernet eth0: phy link up gmii/1Gbps/Full
[ 4159.142475] stmmaceth 70000000.ethernet eth0: phylink_mac_config: mode=phy/gmii/1Gbps/Full adv=00,00000000,00000000 pause=0f link=1 an=0
[ 4159.142481] stmmaceth 70000000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
The only missing piece is the NFS boot, which I can't replicate with this
setup. But I did some sanity checks:
Remote Endpoint:
# dd if=/dev/urandom of=output.dat bs=128M count=1
# nc -c 192.168.0.2 1234 < output.dat
# md5sum output.dat
fde9e0818281836e4fc0edfede2b8762 output.dat
DUT:
# nc -l -c -p 1234 > output.dat
# md5sum output.dat
fde9e0818281836e4fc0edfede2b8762 output.dat
---
Thanks,
Jose Miguel Abreu