Message-ID: <CAMAG_efOD03wLtetfNAfymsCs6kVEqoJx6e7qY2txBBMDgbmfQ@mail.gmail.com>
Date: Fri, 17 Jan 2014 22:16:06 +0200
From: saeed bishara <saeed.bishara@...il.com>
To: Dan Williams <dan.j.williams@...el.com>
Cc: "dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
Vinod Koul <vinod.koul@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
David Whipple <whipple@...uredatainnovations.ch>,
lkml <linux-kernel@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH v3 1/4] net_dma: simple removal
Dan,
Isn't this issue similar to the direct I/O case?
Can you please take a look at the following article:
http://lwn.net/Articles/322795/
Regarding the performance improvement from NET_DMA, I don't have concrete
numbers, but it should be around 15-20%. My system is I/O coherent.
saeed
On Wed, Jan 15, 2014 at 11:33 PM, Dan Williams <dan.j.williams@...el.com> wrote:
> On Wed, Jan 15, 2014 at 1:31 PM, Dan Williams <dan.j.williams@...el.com> wrote:
>> On Wed, Jan 15, 2014 at 1:20 PM, saeed bishara <saeed.bishara@...il.com> wrote:
>>> Hi Dan,
>>>
>>> I'm using net_dma on my system and I see a meaningful performance
>>> boost when running an iperf receive test.
>>>
>>> As far as I know, net_dma is used by many embedded systems out
>>> there and might affect their performance.
>>> Can you please elaborate on the exact scenario that causes the memory corruption?
>>>
>>> Is the scenario mentioned here caused by a "real life" application, or
>>> is it more of a theoretical issue found through manual testing? I was
>>> trying to find the thread describing the failing scenario and couldn't
>>> find it; any pointer would be appreciated.
>>
>> Did you see the referenced commit?
>>
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=77873803363c
>>
>> This is a real issue in that any app that fork()s while receiving data
>> can cause the DMA'd data to be lost. The problem is that the copy
>> operation falls back to the CPU at many locations. Any one of those
>> instances could touch a mapped page and trigger a copy-on-write event,
>> and the DMA then completes to the wrong location.
>>
>
> Btw, do you have benchmark data showing that NET_DMA is beneficial on
> these platforms? I would have expected worse performance on platforms
> without i/o coherent caches.
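
For anyone trying to picture the window Dan describes above, here is a rough,
hypothetical C sketch of the application-side pattern that is exposed. It is
not a reproducer -- NET_DMA only offloads the copy on the real TCP receive
path, so the AF_UNIX socketpair below just stands in for that -- but it shows
the fork()-while-receiving shape the commit message talks about:

/*
 * Hedged, hypothetical sketch -- not a reproducer.  It only shows the
 * application-side pattern that the NET_DMA fork() race exposes: one
 * thread sits in recv() on a long-lived user buffer (the copy NET_DMA
 * offloads to the DMA engine on a real TCP receive), while the process
 * fork()s.  The fork marks the buffer's pages copy-on-write; if a CPU
 * fallback copy or a write by either process breaks COW while an async
 * DMA copy is still in flight, the completion lands in a page the
 * receiver no longer maps and the data is lost.
 */
#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUF_SZ (64 * 1024)

static int rx_fd = -1;
static char rxbuf[BUF_SZ];          /* the buffer at risk after fork() */
static char txbuf[BUF_SZ];

/* Receiver thread: large recv()s into a reused user buffer are exactly
 * what NET_DMA tries to offload on a TCP socket. */
static void *rx_thread(void *arg)
{
        (void)arg;
        while (recv(rx_fd, rxbuf, sizeof(rxbuf), 0) > 0)
                ;                   /* consume data */
        return NULL;
}

int main(void)
{
        int sv[2];
        pthread_t rx;
        pid_t pid;

        /* An AF_UNIX socketpair stands in for the TCP connection here,
         * since NET_DMA itself only kicks in on the TCP receive path. */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
                return 1;
        rx_fd = sv[0];

        if (pthread_create(&rx, NULL, rx_thread, NULL))
                return 1;

        /* Keep bulk data flowing toward the receiver... */
        for (int i = 0; i < 256; i++)
                if (write(sv[1], txbuf, sizeof(txbuf)) < 0)
                        break;

        /* ...and fork while the receive is in progress.  On a NET_DMA
         * kernel this is the window commit 77873803363c describes:
         * rxbuf's pages go copy-on-write with a DMA copy possibly still
         * outstanding. */
        pid = fork();
        if (pid == 0) {
                execlp("true", "true", (char *)NULL);
                _exit(127);
        }
        if (pid > 0)
                waitpid(pid, NULL, 0);

        close(sv[1]);               /* EOF lets the receiver thread exit */
        pthread_join(rx, NULL);
        close(sv[0]);
        return 0;
}

Build with gcc -pthread; over a real TCP socket on a CONFIG_NET_DMA kernel,
this is the shape of workload where the offloaded copy can end up in the
wrong page after the COW break.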