Message-ID: <CAPcyv4h10vAw3rwNo25+RaZwFsnrjWrpoktj+Yg8eR1QDDW2tw@mail.gmail.com>
Date:	Tue, 21 Jan 2014 01:44:34 -0800
From:	Dan Williams <dan.j.williams@...el.com>
To:	saeed bishara <saeed.bishara@...il.com>
Cc:	"dmaengine@...r.kernel.org" <dmaengine@...r.kernel.org>,
	Alexander Duyck <alexander.h.duyck@...el.com>,
	Dave Jiang <dave.jiang@...el.com>,
	Vinod Koul <vinod.koul@...el.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	David Whipple <whipple@...uredatainnovations.ch>,
	lkml <linux-kernel@...r.kernel.org>,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH v3 1/4] net_dma: simple removal

On Fri, Jan 17, 2014 at 12:16 PM, saeed bishara <saeed.bishara@...il.com> wrote:
> Dan,
>
> isn't this issue similar to direct io case?
> can you please look at the following article
> http://lwn.net/Articles/322795/

I guess it's similar, but the NET_DMA dma api violation is more
blatant.  The same thread that requested DMA is also writing to those
same pages with the cpu.  The fix is either guaranteeing that only the
dma engine ever touches the gup'd pages or synchronizing dma before
every cpu fallback.
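
A minimal sketch of the second option, assuming the receive path keeps
the dmaengine channel and the cookie of the last descriptor submitted
against the gup'd pages (the helper below is illustrative, not the
actual tcp_recvmsg code):

#include <linux/dmaengine.h>
#include <linux/uaccess.h>

/*
 * Before the cpu fallback writes pages that were handed to the dma
 * engine via get_user_pages(), flush and wait for every copy issued
 * so far on the channel.
 */
static int cpu_fallback_copy(struct dma_chan *chan, dma_cookie_t last_cookie,
			     void __user *dst, const void *src, size_t len)
{
	if (chan) {
		dma_async_issue_pending(chan);
		if (dma_sync_wait(chan, last_cookie) != DMA_COMPLETE)
			return -EIO;
	}

	/* only now is it safe for the cpu to touch the same pages */
	return copy_to_user(dst, src, len) ? -EFAULT : 0;
}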

> Regarding the performance improvement using NET_DMA, I don't have concrete
> numbers, but it should be around 15-20%.  My system is i/o coherent.

That sounds too high... is that throughput or cpu utilization?  It
sounds high because NET_DMA also makes the data cache cold while the
cpu copy warms the data before handing it to the application.

Can you measure relative numbers and share your testing details?  You
will need to fix the data corruption and verify that the performance
advantage is still there before proposing NET_DMA be restored.

I have a new dma_debug capability in Andrew's tree that can help you
identify holes in the implementation.

http://ozlabs.org/~akpm/mmots/broken-out/dma-debug-introduce-debug_dma_assert_idle.patch
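
For context, the check it adds is a single assertion,
debug_dma_assert_idle(page), hooked into the copy-on-write path; a rough
illustration of what it catches (the hook site below is made up for the
example, the real one lives in mm code):

#include <linux/dma-debug.h>
#include <linux/highmem.h>

/*
 * Illustrative only: warn if a page the cpu is about to write still
 * has an active dma_map_page() mapping against it.
 */
static void cpu_touches_page(struct page *page)
{
	debug_dma_assert_idle(page);	/* fires if dma is still outstanding */

	clear_highpage(page);		/* any cpu write to the page */
}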

--
Dan

>
> saeed
>
> On Wed, Jan 15, 2014 at 11:33 PM, Dan Williams <dan.j.williams@...el.com> wrote:
>> On Wed, Jan 15, 2014 at 1:31 PM, Dan Williams <dan.j.williams@...el.com> wrote:
>>> On Wed, Jan 15, 2014 at 1:20 PM, saeed bishara <saeed.bishara@...il.com> wrote:
>>>> Hi Dan,
>>>>
>>>> I'm using net_dma on my system and I achieve a meaningful performance
>>>> boost when running iperf receive.
>>>>
>>>> As far as I know, net_dma is used by many embedded systems out
>>>> there, so removing it might affect their performance.
>>>> Can you please elaborate on the exact scenario that causes the memory corruption?
>>>>
>>>> Is the scenario mentioned here caused by a "real life" application, or
>>>> is this more of a theoretical issue found through manual testing?  I was
>>>> trying to find the thread describing the failing scenario and couldn't
>>>> find it; any pointer would be appreciated.
>>>
>>> Did you see the referenced commit?
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=77873803363c
>>>
>>> This is a real issue in that any app that forks() while receiving data
>>> can cause the dma data to be lost.  The problem is that the copy
>>> operation falls back to the cpu at many locations.  Any one of those
>>> instances could touch a mapped page and trigger a copy-on-write event.
>>> The dma completes to the wrong location.
>>>
>>
>> Btw, do you have benchmark data showing that NET_DMA is beneficial on
>> these platforms?  I would have expected worse performance on platforms
>> without i/o coherent caches.
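
For anyone trying to reproduce the scenario described above, the
triggering pattern needs nothing exotic from user space.  A rough
illustration follows; the offload itself is invisible to the
application, and the AF_UNIX socketpair here only stands in for the TCP
receive path that NET_DMA actually hooks:

#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

static char buf[1 << 16];

/*
 * Pattern only: recv() pins the buffer's pages for the dma engine; a
 * fork() while a copy is in flight marks them copy-on-write, so a later
 * cpu fallback write breaks the sharing and the still-pending dma
 * completes into the old, now orphaned, page.
 */
int main(void)
{
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 1;

	if (fork() == 0) {		/* child: keep the receiver busy */
		for (;;)
			write(sv[1], buf, sizeof(buf));
	}

	for (;;) {
		recv(sv[0], buf, sizeof(buf), 0);  /* pages may go to the dma engine */
		if (fork() == 0)		   /* fork mid-stream: buf goes COW */
			_exit(0);
		wait(NULL);
	}
}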