Message-ID: <276e29fa-1252-4c91-a5e4-eaf367b4fbb0@vivo.com>
Date: Wed, 24 Jul 2024 15:12:55 +0800
From: Huan Yang <link@...o.com>
To: Christoph Hellwig <hch@...radead.org>,
 Christian König <christian.koenig@....com>,
 Sumit Semwal <sumit.semwal@...aro.org>,
 Benjamin Gaignard <benjamin.gaignard@...labora.com>,
 Brian Starkey <Brian.Starkey@....com>, John Stultz <jstultz@...gle.com>,
 "T.J. Mercier" <tjmercier@...gle.com>, linux-media@...r.kernel.org,
 dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org,
 linux-kernel@...r.kernel.org, opensource.kernel@...o.com
Subject: Re: [PATCH 1/2] dma-buf: heaps: DMA_HEAP_IOCTL_ALLOC_READ_FILE
 framework


On 2024/7/18 1:03, Christoph Hellwig wrote:
> copy_file_range only work inside the same file system anyway, so
> it is completely irrelevant here.
>
> What should work just fine is using sendfile (or splice if you like it
> complicated) to write TO the dma buf.  That just iterates over the page
> cache on the source file and calls ->write_iter from the page cache
> pages.  Of course that requires that you actually implement
> ->write_iter, but given that dmabufs support mmaping there I can't
> see why you should not be able to write to it.

Today I tested reading a large file into a dma-buf with sendfile. Here are
two problems I found when the source file is opened with O_DIRECT.

1. sendfile/splice moves data between the read and write sides through a
pipe. Even when the read side does not populate the page cache, an
equivalent amount of CPU copying is still required. This shows up as a
noticeable performance degradation when reading large files.

2. The pipe capacity is 64 KiB (on my phone and in my arch test), so each
I/O reads and then copies at most 64 KiB, which results in poor I/O
performance.

Based on my tests, an O_DIRECT read of a 3 GB file takes 7 s on average.
The trace shows the task bouncing between runnable and running, with some
I/O in between.

Reading the same large file into the dma-buf with sendfile in buffered
mode takes 2.3 s, which is normal.

So perhaps sendfile is not a good way to give dma-buf direct I/O support?


>
> Reading FROM the dma buf in that fashion should also work if you provide
> a ->read_iter; wire up ->splice_read to copy_splice_read so that it
> doesn't require any page cache.

We currently care more about reading a file into a dma-buf than writing to
one. :)
