Message-ID: <20240730075755.10941-3-link@vivo.com>
Date: Tue, 30 Jul 2024 15:57:46 +0800
From: Huan Yang <link@...o.com>
To: Sumit Semwal <sumit.semwal@...aro.org>,
	Benjamin Gaignard <benjamin.gaignard@...labora.com>,
	Brian Starkey <Brian.Starkey@....com>,
	John Stultz <jstultz@...gle.com>,
	"T.J. Mercier" <tjmercier@...gle.com>,
	Christian König <christian.koenig@....com>,
	linux-media@...r.kernel.org,
	dri-devel@...ts.freedesktop.org,
	linaro-mm-sig@...ts.linaro.org,
	linux-kernel@...r.kernel.org
Cc: opensource.kernel@...o.com,
	Huan Yang <link@...o.com>
Subject: [PATCH v2 2/5] dma-buf: heaps: Introduce async alloc read ops

The DMA_HEAP_ALLOC_AND_READ_FILE heap flag patch enables us to
synchronously read files using direct I/O.

This approach saves CPU copying and avoids a certain degree of
memory thrashing (page cache generation and reclamation).

When dealing with large file sizes, the benefits of this approach become
particularly significant.

However, beyond saving system resources, there are also ways to
improve performance:

For a large file, for example a 7B AI model of around 3.4GB,
allocating the DMA-BUF memory takes a relatively long time. Waiting
for the allocation to complete before reading the file adds to the
overall time consumption, so the total time for DMA-BUF allocation
plus file read is
   T(total) = T(alloc) + T(I/O)

However, if we change our approach, we do not need to wait for the
DMA-BUF allocation to complete before initiating I/O. During the
allocation process, we already hold a portion of the pages, so
waiting for all subsequent page allocations to finish before starting
file reads leaves the already-allocated pages sitting idle.

Pages are allocated sequentially, and the file is read sequentially
as well, with content and size corresponding to the file. This means
that the memory location of each page, and the file offset whose
content it will hold, can be determined at allocation time.

However, to fully leverage I/O performance, it is best to gather a
certain number of pages before initiating a batched read.

This patch only adds an allocate_async_read heap op; it does not
include the infrastructure for completing async reads or a
corresponding heap implementation. When a heap does not implement the
allocate_async_read op, the file is read synchronously after the
dma-buf has been allocated.

Signed-off-by: Huan Yang <link@...o.com>
---
 drivers/dma-buf/dma-heap.c | 14 ++++++++++----
 include/linux/dma-heap.h   |  8 ++++++--
 2 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index f19b944d4eaa..91e241763ebc 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -131,21 +131,27 @@ static int dma_heap_buffer_alloc_and_read(struct dma_heap *heap, int file_fd,
 	struct dma_heap_file heap_file;
 	struct dma_buf *dmabuf;
 	int ret, fd;
+	bool async_read = heap->ops->allocate_async_read ? true : false;
 
 	ret = init_dma_heap_file(&heap_file, file_fd);
 	if (ret)
 		return ret;
 
-	dmabuf = heap->ops->allocate(heap, heap_file.fsize, fd_flags,
-				     heap_flags);
+	if (async_read)
+		dmabuf = heap->ops->allocate_async_read(heap, &heap_file,
+							fd_flags, heap_flags);
+	else
+		dmabuf = heap->ops->allocate(heap, heap_file.fsize, fd_flags,
+					     heap_flags);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
 		goto error_file;
 	}
 
-	ret = dma_heap_read_file_sync(dmabuf, &heap_file);
-	if (ret)
+	if (!async_read && dma_heap_read_file_sync(dmabuf, &heap_file)) {
+		ret = -EIO;
 		goto error_put;
+	}
 
 	ret = dma_buf_fd(dmabuf, fd_flags);
 	if (ret < 0)
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 064bad725061..824acbf5a1bc 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -13,11 +13,12 @@
 #include <linux/types.h>
 
 struct dma_heap;
+struct dma_heap_file;
 
 /**
  * struct dma_heap_ops - ops to operate on a given heap
- * @allocate:		allocate dmabuf and return struct dma_buf ptr
- *
+ * @allocate:			allocate dmabuf and return struct dma_buf ptr
+ * @allocate_async_read:	allocate and async read file.
  * allocate returns dmabuf on success, ERR_PTR(-errno) on error.
  */
 struct dma_heap_ops {
@@ -25,6 +26,9 @@ struct dma_heap_ops {
 				    unsigned long len,
 				    u32 fd_flags,
 				    u64 heap_flags);
+	struct dma_buf *(*allocate_async_read)(struct dma_heap *heap,
+					       struct dma_heap_file *heap_file,
+					       u32 fd_flags, u64 heap_flags);
 };
 
 /**
-- 
2.45.2

