Message-Id: <20240710135757.25786-2-liulei.rjpt@vivo.com>
Date: Wed, 10 Jul 2024 21:57:53 +0800
From: Lei Liu <liulei.rjpt@...o.com>
To: Sumit Semwal <sumit.semwal@...aro.org>,
	Benjamin Gaignard <benjamin.gaignard@...labora.com>,
	Brian Starkey <Brian.Starkey@....com>,
	John Stultz <jstultz@...gle.com>,
	"T.J. Mercier" <tjmercier@...gle.com>,
	Christian König <christian.koenig@....com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	David Hildenbrand <david@...hat.com>,
	Matthew Wilcox <willy@...radead.org>,
	Muhammad Usama Anjum <usama.anjum@...labora.com>,
	Andrei Vagin <avagin@...gle.com>,
	Ryan Roberts <ryan.roberts@....com>,
	Hugh Dickins <hughd@...gle.com>,
	Kefeng Wang <wangkefeng.wang@...wei.com>,
	linux-media@...r.kernel.org,
	dri-devel@...ts.freedesktop.org,
	linaro-mm-sig@...ts.linaro.org,
	linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org,
	linux-mm@...ck.org
Cc: opensource.kernel@...o.com,
	Lei Liu <liulei.rjpt@...o.com>
Subject: [PATCH 1/2] mm: dmabuf_direct_io: Support direct_io for memory allocated by dmabuf

1. Effects and reasons for the lack of support:

Currently, file data cannot be read into memory allocated by dmabuf
using direct_io. With the increasing use of AI models in mobile
applications, there is a growing need to load model files of up to
3-4GB into mobile memory. At present the only option is buffer_io,
which limits performance: in low-memory scenarios on 12GB-RAM
smartphones, buffer_io requires additional page cache memory, leading
to a 3-4x degradation in read performance with significant
fluctuations.

direct_io reads are not supported because dmabuf memory is currently
mapped into userspace with remap_pfn_range(), which marks the VMA with
VM_PFNMAP. When a direct_io read is attempted, get_user_pages()
rejects VMAs carrying VM_PFNMAP, so no pages are returned and the read
fails.
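
For reference, a simplified paraphrase of the relevant check in
mm/gup.c (check_vma_flags()); this is only a sketch of the branch that
trips here, not the exact kernel code:

  /* mm/gup.c, simplified: GUP refuses raw PFN mappings */
  static int check_vma_flags(struct vm_area_struct *vma,
                             unsigned long gup_flags)
  {
          vm_flags_t vm_flags = vma->vm_flags;

          /*
           * remap_pfn_range() sets VM_IO | VM_PFNMAP on the VMA, so a
           * direct_io read, which pins its user buffer through
           * get_user_pages(), fails here with -EFAULT.
           */
          if (vm_flags & (VM_IO | VM_PFNMAP))
                  return -EFAULT;

          /* ... remaining checks elided ... */
          return 0;
  }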

2. Proposed solution:
  (1) Map memory allocated by dmabuf into userspace with
vm_insert_page() instead of remap_pfn_range(), so that direct_io reads
and writes are supported (a userspace usage sketch follows below).
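
To illustrate the I/O path this enables, here is a hedged userspace
sketch: allocate from the system dma-heap, mmap the returned dmabuf
fd, and pread() the file into that mapping with O_DIRECT. The file
path, the size handling and the load_model_into_dmabuf() helper are
illustrative only and not part of this patch; error handling is
trimmed.

  #define _GNU_SOURCE             /* for O_DIRECT */
  #include <fcntl.h>
  #include <stddef.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <linux/dma-heap.h>

  /* Illustrative helper: read a file of 'len' bytes into dmabuf memory. */
  static int load_model_into_dmabuf(const char *path, size_t len)
  {
          struct dma_heap_allocation_data alloc = {
                  .len = len,                     /* multiple of PAGE_SIZE */
                  .fd_flags = O_RDWR | O_CLOEXEC,
          };
          int heap_fd, file_fd;
          void *buf;
          ssize_t n;

          heap_fd = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
          if (heap_fd < 0 || ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
                  return -1;

          buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                     alloc.fd, 0);
          if (buf == MAP_FAILED)
                  return -1;

          /*
           * O_DIRECT requires offset/length aligned to the filesystem's
           * logical block size; the dmabuf mapping itself is page
           * aligned.  With the old VM_PFNMAP mapping this pread() fails
           * in get_user_pages(); with vm_insert_page() it succeeds.
           */
          file_fd = open(path, O_RDONLY | O_DIRECT);
          n = pread(file_fd, buf, len, 0);

          close(file_fd);
          close(heap_fd);
          return n == (ssize_t)len ? alloc.fd : -1;
  }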

3. Advantages and benefits:
  (1) Faster and more stable read performance.
  (2) Reduced page cache memory usage.
  (3) Less CPU data copying and lower power consumption.

4. Benchmark: time to read a 3.21GB AI model file with direct_io vs.
buffer_io on a 16GB-memory phone, measured in a clean state and with
stressapptest consuming 4GB of memory.

Read of a 3.21GB AI model file on a mobile phone:

Memstress   Rounds    DIO-Time/ms   BIO-Time/ms
              01         1432          2034
Clean         02         1406          2225
              03         1476          2097
           average        1438          2118

Memstress   Rounds    DIO-Time/ms   BIO-Time/ms
              01         1585          4821
Eat 4GB       02         1560          4957
              03         1519          4936
           average        1554          4905

Signed-off-by: Lei Liu <liulei.rjpt@...o.com>
---
 drivers/dma-buf/heaps/system_heap.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 9076d47ed2ef..87547791f9e1 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -203,8 +203,7 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
 
-		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
-				      vma->vm_page_prot);
+		ret = vm_insert_page(vma, addr, page);
 		if (ret)
 			return ret;
 		addr += PAGE_SIZE;
-- 
2.34.1

