Message-Id: <20210630013421.735092-1-john.stultz@linaro.org>
Date:   Wed, 30 Jun 2021 01:34:16 +0000
From:   John Stultz <john.stultz@...aro.org>
To:     lkml <linux-kernel@...r.kernel.org>
Cc:     John Stultz <john.stultz@...aro.org>,
        Daniel Vetter <daniel@...ll.ch>,
        Christian Koenig <christian.koenig@....com>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Liam Mark <lmark@...eaurora.org>,
        Chris Goldsworthy <cgoldswo@...eaurora.org>,
        Laura Abbott <labbott@...nel.org>,
        Brian Starkey <Brian.Starkey@....com>,
        Hridya Valsaraju <hridya@...gle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Sandeep Patil <sspatil@...gle.com>,
        Daniel Mentz <danielmentz@...gle.com>,
        Ørjan Eide <orjan.eide@....com>,
        Robin Murphy <robin.murphy@....com>,
        Ezequiel Garcia <ezequiel@...labora.com>,
        Simon Ser <contact@...rsion.fr>,
        James Jones <jajones@...dia.com>, linux-media@...r.kernel.org,
        dri-devel@...ts.freedesktop.org
Subject: [PATCH v9 0/5] Generic page pool & deferred freeing for system dmabuf heaps

After an unfortunately long pause (covid work-schedule burnout),
I wanted to revive and resubmit this series.

As before, the point of this series is to add both a page
pool and deferred freeing to the DMA-BUF system heap to
improve allocation performance (so that it can match or beat
the old ION system heap's performance).

The combination of the page pool along with deferred freeing
allows us to offload page-zeroing out of the allocation hot
path. This was done originally with ION, and this patch series
allows the DMA-BUF system heap to match ION's system heap
allocation performance in a simple microbenchmark [1] (ION
re-added to the kernel for comparison, running on an x86 VM
image):

./dmabuf-heap-bench -i 0 1 system
Testing dmabuf system vs ion heaptype 0 (flags: 0x1)
---------------------------------------------
dmabuf heap: alloc 4096 bytes 5000 times in 79314244 ns          15862 ns/call
ion heap:    alloc 4096 bytes 5000 times in 107390769 ns         21478 ns/call
dmabuf heap: alloc 1048576 bytes 5000 times in 259083419 ns      51816 ns/call
ion heap:    alloc 1048576 bytes 5000 times in 340497344 ns      68099 ns/call
dmabuf heap: alloc 8388608 bytes 5000 times in 2603105563 ns     520621 ns/call
ion heap:    alloc 8388608 bytes 5000 times in 3613592860 ns     722718 ns/call
dmabuf heap: alloc 33554432 bytes 5000 times in 12212492979 ns   2442498 ns/call
ion heap:    alloc 33554432 bytes 5000 times in 14584157792 ns   2916831 ns/call
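
To give a feel for where the win comes from: instead of zeroing
pages at allocation time, freed buffers are queued to a worker that
zeroes them and refills the page pool, so the next allocation can
take pre-zeroed pages without clearing them inline. A minimal sketch
of the idea (not the actual code from the deferred-free-helper patch;
free_list, free_lock and pool_add() are illustrative stand-ins):

struct deferred_item {
        struct list_head list;
        struct page *page;
        unsigned int order;
};

static void deferred_free_worker(struct work_struct *work)
{
        struct deferred_item *item, *tmp;
        unsigned int i;
        LIST_HEAD(todo);

        /* Pull everything queued so far (free_list/free_lock assumed). */
        spin_lock(&free_lock);
        list_splice_init(&free_list, &todo);
        spin_unlock(&free_lock);

        list_for_each_entry_safe(item, tmp, &todo, list) {
                list_del(&item->list);
                /* Zero off the hot path, one 4K page at a time. */
                for (i = 0; i < (1U << item->order); i++)
                        clear_highpage(item->page + i);
                /* Refill the page pool; pool_add() is a stand-in. */
                pool_add(item->order, item->page);
                kfree(item);
        }
}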


Daniel didn't like earlier attempts to re-use the network
page-pool code to achieve this, and suggested the ttm_pool be
used instead, so this series pulls the page pool functionality
out of the ttm_pool logic and creates a generic page pool
that can be shared.
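
The interface the shared pool exposes is intentionally small,
something along these lines (names and signatures here are
illustrative, not necessarily an exact match for what patch 1 puts
in include/drm/page_pool.h):

struct drm_page_pool;

/* One pool per page order; 'free_page' is called when the shrinker
 * (or pool teardown) actually hands a page back to the system. */
void drm_page_pool_init(struct drm_page_pool *pool, unsigned int order,
                        void (*free_page)(struct drm_page_pool *pool,
                                          struct page *p));

/* Free path: stash the page in the pool instead of releasing it. */
void drm_page_pool_add(struct drm_page_pool *pool, struct page *p);

/* Alloc path: try to reuse a cached page; NULL means fall back to
 * alloc_pages(). */
struct page *drm_page_pool_remove(struct drm_page_pool *pool);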

New in v9:
* Tried to address Christian König's feedback on the page pool
  changes (Kerneldoc, static functions, locking issues, duplicative
  order tracking)
* Fix up Kconfig dependency issue as Reported-by:
  kernel test robot <lkp@...el.com>
* Fix compiler warning Reported-by:
  kernel test robot <lkp@...el.com>

I know Christian had some less specific feedback on the deferred free
work that I'd like to revisit, but I wanted to restart the discussion
with this new series, rather than trying to dredge up and reply to
a ~4mo old thread.

Input would be greatly appreciated. Testing as well, as I don't
have any development hardware that utilizes the ttm pool.

Thanks
-john

[1] https://android.googlesource.com/platform/system/memory/libdmabufheap/+/refs/heads/master/tests/dmabuf_heap_bench.c

Cc: Daniel Vetter <daniel@...ll.ch>
Cc: Christian Koenig <christian.koenig@....com>
Cc: Sumit Semwal <sumit.semwal@...aro.org>
Cc: Liam Mark <lmark@...eaurora.org>
Cc: Chris Goldsworthy <cgoldswo@...eaurora.org>
Cc: Laura Abbott <labbott@...nel.org>
Cc: Brian Starkey <Brian.Starkey@....com>
Cc: Hridya Valsaraju <hridya@...gle.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>
Cc: Sandeep Patil <sspatil@...gle.com>
Cc: Daniel Mentz <danielmentz@...gle.com>
Cc: Ørjan Eide <orjan.eide@....com>
Cc: Robin Murphy <robin.murphy@....com>
Cc: Ezequiel Garcia <ezequiel@...labora.com>
Cc: Simon Ser <contact@...rsion.fr>
Cc: James Jones <jajones@...dia.com>
Cc: linux-media@...r.kernel.org
Cc: dri-devel@...ts.freedesktop.org

John Stultz (5):
  drm: Add a sharable drm page-pool implementation
  drm: ttm_pool: Rework ttm_pool to use drm_page_pool
  dma-buf: system_heap: Add drm pagepool support to system heap
  dma-buf: heaps: Add deferred-free-helper library code
  dma-buf: system_heap: Add deferred freeing to the system heap

 drivers/dma-buf/heaps/Kconfig                |   5 +
 drivers/dma-buf/heaps/Makefile               |   1 +
 drivers/dma-buf/heaps/deferred-free-helper.c | 138 +++++++++
 drivers/dma-buf/heaps/deferred-free-helper.h |  55 ++++
 drivers/dma-buf/heaps/system_heap.c          |  46 ++-
 drivers/gpu/drm/Kconfig                      |   4 +
 drivers/gpu/drm/Makefile                     |   2 +
 drivers/gpu/drm/page_pool.c                  | 297 +++++++++++++++++++
 drivers/gpu/drm/ttm/ttm_pool.c               | 167 ++---------
 include/drm/page_pool.h                      |  68 +++++
 include/drm/ttm/ttm_pool.h                   |  14 +-
 11 files changed, 643 insertions(+), 154 deletions(-)
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h
 create mode 100644 drivers/gpu/drm/page_pool.c
 create mode 100644 include/drm/page_pool.h

-- 
2.25.1
