Message-Id: <20201228234955.190858-1-dgilbert@interlog.com>
Date: Mon, 28 Dec 2020 18:49:51 -0500
From: Douglas Gilbert <dgilbert@...erlog.com>
To: linux-scsi@...r.kernel.org, linux-block@...r.kernel.org,
target-devel@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: martin.petersen@...cle.com, jejb@...ux.vnet.ibm.com,
bostroesser@...il.com, bvanassche@....org, ddiss@...e.de
Subject: [PATCH v5 0/4] scatterlist: add new capabilities
Scatter-gather lists (sgl_s) are frequently used as data carriers in
the block layer. For example, the SCSI and NVMe subsystems interchange
data with the block layer using sgl_s. The sgl API is declared in
<linux/scatterlist.h>.
The author has extended these transient sgl use cases to a store (i.e.
a ramdisk) in the scsi_debug driver. Another potential user of sgl_s
is the target subsystem. Once this extra step is taken, the need to
copy between sgl_s becomes apparent. This patchset adds
sgl_copy_sgl(), sgl_compare_sgl() and sgl_memset().
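For orientation, here is a rough sketch of how such helpers might be
declared, modelled on the style of the existing sg_pcopy_to_buffer()
and sg_zero_buffer() helpers. The parameter names and exact signatures
below are illustrative only; the real interfaces are in patches 2 to 4.

/* Illustrative declarations only -- not copied from the patches. */
#include <linux/scatterlist.h>
#include <linux/types.h>

/* Copy n_bytes from s_sgl (starting s_skip bytes in) into d_sgl at
 * d_skip; returns the number of bytes actually copied. */
size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents,
                    size_t d_skip, struct scatterlist *s_sgl,
                    unsigned int s_nents, size_t s_skip, size_t n_bytes);

/* Byte-wise compare of two sgl regions; true if they are equal. */
bool sgl_compare_sgl(struct scatterlist *x_sgl, unsigned int x_nents,
                     size_t x_skip, struct scatterlist *y_sgl,
                     unsigned int y_nents, size_t y_skip, size_t n_bytes);

/* Write val into n_bytes of sgl starting skip bytes in; returns the
 * number of bytes actually (over)written. */
size_t sgl_memset(struct scatterlist *sgl, unsigned int nents, size_t skip,
                  u8 val, size_t n_bytes);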
The existing sgl_alloc_order() function can be seen as a replacement
for vmalloc() for large, long-term allocations. For what seems like
no good reason, sgl_alloc_order() currently restricts its total
allocation to less than or equal to 4 GiB. vmalloc() has no such
restriction.
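As a hedged illustration of that use case, the fragment below allocates
a multi-gigabyte store from order-3 chunks using the existing
sgl_alloc_order()/sgl_free_order() API (CONFIG_SGL_ALLOC). With the
4 GiB cap removed, a call like this is expected to succeed; today
sgl_alloc_order() refuses it.

#include <linux/scatterlist.h>
#include <linux/gfp.h>

/*
 * Sketch only: allocate a 6 GiB scatter-gather backed store built from
 * order-3 (8 page) chunks, much as a ramdisk-style store might.
 * *nents receives the number of scatterlist entries allocated.
 */
static struct scatterlist *example_alloc_store(unsigned int *nents)
{
        return sgl_alloc_order(6ULL << 30 /* 6 GiB */, 3 /* order */,
                               false /* not chainable */, GFP_KERNEL,
                               nents);
}

/* ... and when finished with the store: sgl_free_order(sgl, 3); */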
Changes since v4 [posted 20201105]:
- rebase on lk 5.11.0-rc2
Changes since v3 [posted 20201019]:
- reinstate check on integer overflow of the nent calculation in
sgl_alloc_order(). Do it in such a way as to not limit the
overall sgl size to 4 GiB (a sketch of the idea follows this
changelog section)
- introduce sgl_compare_sgl_idx() helper function that, if
requested and if a miscompare is detected, will yield the byte
index of the first miscompare.
- add Reviewed-by tags from Bodo Stroesser
- rebase on lk 5.10.0-rc2 [was on lk 5.9.0]
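A rough sketch of the idea behind that overflow check (the helper name
below is made up; patch 1 contains the actual change): do the nent
arithmetic in 64 bits so the byte count itself is not capped, and only
reject a request whose chunk count cannot be represented in the
unsigned int that scatterlist bookkeeping uses.

#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/mm.h>           /* PAGE_SHIFT */

/*
 * Illustrative only: number of (PAGE_SIZE << order) sized chunks needed
 * to hold length bytes, or 0 if that count would overflow an unsigned
 * int. Doing the arithmetic in 64 bits is what avoids a 4 GiB cap on
 * the total length.
 */
static unsigned int example_nents_for_length(unsigned long long length,
                                             unsigned int order)
{
        unsigned int chunk_shift = PAGE_SHIFT + order;
        unsigned long long nent = length >> chunk_shift;

        if (length & ((1ULL << chunk_shift) - 1))
                ++nent;                 /* partial last chunk */
        return nent > UINT_MAX ? 0 : (unsigned int)nent;
}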
Changes since v2 [posted 20201018]:
- remove unneeded lines from sgl_memset() definition.
- change sg_zero_buffer() to call sgl_memset() as the former
is a subset.
Changes since v1 [posted 20201016]:
- Bodo Stroesser pointed out a problem with the nesting of
kmap_atomic() [called via sg_miter_next()] and kunmap_atomic()
calls [called via sg_miter_stop()] and proposed a solution that
simplifies the previous code.
- the new implementation of the three functions has shorter periods
when pre-emption is disabled (but has more of them). This should
make operations on large sgl_s more pre-emption "friendly" with
a relatively small performance hit (a sketch of the iteration
pattern follows this changelog section)
- the return type of sgl_memset() changed from void to size_t: the
number of bytes actually (over)written. That number is needed
internally anyway, so it may as well be returned as it may be
useful to the caller.
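To make the pre-emption point concrete, here is a much simplified
sketch (not the patch code; the real sgl_copy_sgl() in patch 2 also
handles skip offsets and other details) of copying between two sgl_s
with a pair of sg_mapping_iter objects. With SG_MITER_ATOMIC,
sg_miter_next() calls kmap_atomic() and sg_miter_stop() the matching
kunmap_atomic(), so the stops are issued in reverse order of the nexts
and each window with pre-emption disabled is short.

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

static size_t example_copy(struct scatterlist *d_sgl, unsigned int d_nents,
                           struct scatterlist *s_sgl, unsigned int s_nents,
                           size_t n_bytes)
{
        struct sg_mapping_iter d_iter, s_iter;
        size_t copied = 0;

        sg_miter_start(&d_iter, d_sgl, d_nents,
                       SG_MITER_ATOMIC | SG_MITER_TO_SG);
        sg_miter_start(&s_iter, s_sgl, s_nents,
                       SG_MITER_ATOMIC | SG_MITER_FROM_SG);

        while (copied < n_bytes && sg_miter_next(&s_iter)) {
                size_t len;

                if (!sg_miter_next(&d_iter)) {
                        sg_miter_stop(&s_iter);
                        break;
                }
                len = min3(s_iter.length, d_iter.length, n_bytes - copied);
                memcpy(d_iter.addr, s_iter.addr, len);
                copied += len;
                /* tell the iterators how much was actually used */
                d_iter.consumed = len;
                s_iter.consumed = len;
                /* unmap in reverse order: kunmap_atomic() is LIFO */
                sg_miter_stop(&d_iter);
                sg_miter_stop(&s_iter);
        }
        return copied;
}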
This patchset is against lk 5.11.0-rc2
Douglas Gilbert (4):
sgl_alloc_order: remove 4 GiB limit, sgl_free() warning
scatterlist: add sgl_copy_sgl() function
scatterlist: add sgl_compare_sgl() function
scatterlist: add sgl_memset()
include/linux/scatterlist.h | 16 +++
lib/scatterlist.c | 244 +++++++++++++++++++++++++++++++++---
2 files changed, 243 insertions(+), 17 deletions(-)
--
2.25.1