Message-Id: <20220426101241.30100-2-nj.shetty@samsung.com>
Date: Tue, 26 Apr 2022 15:42:29 +0530
From: Nitesh Shetty <nj.shetty@...sung.com>
To: unlisted-recipients:; (no To-header on input)
Cc: chaitanyak@...dia.com, linux-block@...r.kernel.org,
linux-scsi@...r.kernel.org, dm-devel@...hat.com,
linux-nvme@...ts.infradead.org, linux-fsdevel@...r.kernel.org,
axboe@...nel.dk, msnitzer@...hat.com, bvanassche@....org,
martin.petersen@...cle.com, hare@...e.de, kbusch@...nel.org,
hch@....de, Frederick.Knight@...app.com, osandov@...com,
lsf-pc@...ts.linux-foundation.org, djwong@...nel.org,
josef@...icpanda.com, clm@...com, dsterba@...e.com, tytso@....edu,
jack@...e.com, nitheshshetty@...il.com, gost.dev@...sung.com,
Nitesh Shetty <nj.shetty@...sung.com>,
Kanchan Joshi <joshi.k@...sung.com>,
Arnav Dawn <arnav.dawn@...sung.com>,
Alasdair Kergon <agk@...hat.com>,
Mike Snitzer <snitzer@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>,
James Smart <james.smart@...adcom.com>,
Chaitanya Kulkarni <kch@...dia.com>,
Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
Naohiro Aota <naohiro.aota@....com>,
Johannes Thumshirn <jth@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
linux-kernel@...r.kernel.org
Subject: [PATCH v4 01/10] block: Introduce queue limits for copy-offload
support
Add device limits as sysfs entries,
- copy_offload (RW)
- copy_max_bytes (RW)
- copy_max_hw_bytes (RO)
- copy_max_range_bytes (RW)
- copy_max_range_hw_bytes (RO)
- copy_max_nr_ranges (RW)
- copy_max_nr_ranges_hw (RO)
The above limits help to split the copy payload in the block layer.
copy_offload: set to 1 to use copy offload, or 0 to use emulation.
copy_max_bytes: maximum total length of a copy in a single payload.
copy_max_range_bytes: maximum length of a single entry.
copy_max_nr_ranges: maximum number of entries in a payload.
copy_max_*_hw_*: reflect the device-supported maximum limits.
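
Below is a minimal sketch, for illustration only and not part of this
patch, of how a low-level driver might register these limits with the
helpers added in blk-settings.c. The function name and the limit values
are hypothetical:

	/* hypothetical driver init path; the values are examples only */
	static void foo_setup_copy_limits(struct request_queue *q)
	{
		/* up to 1 MiB per copy payload (units: 512-byte sectors) */
		blk_queue_max_copy_sectors(q, 2048);
		/* up to 128 KiB per source/destination range */
		blk_queue_max_copy_range_sectors(q, 256);
		/* at most 8 ranges per copy payload */
		blk_queue_max_copy_nr_ranges(q, 8);
		/* advertise offload; the copy_offload sysfs file can toggle it */
		blk_queue_flag_set(QUEUE_FLAG_COPY, q);
	}

With these hypothetical values, copy_max_hw_bytes would report 1048576
and copy_max_nr_ranges_hw would report 8.
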
Signed-off-by: Nitesh Shetty <nj.shetty@...sung.com>
Signed-off-by: Kanchan Joshi <joshi.k@...sung.com>
Signed-off-by: Arnav Dawn <arnav.dawn@...sung.com>
---
Documentation/ABI/stable/sysfs-block | 83 ++++++++++++++++
block/blk-settings.c | 59 ++++++++++++
block/blk-sysfs.c | 138 +++++++++++++++++++++++++++
include/linux/blkdev.h | 13 +++
4 files changed, 293 insertions(+)
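
Not part of the patch: a small, hypothetical userspace sketch showing how
the new sysfs knobs could be exercised from C. The disk name "nvme0n1" is
an assumption; any block device that advertises copy offload would do.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[32];
		ssize_t n;
		int fd;

		/* enable copy offload; the store fails if copy_max_hw_bytes is 0 */
		fd = open("/sys/block/nvme0n1/queue/copy_offload", O_WRONLY);
		if (fd < 0)
			return 1;
		if (write(fd, "1", 1) != 1)
			perror("copy_offload");
		close(fd);

		/* read back the effective per-payload copy limit in bytes */
		fd = open("/sys/block/nvme0n1/queue/copy_max_bytes", O_RDONLY);
		if (fd < 0)
			return 1;
		memset(buf, 0, sizeof(buf));
		n = read(fd, buf, sizeof(buf) - 1);
		if (n > 0)
			printf("copy_max_bytes: %s", buf);
		close(fd);
		return 0;
	}
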
diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index e8797cd09aff..65e64b5a0105 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -155,6 +155,89 @@ Description:
last zone of the device which may be smaller.
+What: /sys/block/<disk>/queue/copy_offload
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RW] When read, this file shows whether offloading copy to
+ device is enabled (1) or disabled (0). Writing '0' to this
+ file will disable offloading copies for this device.
+ Writing '1' to this file will enable the feature.
+
+
+What: /sys/block/<disk>/queue/copy_max_bytes
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RW] While 'copy_max_hw_bytes' is the hardware limit for the
+ device, 'copy_max_bytes' setting is the software limit.
+ Setting this value lower will make Linux issue smaller size
+ copies.
+
+
+What: /sys/block/<disk>/queue/copy_max_hw_bytes
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RO] Devices that support offloading copy functionality may have
+ internal limits on the number of bytes that can be offloaded
+ in a single operation. The `copy_max_hw_bytes`
+ parameter is set by the device driver to the maximum number of
+ bytes that can be copied in a single operation. Copy
+ requests issued to the device must not exceed this limit.
+ A value of 0 means that the device does not
+ support copy offload.
+
+
+What: /sys/block/<disk>/queue/copy_max_nr_ranges
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RW] While 'copy_max_nr_ranges_hw' is the hardware limit for the
+ device, 'copy_max_nr_ranges' setting is the software limit.
+
+
+What: /sys/block/<disk>/queue/copy_max_nr_ranges_hw
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RO] Devices that support offloading copy functionality may have
+ internal limits on the number of ranges that can be offloaded
+ in a single copy operation.
+ A range is a tuple of source, destination and length of data
+ to be copied. The `copy_max_nr_ranges_hw` parameter is set by
+ the device driver to the maximum number of ranges that can be
+ copied in a single operation. Copy requests issued to the device
+ must not exceed this limit. A value of 0 means that the device
+ does not support copy offload.
+
+
+What: /sys/block/<disk>/queue/copy_max_range_bytes
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RW] While 'copy_max_range_hw_bytes' is the hardware limit for
+ the device, 'copy_max_range_bytes' setting is the software
+ limit.
+
+
+What: /sys/block/<disk>/queue/copy_max_range_hw_bytes
+Date: April 2022
+Contact: linux-block@...r.kernel.org
+Description:
+ [RO] Devices that support offloading copy functionality may have
+ internal limits on the size of data that can be copied in a
+ single range within a single copy operation.
+ A range is a tuple of source, destination and length of data to be
+ copied. The `copy_max_range_hw_bytes` parameter is set by the
+ device driver to the maximum length in bytes of a range
+ that can be copied in a single operation.
+ Copy requests issued to the device must not exceed this limit.
+ The sum of the sizes of all ranges in a single operation should not
+ exceed 'copy_max_hw_bytes'. A value of 0 means that the device
+ does not support copy offload.
+
+
What: /sys/block/<disk>/queue/crypto/
Date: February 2022
Contact: linux-block@...r.kernel.org
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 6ccceb421ed2..70167aee3bf7 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -57,6 +57,12 @@ void blk_set_default_limits(struct queue_limits *lim)
lim->misaligned = 0;
lim->zoned = BLK_ZONED_NONE;
lim->zone_write_granularity = 0;
+ lim->max_hw_copy_sectors = 0;
+ lim->max_copy_sectors = 0;
+ lim->max_hw_copy_nr_ranges = 0;
+ lim->max_copy_nr_ranges = 0;
+ lim->max_hw_copy_range_sectors = 0;
+ lim->max_copy_range_sectors = 0;
}
EXPORT_SYMBOL(blk_set_default_limits);
@@ -81,6 +87,12 @@ void blk_set_stacking_limits(struct queue_limits *lim)
lim->max_dev_sectors = UINT_MAX;
lim->max_write_zeroes_sectors = UINT_MAX;
lim->max_zone_append_sectors = UINT_MAX;
+ lim->max_hw_copy_sectors = ULONG_MAX;
+ lim->max_copy_sectors = ULONG_MAX;
+ lim->max_hw_copy_range_sectors = UINT_MAX;
+ lim->max_copy_range_sectors = UINT_MAX;
+ lim->max_hw_copy_nr_ranges = USHRT_MAX;
+ lim->max_copy_nr_ranges = USHRT_MAX;
}
EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -177,6 +189,45 @@ void blk_queue_max_discard_sectors(struct request_queue *q,
}
EXPORT_SYMBOL(blk_queue_max_discard_sectors);
+/**
+ * blk_queue_max_copy_sectors - set max sectors for a single copy payload
+ * @q: the request queue for the device
+ * @max_copy_sectors: maximum number of sectors to copy
+ **/
+void blk_queue_max_copy_sectors(struct request_queue *q,
+ unsigned int max_copy_sectors)
+{
+ q->limits.max_hw_copy_sectors = max_copy_sectors;
+ q->limits.max_copy_sectors = max_copy_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_sectors);
+
+/**
+ * blk_queue_max_copy_range_sectors - set max sectors for a single range, in a copy payload
+ * @q: the request queue for the device
+ * @max_copy_range_sectors: maximum number of sectors to copy in a single range
+ **/
+void blk_queue_max_copy_range_sectors(struct request_queue *q,
+ unsigned int max_copy_range_sectors)
+{
+ q->limits.max_hw_copy_range_sectors = max_copy_range_sectors;
+ q->limits.max_copy_range_sectors = max_copy_range_sectors;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_range_sectors);
+
+/**
+ * blk_queue_max_copy_nr_ranges - set max number of ranges, in a copy payload
+ * @q: the request queue for the device
+ * @max_copy_nr_ranges: maximum number of ranges
+ **/
+void blk_queue_max_copy_nr_ranges(struct request_queue *q,
+ unsigned int max_copy_nr_ranges)
+{
+ q->limits.max_hw_copy_nr_ranges = max_copy_nr_ranges;
+ q->limits.max_copy_nr_ranges = max_copy_nr_ranges;
+}
+EXPORT_SYMBOL_GPL(blk_queue_max_copy_nr_ranges);
+
/**
* blk_queue_max_secure_erase_sectors - set max sectors for a secure erase
* @q: the request queue for the device
@@ -572,6 +623,14 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
+ t->max_copy_sectors = min(t->max_copy_sectors, b->max_copy_sectors);
+ t->max_hw_copy_sectors = min(t->max_hw_copy_sectors, b->max_hw_copy_sectors);
+ t->max_copy_range_sectors = min(t->max_copy_range_sectors, b->max_copy_range_sectors);
+ t->max_hw_copy_range_sectors = min(t->max_hw_copy_range_sectors,
+ b->max_hw_copy_range_sectors);
+ t->max_copy_nr_ranges = min(t->max_copy_nr_ranges, b->max_copy_nr_ranges);
+ t->max_hw_copy_nr_ranges = min(t->max_hw_copy_nr_ranges, b->max_hw_copy_nr_ranges);
+
t->misaligned |= b->misaligned;
alignment = queue_limit_alignment_offset(b, start);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 88bd41d4cb59..bae987c10f7f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -212,6 +212,129 @@ static ssize_t queue_discard_zeroes_data_show(struct request_queue *q, char *pag
return queue_var_show(0, page);
}
+static ssize_t queue_copy_offload_show(struct request_queue *q, char *page)
+{
+ return queue_var_show(blk_queue_copy(q), page);
+}
+
+static ssize_t queue_copy_offload_store(struct request_queue *q,
+ const char *page, size_t count)
+{
+ unsigned long copy_offload;
+ ssize_t ret = queue_var_store(&copy_offload, page, count);
+
+ if (ret < 0)
+ return ret;
+
+ if (copy_offload && !q->limits.max_hw_copy_sectors)
+ return -EINVAL;
+
+ if (copy_offload)
+ blk_queue_flag_set(QUEUE_FLAG_COPY, q);
+ else
+ blk_queue_flag_clear(QUEUE_FLAG_COPY, q);
+
+ return ret;
+}
+
+static ssize_t queue_copy_max_hw_show(struct request_queue *q, char *page)
+{
+ return sprintf(page, "%llu\n",
+ (unsigned long long)q->limits.max_hw_copy_sectors << 9);
+}
+
+static ssize_t queue_copy_max_show(struct request_queue *q, char *page)
+{
+ return sprintf(page, "%llu\n",
+ (unsigned long long)q->limits.max_copy_sectors << 9);
+}
+
+static ssize_t queue_copy_max_store(struct request_queue *q,
+ const char *page, size_t count)
+{
+ unsigned long max_copy;
+ ssize_t ret = queue_var_store(&max_copy, page, count);
+
+ if (ret < 0)
+ return ret;
+
+ if (max_copy & (queue_logical_block_size(q) - 1))
+ return -EINVAL;
+
+ max_copy >>= 9;
+ if (max_copy > q->limits.max_hw_copy_sectors)
+ max_copy = q->limits.max_hw_copy_sectors;
+
+ q->limits.max_copy_sectors = max_copy;
+ return ret;
+}
+
+static ssize_t queue_copy_range_max_hw_show(struct request_queue *q, char *page)
+{
+ return sprintf(page, "%llu\n",
+ (unsigned long long)q->limits.max_hw_copy_range_sectors << 9);
+}
+
+static ssize_t queue_copy_range_max_show(struct request_queue *q,
+ char *page)
+{
+ return sprintf(page, "%llu\n",
+ (unsigned long long)q->limits.max_copy_range_sectors << 9);
+}
+
+static ssize_t queue_copy_range_max_store(struct request_queue *q,
+ const char *page, size_t count)
+{
+ unsigned long max_copy;
+ ssize_t ret = queue_var_store(&max_copy, page, count);
+
+ if (ret < 0)
+ return ret;
+
+ if (max_copy & (queue_logical_block_size(q) - 1))
+ return -EINVAL;
+
+ max_copy >>= 9;
+ if (max_copy > UINT_MAX)
+ return -EINVAL;
+
+ if (max_copy > q->limits.max_hw_copy_range_sectors)
+ max_copy = q->limits.max_hw_copy_range_sectors;
+
+ q->limits.max_copy_range_sectors = max_copy;
+ return ret;
+}
+
+static ssize_t queue_copy_nr_ranges_max_hw_show(struct request_queue *q, char *page)
+{
+ return queue_var_show(q->limits.max_hw_copy_nr_ranges, page);
+}
+
+static ssize_t queue_copy_nr_ranges_max_show(struct request_queue *q,
+ char *page)
+{
+ return queue_var_show(q->limits.max_copy_nr_ranges, page);
+}
+
+static ssize_t queue_copy_nr_ranges_max_store(struct request_queue *q,
+ const char *page, size_t count)
+{
+ unsigned long max_nr;
+ ssize_t ret = queue_var_store(&max_nr, page, count);
+
+ if (ret < 0)
+ return ret;
+
+ if (max_nr > USHRT_MAX)
+ return -EINVAL;
+
+ if (max_nr > q->limits.max_hw_copy_nr_ranges)
+ max_nr = q->limits.max_hw_copy_nr_ranges;
+
+ q->limits.max_copy_nr_ranges = max_nr;
+ return ret;
+}
+
static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
{
return queue_var_show(0, page);
@@ -596,6 +719,14 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
QUEUE_RO_ENTRY(queue_max_open_zones, "max_open_zones");
QUEUE_RO_ENTRY(queue_max_active_zones, "max_active_zones");
+QUEUE_RW_ENTRY(queue_copy_offload, "copy_offload");
+QUEUE_RO_ENTRY(queue_copy_max_hw, "copy_max_hw_bytes");
+QUEUE_RW_ENTRY(queue_copy_max, "copy_max_bytes");
+QUEUE_RO_ENTRY(queue_copy_range_max_hw, "copy_max_range_hw_bytes");
+QUEUE_RW_ENTRY(queue_copy_range_max, "copy_max_range_bytes");
+QUEUE_RO_ENTRY(queue_copy_nr_ranges_max_hw, "copy_max_nr_ranges_hw");
+QUEUE_RW_ENTRY(queue_copy_nr_ranges_max, "copy_max_nr_ranges");
+
QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
QUEUE_RW_ENTRY(queue_poll, "io_poll");
@@ -642,6 +773,13 @@ static struct attribute *queue_attrs[] = {
&queue_discard_max_entry.attr,
&queue_discard_max_hw_entry.attr,
&queue_discard_zeroes_data_entry.attr,
+ &queue_copy_offload_entry.attr,
+ &queue_copy_max_hw_entry.attr,
+ &queue_copy_max_entry.attr,
+ &queue_copy_range_max_hw_entry.attr,
+ &queue_copy_range_max_entry.attr,
+ &queue_copy_nr_ranges_max_hw_entry.attr,
+ &queue_copy_nr_ranges_max_entry.attr,
&queue_write_same_max_entry.attr,
&queue_write_zeroes_max_entry.attr,
&queue_zone_append_max_entry.attr,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1b24c1fb3bb1..3596fd37fae7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -270,6 +270,13 @@ struct queue_limits {
unsigned int discard_alignment;
unsigned int zone_write_granularity;
+ unsigned long max_hw_copy_sectors;
+ unsigned long max_copy_sectors;
+ unsigned int max_hw_copy_range_sectors;
+ unsigned int max_copy_range_sectors;
+ unsigned short max_hw_copy_nr_ranges;
+ unsigned short max_copy_nr_ranges;
+
unsigned short max_segments;
unsigned short max_integrity_segments;
unsigned short max_discard_segments;
@@ -574,6 +581,7 @@ struct request_queue {
#define QUEUE_FLAG_RQ_ALLOC_TIME 27 /* record rq->alloc_time_ns */
#define QUEUE_FLAG_HCTX_ACTIVE 28 /* at least one blk-mq hctx is active */
#define QUEUE_FLAG_NOWAIT 29 /* device supports NOWAIT */
+#define QUEUE_FLAG_COPY 30 /* supports copy offload */
#define QUEUE_FLAG_MQ_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
(1 << QUEUE_FLAG_SAME_COMP) | \
@@ -596,6 +604,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
test_bit(QUEUE_FLAG_STABLE_WRITES, &(q)->queue_flags)
#define blk_queue_io_stat(q) test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
#define blk_queue_add_random(q) test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
+#define blk_queue_copy(q) test_bit(QUEUE_FLAG_COPY, &(q)->queue_flags)
#define blk_queue_zone_resetall(q) \
test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
#define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
@@ -960,6 +969,10 @@ extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
extern void blk_queue_max_segments(struct request_queue *, unsigned short);
extern void blk_queue_max_discard_segments(struct request_queue *,
unsigned short);
+extern void blk_queue_max_copy_sectors(struct request_queue *q, unsigned int max_copy_sectors);
+extern void blk_queue_max_copy_range_sectors(struct request_queue *q,
+ unsigned int max_copy_range_sectors);
+extern void blk_queue_max_copy_nr_ranges(struct request_queue *q, unsigned int max_copy_nr_ranges);
void blk_queue_max_secure_erase_sectors(struct request_queue *q,
unsigned int max_sectors);
extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
--
2.35.1.500.gb896f729e2