Message-ID: <13462e59-82f3-d6fc-a84e-2cf3083e0cc7@acm.org>
Date: Wed, 8 Dec 2021 10:33:19 -0800
From: Bart Van Assche <bvanassche@....org>
To: Eric Biggers <ebiggers@...nel.org>, linux-block@...r.kernel.org,
Jens Axboe <axboe@...nel.dk>
Cc: linux-doc@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org, Hannes Reinecke <hare@...e.de>
Subject: Re: [PATCH v2 6/8] docs: sysfs-block: document virt_boundary_mask
On 12/7/21 4:56 PM, Eric Biggers wrote:
> +What: /sys/block/<disk>/queue/virt_boundary_mask
> +Date: April 2021
> +Contact: linux-block@...r.kernel.org
> +Description:
> + [RO] This file shows the I/O segment alignment mask for the
> + block device. I/O requests to this device will be split between
> + segments wherever either the end of the previous segment or the
> + beginning of the current segment is not aligned to
> + virt_boundary_mask + 1 bytes.
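
For concreteness, the splitting rule described above boils down to a check along
these lines (a simplified sketch only, not the actual block layer helper; the
function and parameter names are made up):

	static bool segments_need_split(unsigned long virt_boundary_mask,
					unsigned long prev_seg_end,
					unsigned long cur_seg_start)
	{
		/*
		 * A split is needed if either address has bits set inside the
		 * mask, i.e. is not a multiple of (virt_boundary_mask + 1).
		 */
		return ((prev_seg_end | cur_seg_start) & virt_boundary_mask) != 0;
	}
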
"I/O segment alignment" looks confusing to me. My understanding is that this
attribute refers to the alignment of the internal data buffer boundaries and not
to the alignment of the offset on the storage medium. The name "virt_boundary"
refers to the property that if all internal boundaries are multiples of
(virt_boundary_mask + 1), then an MMU with page size (virt_boundary_mask + 1) can
map the entire data buffer onto a contiguous range of virtual addresses. E.g.
RDMA adapters have an MMU that can do this. Several drivers that set this
attribute, however, support storage controllers that do not have an internal MMU.
As an example, the NVMe core sets this mask since the NVMe specification allows
only the first PRP entry of a command to have a non-zero memory page offset. From
the NVMe
specification: "PRP entries contained within a PRP List shall have a memory page
offset of 0h. If a second PRP entry is present within a command, it shall have a
memory page offset of 0h. In both cases, the entries are memory page aligned."
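
Just for illustration, and from memory of the driver code (the exact call site
and constant name may differ), the NVMe core expresses this constraint to the
block layer roughly as follows:

	/*
	 * Sketch only: advertise the controller memory page size (CC.MPS)
	 * minus one as the virt boundary, so that every data segment after
	 * the first starts at memory page offset 0h and can therefore be
	 * described by a PRP List entry.
	 */
	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
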
Bart.