Message-Id: <20191107111558.DEC5FA4059@d06av23.portsmouth.uk.ibm.com>
Date: Thu, 7 Nov 2019 16:45:58 +0530
From: Ritesh Harjani <riteshh@...ux.ibm.com>
To: Jan Kara <jack@...e.cz>
Cc: tytso@....edu, linux-ext4@...r.kernel.org,
linux-fsdevel@...r.kernel.org, mbobrowski@...browski.org
Subject: Re: [RFC 0/5] Ext4: Add support for blocksize < pagesize for
dioread_nolock
On 11/6/19 10:53 PM, Jan Kara wrote:
> On Wed 16-10-19 13:07:06, Ritesh Harjani wrote:
>> This patch series adds support for blocksize < pagesize for the
>> dioread_nolock feature.
>>
>> Since, in the case of blocksize < pagesize, a page can have multiple
>> small buffers as unwritten extents, we need to maintain a vector of
>> these unwritten extents, which need conversion after the IO is
>> complete. Thus, we maintain a list of <offset, size> tuples
>> (io_end_vec) and traverse this list to do the unwritten-to-written
>> conversion.
>>
>> Appreciate any reviews/comments on these patches.
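
For reference, the bookkeeping added by the series boils down to a
per-io_end list of <offset, size> pairs that is walked at IO completion
time. A simplified sketch (the names below are illustrative, not the
exact definitions from the patches):

	#include <linux/list.h>
	#include <linux/types.h>

	struct io_end_vec {
		struct list_head list;	/* entry in the io_end's vector list */
		loff_t offset;		/* file offset of the unwritten range */
		ssize_t size;		/* length of the range to convert */
	};

	/* At IO completion: walk the list and convert each range. */
	struct io_end_vec *vec;

	list_for_each_entry(vec, &io_end->vec_list, list)
		convert_unwritten_range(inode, vec->offset, vec->size);

(convert_unwritten_range() stands in for the real unwritten-to-written
conversion helper.)
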
>
> I know Ted has merged the patches already, so this is just informational, but
> I've read the patches and they look fine to me. Thanks for the work!

Appreciate your help too, for the valuable feedback & pointers at
various places.

> I was just thinking that we could actually make the vector tracking more
> efficient, because the io_end always looks like:
>
> one-big-extent-to-fully-write + whatever it takes to fully write out the
> last page
>
> So your vectors could also be expressed as "extent to write" + bitmap of
> blocks written in the last page. And 64 bits are enough for the bitmap for
> anything ext4 supports, so we could easily save the allocation of ioend_vec
> etc. Just a suggestion.
Yes, sounds good to me. Although slab allocations are also fast,
I agree this should be more efficient and will also avoid the list
management and the list pointer traversal.
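
If I understand correctly, the io_end could then carry something
roughly like the below instead of a list (an illustrative sketch only;
the field and helper names are made up, not from actual ext4 code):

	#include <linux/types.h>

	struct io_end {
		/* ... existing fields ... */
		loff_t ext_offset;	/* start of the one big extent to convert */
		ssize_t ext_size;	/* its length */
		u64 last_page_map;	/* written blocks within the last page */
	};

	/*
	 * At IO completion: convert the big extent first, then the
	 * individually written blocks of the last page. With a minimum
	 * blocksize of 1k and at most 64k pages, there are at most 64
	 * blocks per page, so a u64 is always enough, as you noted.
	 */
	convert_unwritten_range(inode, io_end->ext_offset, io_end->ext_size);
	for (blk = 0; blk < blocks_per_page; blk++)
		if (io_end->last_page_map & (1ULL << blk))
			convert_unwritten_range(inode,
				last_page_off + ((loff_t)blk << inode->i_blkbits),
				i_blocksize(inode));

(convert_unwritten_range(), blk, blocks_per_page and last_page_off are
placeholders for the real conversion helper and per-page bookkeeping.)
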
Sure, will work on this optimization once I get some closure on some
ongoing open items.
Thanks & appreciate your feedback.
ritesh