Message-ID: <20170217041635.4hxnlqf7mh2wwtok@kernel.org>
Date: Thu, 16 Feb 2017 20:16:35 -0800
From: Shaohua Li <shli@...nel.org>
To: Ming Lei <tom.leiming@...il.com>
Cc: Jens Axboe <axboe@...com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"open list:SOFTWARE RAID (Multiple Disks) SUPPORT"
	<linux-raid@...r.kernel.org>,
	linux-block <linux-block@...r.kernel.org>,
	Christoph Hellwig <hch@...radead.org>,
	NeilBrown <neilb@...e.com>
Subject: Re: [PATCH 00/17] md: cleanup on direct access to bvec table

On Fri, Feb 17, 2017 at 09:25:27AM +0800, Ming Lei wrote:
> Hi Shaohua,
>
> On Fri, Feb 17, 2017 at 6:16 AM, Shaohua Li <shli@...nel.org> wrote:
> > On Thu, Feb 16, 2017 at 07:45:30PM +0800, Ming Lei wrote:
> >> In MD's resync I/O path, there are many direct accesses to the bio's
> >> bvec table. This patchset kills most of them, and the conversion
> >> is quite straightforward.
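
For illustration, a minimal sketch of the two styles in question (helper names
are hypothetical, not from the patchset): the first fills the bvec table by
hand, the second goes through bio_add_page() so the block layer owns the table.

#include <linux/bio.h>

/* Old style: poke the bvec table directly (hypothetical helper). */
static void fill_pages_direct(struct bio *bio, struct page **pages, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		bio->bi_io_vec[i].bv_page   = pages[i];
		bio->bi_io_vec[i].bv_len    = PAGE_SIZE;
		bio->bi_io_vec[i].bv_offset = 0;
	}
	bio->bi_vcnt = n;
	bio->bi_iter.bi_size = n * PAGE_SIZE;
}

/* Converted style: let the block layer maintain the table. */
static void fill_pages_helper(struct bio *bio, struct page **pages, int n)
{
	int i;

	for (i = 0; i < n; i++)
		bio_add_page(bio, pages[i], PAGE_SIZE, 0);
}
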
> >
> > I don't like this approach. MD uses a hacky way to manage the allocated pages;
> > this is the root of the problem. The patches add another hacky way to do the
>
> Yes, I agree, and bio_iov_iter_get_pages() actually uses this kind of hacky
> approach too.
>
> > management. I'd like to see explicit management of the pages, for example, add a
> > data structure in r1bio to manage the pages; then we can use existing APIs for
> > all the stuff we need.
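
A minimal sketch of what that could look like (struct and field names are made
up for illustration; RESYNC_PAGES is the existing 64KiB/PAGE_SIZE constant in
raid1.c):

struct resync_pages {
	struct page	*pages[RESYNC_PAGES];	/* 16 pages with 4KiB pages */
	int		idx;			/* next unconsumed page */
};

One of these per resync bio would hang off the r1_bio/r10_bio; the resync code
then calls bio_add_page(bio, rp->pages[rp->idx++], len, 0) instead of reaching
into bi_io_vec, and allocation/freeing of the pages stays in one place.
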
>
> Yeah, that is definitely clean, but we have to pay the following cost:
>
> - allocate at least N * (128 + 4) bytes per r1_bio/r10_bio (roughed out below)
> - N is pool_info.raid_disks for raid1, and conf->copies for raid10
>
> If we are happy to introduce that cost, I can take this approach in V1.
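
Roughing that cost out (assuming 4KiB pages, so RESYNC_PAGES = 64KiB / 4KiB = 16,
and reading the 128 + 4 as 16 page pointers at 8 bytes each plus a 4-byte index):

	per-bio structure:  16 * 8 + 4 = 132 bytes
	raid1, 4 disks:     4 * 132    = 528 bytes per r1_bio
	raid10, 2 copies:   2 * 132    = 264 bytes per r10_bio
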
It's not a big deal. The number of inflight bios shouldn't be big, so the r1_bio
count isn't big either. We don't waste much.
Thanks,
Shaohua