Message-ID: <CACVXFVPRuikOn=zJ5fboGMyQs3rMxaUK4f-Lk0s-+HpPy+ov8w@mail.gmail.com>
Date:   Fri, 17 Feb 2017 09:25:27 +0800
From:   Ming Lei <tom.leiming@...il.com>
To:     Shaohua Li <shli@...nel.org>
Cc:     Jens Axboe <axboe@...com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "open list:SOFTWARE RAID (Multiple Disks) SUPPORT" 
        <linux-raid@...r.kernel.org>,
        linux-block <linux-block@...r.kernel.org>,
        Christoph Hellwig <hch@...radead.org>,
        NeilBrown <neilb@...e.com>
Subject: Re: [PATCH 00/17] md: cleanup on direct access to bvec table

Hi Shaohua,

On Fri, Feb 17, 2017 at 6:16 AM, Shaohua Li <shli@...nel.org> wrote:
> On Thu, Feb 16, 2017 at 07:45:30PM +0800, Ming Lei wrote:
>> In MD's resync I/O path, there are lots of direct accesses to the bio's
>> bvec table. This patchset kills most of them, and the conversion
>> is quite straightforward.
>
> I don't like this approach. MD uses a hacky way to manage the allocated pages,
> and that is the root of the problem. The patches add another hacky way to do the

Yes, I agree, and bio_iov_iter_get_pages() actually uses this kind of hacky
approach too.
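
Just for illustration (a simplified sketch, not the exact md code), the
pattern being discussed is filling the bvec table by hand instead of
going through bio_add_page():

/*
 * Simplified sketch of the kind of direct bvec-table access in the
 * resync path; the real code differs in detail.
 */
static void attach_resync_pages(struct bio *bio, struct page **pages, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		/* poke the bvec table directly, bypassing bio_add_page() */
		bio->bi_io_vec[i].bv_page   = pages[i];
		bio->bi_io_vec[i].bv_len    = PAGE_SIZE;
		bio->bi_io_vec[i].bv_offset = 0;
	}
	bio->bi_vcnt = nr;
}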

> management. I'd like to see explicit management of the pages, for example, add a
> data structure to r1bio to manage the pages, then we can use the existing APIs for
> all the stuff we need.

Yeah, that is definitely clean, but we have to pay the following cost:

- allocate at least N * (128 + 4) bytes for each r1_bio/r10_bio
- where N is pool_info.raid_disks for raid1, and conf->copies for raid10

If we are happy to accept that cost, I can take this approach in V1.
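
For concreteness, something like the following (hypothetical naming, just
a sketch of one possible layout) is what I have in mind; assuming
RESYNC_PAGES is 16 and pointers are 8 bytes, that is where the
N * (128 + 4) bytes per r1_bio/r10_bio above come from:

/*
 * Hypothetical sketch only -- the name and layout are illustrative,
 * not a final design.
 */
struct resync_pages {
	int		idx;			/* 4 bytes: next page to use */
	struct page	*pages[RESYNC_PAGES];	/* 16 * 8 = 128 bytes */
};

/*
 * One instance would hang off each bio allocated for an r1_bio/r10_bio,
 * i.e. N of them, where N is pool_info.raid_disks for raid1 and
 * conf->copies for raid10.
 */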


Thanks,
Ming Lei
