Message-ID: <CH2PR04MB65228D54F66068DA125CCE47E7A90@CH2PR04MB6522.namprd04.prod.outlook.com>
Date: Wed, 13 Jan 2021 09:28:02 +0000
From: Damien Le Moal <Damien.LeMoal@....com>
To: Ming Lei <tom.leiming@...il.com>,
Changheun Lee <nanich.lee@...sung.com>
CC: Johannes Thumshirn <Johannes.Thumshirn@....com>,
Jens Axboe <axboe@...nel.dk>,
"jisoo2146.oh@...sung.com" <jisoo2146.oh@...sung.com>,
"junho89.kim@...sung.com" <junho89.kim@...sung.com>,
linux-block <linux-block@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"mj0123.lee@...sung.com" <mj0123.lee@...sung.com>,
"seunghwan.hyun@...sung.com" <seunghwan.hyun@...sung.com>,
"sookwan7.kim@...sung.com" <sookwan7.kim@...sung.com>,
Tejun Heo <tj@...nel.org>,
"yt0928.kim@...sung.com" <yt0928.kim@...sung.com>,
"woosung2.lee@...sung.com" <woosung2.lee@...sung.com>
Subject: Re: [PATCH] bio: limit bio max size.
On 2021/01/13 18:19, Ming Lei wrote:
> On Wed, Jan 13, 2021 at 12:09 PM Changheun Lee <nanich.lee@...sung.com> wrote:
>>
>>> On 2021/01/12 21:14, Changheun Lee wrote:
>>>>> On 2021/01/12 17:52, Changheun Lee wrote:
>>>>>> From: "Changheun Lee" <nanich.lee@...sung.com>
>>>>>>
>>>>>> bio size can grow up to 4GB when multi-page bvec is enabled,
>>>>>> but this sometimes leads to inefficient behavior.
>>>>>> In the case of large chunk direct I/O - a 64MB chunk read in user space -
>>>>>> all pages for the 64MB are merged into one bio structure if the memory
>>>>>> addresses are physically contiguous. This delays the submit until merging
>>>>>> completes. bio max size should be limited to a proper size.
>>>>>
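For readers skimming the thread: as I understand the patch, the cap would be
enforced in the bio_full() check used when adding pages to a bio (for example
in bio_add_page()), so that page merging stops and the bio gets submitted once
the limit is reached. A rough sketch of that idea, not the exact patch, with
the 1MB BIO_MAX_SIZE value purely illustrative:

#define BIO_MAX_SIZE    (1024 * 1024)   /* illustrative cap, not a mainline define */

static inline bool bio_full(struct bio *bio, unsigned len)
{
        if (bio->bi_vcnt >= bio->bi_max_vecs)
                return true;

        /* mainline checks against UINT_MAX; the patch lowers the limit */
        if (bio->bi_iter.bi_size > BIO_MAX_SIZE - len)
                return true;

        return false;
}

With such a cap, the page-adding path sees the bio as full much earlier and the
direct IO code submits it instead of continuing to merge pages.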
>>>>> But merging physically contiguous pages into the same bvec + later automatic bio
>>>>> split on submit should give you better throughput for large IOs compared to
>>>>> having to issue a bio chain of smaller BIOs that are arbitrarily sized and will
>>>>> likely need splitting anyway (because of DMA boundaries etc).
>>>>>
>>>>> Do you have a specific case where you see higher performance with this patch
>>>>> applied? On Intel, BIO_MAX_SIZE would be 1MB... That is arbitrary and too small
>>>>> considering that a lot of hardware can execute larger IOs than that.
>>>>>
>>>>
>>>> When I tested a 32MB chunk read with O_DIRECT on Android, all pages of the
>>>> 32MB were merged into one bio structure.
>>>> The elapsed time until merging completed was about 2ms,
>>>> which means the first bio submit happened only after 2ms.
>>>> If the bio size is limited to 1MB with this patch, the first bio submit
>>>> happens after about 100us, thanks to the bio_full check.
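For context, the workload here is a single large O_DIRECT read into an aligned
buffer, roughly like the user-space sketch below; the device path and sizes are
illustrative, not the actual Android test:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const size_t len = 32UL << 20;          /* one 32MB chunk */
        void *buf;
        int fd;

        /* O_DIRECT needs the buffer aligned to the logical block size. */
        if (posix_memalign(&buf, 4096, len))
                return 1;

        fd = open("/dev/block/sda", O_RDONLY | O_DIRECT); /* hypothetical device */
        if (fd < 0)
                return 1;

        /* All pages of this single read get packed into bio(s) before submit. */
        if (read(fd, buf, len) < 0)
                return 1;

        close(fd);
        free(buf);
        return 0;
}

As I read Changheun's numbers, the 2ms vs 100us is the time this single read()
spends building the first bio before anything is sent to the device.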
>>>
>>> submit_bio() will split the large BIO case into multiple requests, while the
>>> small BIO case will likely result in only one or two requests. That likely
>>> explains the time difference here. However, for the large case, the 2ms will
>>> issue ALL requests needed for processing the entire 32MB user IO, while the
>>> 1MB bio case will need 32 different submit_bio() calls. So what is the actual
>>> total latency difference for the entire 32MB user IO? That is, I think, what
>>> needs to be compared here.
>>>
>>> Also, what is your device max_sectors_kb and max queue depth ?
>>>
>>
>> Without this patch, total latency for the 32MB is about 19ms, including merge time.
>> With this patch, total latency is about 17ms, also including merge time.
>
> 19ms looks too big just for preparing one 32MB sized bio, which isn't supposed
> to take so long. Can you investigate where the 19ms is taken just for preparing
> one 32MB sized bio?
Changheun mentioned that the device-side IO latency is 16.7ms out of the 19ms
total. So the BIO handling (submission + completion) takes about 2.3ms, and
Changheun points above to 2ms for the submission part.
>
> It might be iov_iter_get_pages() handling page faults. If yes, one suggestion
> is to enable THP (Transparent HugePage Support) in your application.
But if that was due to page faults, the same large-ish time would be taken for
preparing the size-limited BIOs too, no? No matter how the BIOs are diced,
all 32MB worth of pages of the user IO are referenced...
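That said, if Changheun wants to try the THP suggestion, the usual way to
request huge pages for a specific buffer is an madvise(MADV_HUGEPAGE) hint on a
huge-page aligned allocation, something like this sketch (illustrative only,
Linux specific, and only a hint; the kernel may still back the buffer with 4KB
pages):

#include <stdlib.h>
#include <sys/mman.h>

static void *alloc_thp_buffer(size_t len)
{
        void *buf;

        /* Align to the 2MB huge page size so THP can actually be used. */
        if (posix_memalign(&buf, 2UL << 20, len))
                return NULL;

        /* Advisory only: ask for transparent huge pages on this range. */
        madvise(buf, len, MADV_HUGEPAGE);
        return buf;
}

That would reduce the number of page faults taken while the bios are being
built, if page faults are indeed where the time goes.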
>
>
--
Damien Le Moal
Western Digital Research