Message-ID: <1e551e7b-be35-94d2-05de-9d49dc538d42@huawei.com>
Date: Fri, 15 Apr 2022 09:39:19 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/vmscan: fix comment for current_may_throttle
On 2022/4/15 4:43, Andrew Morton wrote:
> On Thu, 14 Apr 2022 20:02:02 +0800 Miaohe Lin <linmiaohe@...wei.com> wrote:
>
>> Since commit 6d6435811c19 ("remove bdi_congested() and wb_congested() and
>> related functions"), there is no congested backing device check anymore.
>> Correct the comment accordingly.
>>
>> ...
>>
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -2334,8 +2334,7 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
>> /*
>> * If a kernel thread (such as nfsd for loop-back mounts) services
>> * a backing device by writing to the page cache it sets PF_LOCAL_THROTTLE.
>> - * In that case we should only throttle if the backing device it is
>> - * writing to is congested. In other cases it is safe to throttle.
>> + * In that case we should not throttle it otherwise it is safe to do so.
>> */
>> static int current_may_throttle(void)
>> {
>
> That's a bit awkward to read. I tweaked it, and reflowed the comment
> to 80 cols.
>
> --- a/mm/vmscan.c~mm-vmscan-fix-comment-for-current_may_throttle-fix
> +++ a/mm/vmscan.c
> @@ -2332,9 +2332,9 @@ static unsigned int move_pages_to_lru(st
> }
>
> /*
> - * If a kernel thread (such as nfsd for loop-back mounts) services
> - * a backing device by writing to the page cache it sets PF_LOCAL_THROTTLE.
> - * In that case we should not throttle it otherwise it is safe to do so.
> + * If a kernel thread (such as nfsd for loop-back mounts) services a backing
> + * device by writing to the page cache it sets PF_LOCAL_THROTTLE. In this case
> + * we should not throttle. Otherwise it is safe to do so.
> */
> static int current_may_throttle(void)
> {
> _
Looks better. Many thanks for doing this! :)
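
For reference, after commit 6d6435811c19 removed the congestion check, the
function body should reduce to just the PF_LOCAL_THROTTLE test. A sketch from
my reading of mainline mm/vmscan.c (not a verbatim quote of the tree):

	static int current_may_throttle(void)
	{
		/* Do not throttle tasks that set PF_LOCAL_THROTTLE. */
		return !(current->flags & PF_LOCAL_THROTTLE);
	}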