Message-ID: <ed383686-e71a-453c-b751-182531b46a76@arm.com>
Date: Thu, 4 Jul 2024 16:54:52 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Bang Li <libang.linux@...il.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>, akpm@...ux-foundation.org,
 hughd@...gle.com
Cc: willy@...radead.org, david@...hat.com, wangkefeng.wang@...wei.com,
 ying.huang@...el.com, 21cnbao@...il.com, shy828301@...il.com,
 ziy@...dia.com, ioworker0@...il.com, da.gomez@...sung.com,
 p.raghav@...sung.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 4/6] mm: shmem: add mTHP support for anonymous shmem

On 04/07/2024 16:05, Bang Li wrote:
> Hey Ryan,
> 
> On 2024/7/4 21:58, Ryan Roberts wrote:
>>>> Then for tmpfs, which doesn't support non-PMD-sizes yet, we just always use the
>>>> PMD-size control for decisions.
>>>>
>>>> I'm also really struggling with the concept of shmem_is_huge() existing
>>>> alongside shmem_allowable_huge_orders(). Surely this all needs to be
>>>> refactored into shmem_allowable_huge_orders()?
>>> Understood. But for now they serve different purposes: shmem_is_huge() will be
>>> used to check the huge orders at the top level, for *tmpfs* and anon shmem;
>>> whereas shmem_allowable_huge_orders() will only be used to check the per-size
>>> huge orders for anon shmem (tmpfs is excluded for now). However, as I plan to
>>> add mTHP support for tmpfs, I think we can perform some cleanups then.
>>>
>>>>> +    /* Allow mTHP that will be fully within i_size. */
>>>>> +    order = highest_order(within_size_orders);
>>>>> +    while (within_size_orders) {
>>>>> +        index = round_up(index + 1, order);
>>>>> +        i_size = round_up(i_size_read(inode), PAGE_SIZE);
>>>>> +        if (i_size >> PAGE_SHIFT >= index) {
>>>>> +            mask |= within_size_orders;
>>>>> +            break;
>>>>> +        }
>>>>> +
>>>>> +        order = next_order(&within_size_orders, order);
>>>>> +    }
>>>>> +
>>>>> +    if (vm_flags & VM_HUGEPAGE)
>>>>> +        mask |= READ_ONCE(huge_shmem_orders_madvise);
>>>>> +
>>>>> +    if (global_huge)
>>>> Perhaps I've misunderstood global_huge, but I think it's just the return value
>>>> from shmem_is_huge()? But you're also using shmem_huge directly in this
>>> Yes.
>>>
>>>> function. I find it all rather confusing.
>>> I think I have explained above why we need this logic. Since mTHP support for
>>> shmem has only just started (tmpfs is still in progress), I will make it
>>> clearer in the following patches.
>> OK, as long as you have a plan for the cleanup, that's good enough for me.
> 
> Can I still go ahead with the following patch [1]? When the other types of shmem
> mTHP are supported, we will perform the cleanups uniformly.

I guess that makes sense.
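
For anyone skimming the thread, here is a rough, self-contained userspace sketch
of the "within_size" selection quoted above: walk the candidate orders from
largest to smallest and admit the first one (plus everything smaller) whose folio
would still end below i_size. The helper names, the PAGE_SHIFT value and the use
of 1 << order as the page-count alignment are assumptions made for this sketch,
not the kernel's actual code.

/*
 * Standalone illustration (not kernel code): pick the huge-page orders
 * whose folios would lie entirely within the file size. Helper names
 * and constants are assumptions for this sketch.
 */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Round v up to the next multiple of a (a must be a power of two). */
static unsigned long round_up_pow2(unsigned long v, unsigned long a)
{
        return (v + a - 1) & ~(a - 1);
}

/*
 * orders: bitmask of candidate orders (bit n set == order n enabled).
 * index:  page index of the fault; i_size: file size in bytes.
 * Starting from the largest candidate, the first order whose aligned end
 * index still fits below i_size admits itself and every smaller order.
 */
static unsigned long within_size_mask(unsigned long orders, unsigned long index,
                                      unsigned long long i_size)
{
        unsigned long pages = round_up_pow2(i_size, PAGE_SIZE) >> PAGE_SHIFT;

        while (orders) {
                int order = 63 - __builtin_clzl(orders);   /* highest set bit */
                unsigned long end = round_up_pow2(index + 1, 1UL << order);

                if (end <= pages)
                        return orders;             /* this order and all smaller fit */
                orders &= ~(1UL << order);         /* drop it, try the next smaller */
        }
        return 0;
}

int main(void)
{
        /* 1 MiB file, fault at index 0, candidate orders 9/4/2 (2M/64K/16K folios). */
        unsigned long mask = within_size_mask((1UL << 9) | (1UL << 4) | (1UL << 2),
                                              0, 1024 * 1024);
        printf("allowed order mask: 0x%lx\n", mask);   /* 0x14 -> orders 4 and 2 */
        return 0;
}

Built with a plain "gcc within_size.c", the example file is 1 MiB, so the 2 MiB
(order-9) folio is rejected while the two smaller orders remain allowed.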

> 
> [1] https://lore.kernel.org/linux-mm/20240702023401.41553-1-libang.li@antgroup.com/
> 
> Thanks,
> Bang

