Message-ID: <f2a213eb-e69b-4572-b837-0c384bbb5960@igalia.com>
Date: Thu, 31 Oct 2024 08:04:58 -0300
From: Maíra Canal <mcanal@...lia.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jonathan Corbet <corbet@....net>, Hugh Dickins <hughd@...gle.com>,
Barry Song <baohua@...nel.org>, David Hildenbrand <david@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, Lance Yang
<ioworker0@...il.com>, linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-dev@...lia.com
Subject: Re: [PATCH v3 0/4] mm: add more kernel parameters to control mTHP

Hi Andrew,

On 30/10/24 19:50, Andrew Morton wrote:
> On Wed, 30 Oct 2024 09:58:54 -0300 Maíra Canal <mcanal@...lia.com> wrote:
>
>> The second and third patches focus on controlling THP support for shmem
>> via the kernel command line. The second patch introduces a parameter to
>> control the global default huge page allocation policy for the internal
>> shmem mount.
>
> The changelogs for patches 2 and 3 both say
>
> : By configuring ..., applications that use shmem, such as the DRM GEM objects,
> : can take advantage of mTHP before it's been configured through sysfs.
>
> There isn't a lot of info here - please explain this timing issue in
> more detail.
>
> Because the question which leaps to mind is: shouldn't the
> "applications that use shmem" be changed to "configure mTHP through
> sysfs" *before* "using shmem"? Seems pretty basic.

Sorry about that, I'll try to improve the commit messages and add more
details.

As mentioned in the example I gave ("DRM GEM objects"), my main use case
is GEM objects backed by shmem. I'd like to use huge pages on the GPU,
and I can only do that if I have contiguous memory backing my objects.
I can't think of a way to change the sysfs settings from a DRM driver.
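To make the use case concrete, the idea is that such a system would
boot with something along these lines (the parameter names follow
patches 2 and 3 of this series; please treat the exact syntax as
illustrative rather than final):

	transparent_hugepage_shmem=within_size thp_shmem=2M:within_size

With that on the command line, the shmem-backed GEM objects created by
the driver can already be allocated as (m)THP, without anything having
to write to /sys/kernel/mm/transparent_hugepage/ first.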

Best Regards,
- Maíra
>
>
> Also, please consider my question to be a critique of the changelogs.
> If the changelogs were complete, I wouldn't need to ask any questions!