Message-ID: <20201112112242.GA12240@dhcp22.suse.cz>
Date: Thu, 12 Nov 2020 12:22:42 +0100
From: Michal Hocko <mhocko@...e.com>
To: Rik van Riel <riel@...riel.com>
Cc: hughd@...gle.com, xuyu@...ux.alibaba.com,
akpm@...ux-foundation.org, mgorman@...e.de, aarcange@...hat.com,
willy@...radead.org, linux-kernel@...r.kernel.org,
kernel-team@...com, linux-mm@...ck.org, vbabka@...e.cz,
Andrey Grodzovsky <andrey.grodzovsky@....com>,
Chris Wilson <chris@...is-wilson.co.uk>
Subject: Re: [PATCH 2/2] mm,thp,shm: limit gfp mask to no more than specified
[Cc Chris for i915 and Andrey]
On Thu 05-11-20 14:15:08, Rik van Riel wrote:
> Matthew Wilcox pointed out that the i915 driver opportunistically
> allocates tmpfs memory, but will happily reclaim some of its
> pool if no memory is available.
It would be good to explicitly mention the requested gfp flags for those
allocations. i915 uses __GFP_NORETRY | __GFP_NOWARN, or GFP_KERNEL. Is
__shmem_rw really meant to not allocate from highmem/movable zones? Can
it ever be backed by THPs?
ttm might want __GFP_RETRY_MAYFAIL while shmem_read_mapping_page uses
the mapping gfp mask, which can be NOFS or something else. This is quite
messy already and I suspect those callers are more targeted at regular
order-0 requests. E.g. have a look at cb5f1a52caf23.
I am worried that these games with gfp flags will lead to unmaintainable
code later on. There is a clear disconnect between the core THP
allocation strategy and what drivers are asking for, and those
requirements might be really conflicting. Not to mention that flags
might be different between regular and THP pages.
> Make sure the gfp mask used to opportunistically allocate a THP
> is always at least as restrictive as the original gfp mask.
>
> Signed-off-by: Rik van Riel <riel@...riel.com>
> Suggested-by: Matthew Wilcox <willy@...radead.org>
> ---
> mm/shmem.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6c3cb192a88d..ee3cea10c2a4 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1531,6 +1531,26 @@ static struct page *shmem_swapin(swp_entry_t swap, gfp_t gfp,
> return page;
> }
>
> +/*
> + * Make sure huge_gfp is always more limited than limit_gfp.
> + * Some of the flags set permissions, while others set limitations.
> + */
> +static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
> +{
> + gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
> + gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY;
> + gfp_t result = huge_gfp & ~allowflags;
> +
> + /*
> + * Minimize the result gfp by taking the union with the deny flags,
> + * and the intersection of the allow flags.
> + */
> + result |= (limit_gfp & denyflags);
> + result |= (huge_gfp & limit_gfp) & allowflags;
> +
> + return result;
> +}
> +
> static struct page *shmem_alloc_hugepage(gfp_t gfp,
> struct shmem_inode_info *info, pgoff_t index)
> {
> @@ -1889,6 +1909,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>
> alloc_huge:
> huge_gfp = vma_thp_gfp_mask(vma);
> + huge_gfp = limit_gfp_mask(huge_gfp, gfp);
> page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
> if (IS_ERR(page)) {
> alloc_nohuge:
> --
> 2.25.4
--
Michal Hocko
SUSE Labs