lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 29 Jun 2020 10:03:50 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Joonsoo Kim <js1304@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>, kernel-team@....com,
	Vlastimil Babka <vbabka@...e.cz>,
	Christoph Hellwig <hch@...radead.org>,
	Roman Gushchin <guro@...com>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function

On Mon 29-06-20 15:41:37, Joonsoo Kim wrote:
> On Fri, 26 Jun 2020 at 16:33, Michal Hocko <mhocko@...nel.org> wrote:
> >
> > On Fri 26-06-20 14:02:49, Joonsoo Kim wrote:
> > > On Thu, 25 Jun 2020 at 21:05, Michal Hocko <mhocko@...nel.org> wrote:
> > > >
> > > > On Tue 23-06-20 15:13:45, Joonsoo Kim wrote:
> > [...]
> > > > > -struct page *new_page_nodemask(struct page *page,
> > > > > -		int preferred_nid, nodemask_t *nodemask)
> > > > > +struct page *alloc_migration_target(struct page *page, unsigned long private)
> > > > >  {
> > > > > -	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
> > > > > +	struct migration_target_control *mtc;
> > > > > +	gfp_t gfp_mask;
> > > > >  	unsigned int order = 0;
> > > > >  	struct page *new_page = NULL;
> > > > > +	int zidx;
> > > > > +
> > > > > +	mtc = (struct migration_target_control *)private;
> > > > > +	gfp_mask = mtc->gfp_mask;
> > > > >
> > > > >  	if (PageHuge(page)) {
> > > > >  		return alloc_huge_page_nodemask(
> > > > > -				page_hstate(compound_head(page)),
> > > > > -				preferred_nid, nodemask, 0, false);
> > > > > +				page_hstate(compound_head(page)), mtc->nid,
> > > > > +				mtc->nmask, gfp_mask, false);
> > > > >  	}
> > > > >
> > > > >  	if (PageTransHuge(page)) {
> > > > > +		gfp_mask &= ~__GFP_RECLAIM;
> > > >
> > > > What's up with this gfp_mask modification?
> > >
> > > THP page allocation uses standard gfp masks, GFP_TRANSHUGE_LIGHT and
> > > GFP_TRANSHUGE. The __GFP_RECLAIM flags are a big part of this
> > > standard mask design, so I clear them here so as not to disrupt the
> > > THP gfp mask.
> >
> > Why wasn't this needed before? I guess I must be missing something
> > here. This patch should be a mostly mechanical convergence of
> > existing migration callbacks, but this change adds a new behavior
> > AFAICS.
>
> Before this patch, a user cannot specify a gfp_mask and THP allocation
> uses GFP_TRANSHUGE statically.

Unless I am misreading, there are code paths (e.g. new_page_nodemask)
which simply add GFP_TRANSHUGE to GFP_USER | __GFP_MOVABLE |
__GFP_RETRY_MAYFAIL. And this goes all the way back to the introduction
of thp migration.

> After this patch, a user can specify a gfp_mask and it could conflict
> with GFP_TRANSHUGE. This code tries to avoid that conflict.
>
> > It would effectively drop __GFP_RETRY_MAYFAIL and __GFP_KSWAPD_RECLAIM.
>
> __GFP_RETRY_MAYFAIL isn't dropped. __GFP_RECLAIM is
> "___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM", so __GFP_KSWAPD_RECLAIM
> would be dropped for THP allocation. IIUC, THP allocation doesn't use
> __GFP_KSWAPD_RECLAIM since its overhead is too large, and this
> overhead should be borne by the caller rather than a system thread
> (kswapd) and so on.

Yes, there is a reason why kswapd is excluded from THP allocations in
the page fault path. Maybe we want to extend that behavior to migration
as well. I do not have a strong opinion on that because I haven't seen
excessive kswapd reclaim due to THP migrations; they are likely too
rare. But as I've said in my previous email, make this a separate patch
with an explanation of why we want it.
-- 
Michal Hocko
SUSE Labs