Message-ID: <97ccf48e-f30c-4abd-b8ff-2b5310a8b60f@suse.cz>
Date: Wed, 23 Oct 2024 09:34:59 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Yu Zhao <yuzhao@...gle.com>
Cc: Michal Hocko <mhocko@...e.com>, Andrew Morton
 <akpm@...ux-foundation.org>, David Rientjes <rientjes@...gle.com>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org, Link Lin
 <linkl@...gle.com>, Mel Gorman <mgorman@...hsingularity.net>,
 Matt Fleming <mfleming@...udflare.com>
Subject: Re: [PATCH mm-unstable v1] mm/page_alloc: try not to overestimate
 free highatomic

On 10/23/24 08:36, Yu Zhao wrote:
> On Tue, Oct 22, 2024 at 4:53 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>>
>> +Cc Mel and Matt
>>
>> On 10/21/24 19:25, Michal Hocko wrote:
>> > On Mon 21-10-24 11:10:50, Yu Zhao wrote:
>> >> On Mon, Oct 21, 2024 at 2:13 AM Michal Hocko <mhocko@...e.com> wrote:
>> >> >
>> >> > On Sat 19-10-24 23:13:15, Yu Zhao wrote:
>> >> > > OOM kills due to vastly overestimated free highatomic reserves were
>> >> > > observed:
>> >> > >
>> >> > >   ... invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0 ...
>> >> > >   Node 0 Normal free:1482936kB boost:0kB min:410416kB low:739404kB high:1068392kB reserved_highatomic:1073152KB ...
>> >> > >   Node 0 Normal: 1292*4kB (ME) 1920*8kB (E) 383*16kB (UE) 220*32kB (ME) 340*64kB (E) 2155*128kB (UE) 3243*256kB (UE) 615*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 1477408kB
>> >> > >
>> >> > > The second line above shows that the OOM kill was due to the following
>> >> > > condition:
>> >> > >
>> >> > >   free (1482936kB) - reserved_highatomic (1073152kB) = 409784KB < min (410416kB)
>> >> > >
>> >> > > And the third line shows there were no free pages in any
>> >> > > MIGRATE_HIGHATOMIC pageblocks, which otherwise would show up as type
>> >> > > 'H'. Therefore __zone_watermark_unusable_free() overestimated free
>> >> > > highatomic reserves. IOW, it underestimated the usable free memory by
>> >> > > over 1GB, which resulted in the unnecessary OOM kill.
>> >> >
>> >> > Why doesn't unreserve_highatomic_pageblock deal with this situation?
>> >>
>> >> The current behavior of unreserve_highatomic_pageblock() seems WAI to
>> >> me: it unreserves highatomic pageblocks that contain *free* pages so
>>
>> Hm, I don't think it's completely WAI. The intention is that we should be
>> able to unreserve the highatomic pageblocks before going OOM, and there
>> seems to be an unintended corner case: if the pageblocks are fully
>> exhausted, they are not reachable for unreserving.
> 
> I still think unreserving should only apply to highatomic PBs that
> contain free pages. Otherwise, it seems to me that it'd be
> self-defeating because:
> 1. Unreserving fully used highatomic PBs can't fulfill the alloc
> demand immediately.

I thought the alloc demand is only blocked by the pessimistic watermark
calculation. Usable free pages exist, but the allocation is not allowed to
use them.
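
For reference, the pessimistic part is roughly the following (a simplified
sketch of __zone_watermark_unusable_free() and the watermark check, from
memory, with the alloc_flags handling omitted):

/* simplified sketch, not the exact mm/page_alloc.c code */
static inline long __zone_watermark_unusable_free(struct zone *z,
		unsigned int order, unsigned int alloc_flags)
{
	long unusable_free = (1 << order) - 1;

	/*
	 * For allocations without access to the highatomic reserve (the
	 * alloc_flags check is omitted here), the whole
	 * nr_reserved_highatomic is treated as unusable, whether or not
	 * those pageblocks still contain any free pages. Cheap, but it
	 * can overestimate by up to ~1% of the zone.
	 */
	unusable_free += z->nr_reserved_highatomic;

	return unusable_free;
}

/* and in __zone_watermark_ok(), roughly: */
	free_pages -= __zone_watermark_unusable_free(z, order, alloc_flags);
	if (free_pages <= min + z->lowmem_reserve[highest_zoneidx])
		return false;	/* -> reclaim and eventually OOM */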

> 2. More importantly, it only takes one alloc failure in
> __alloc_pages_direct_reclaim() to reset nr_reserved_highatomic to 2MB,
> from as high as 1% of a zone (in this case 1GB). IOW, it makes more
> sense to me that highatomic only unreserves what it doesn't fully use
> each time unreserve_highatomic_pageblock() is called, not everything
> it got (except the last PB).

But if the highatomic pageblocks are already full, we are not really
removing any actual highatomic reserves just by changing the migratetype
and decreasing nr_reserved_highatomic, are we? In fact, that would allow
the reserves to grow again with some actual free pages in the future.
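
To illustrate, for a pageblock with no free pages left, unreserving would
boil down to roughly this (sketch; the real unreserve_highatomic_pageblock()
would also move any free pages to another list, but here there are none):

	/* sketch: "unreserve" a fully used highatomic pageblock;
	 * 'page' is any page in that pageblock, under zone->lock */
	set_pageblock_migratetype(page, MIGRATE_MOVABLE);
	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
					    zone->nr_reserved_highatomic);
	/* no free pages to move; only the accounting (and thus the
	 * watermark check) changes */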

> Also, not being reachable from free_area[] isn't really a big problem.
> There are ways to solve this without scanning the PB bitmap.

Sure, if we agree it's the way to go.

>> The nr_reserved_highatomic count is then also fully misleading, as it
>> prevents allocations due to a limit that does not reflect reality.
> 
> Right, and the comments warn about this.

Yes, and it explains that this is to avoid the cost of searching the free
lists. Your fix introduces that cost, and that's not really great for a
watermark check fast path. I'd rather move the cost to highatomic
unreserving, which is not a fast path.

>> Your patch addresses the second issue, but there's a
>> cost to it when calculating the watermarks, and it would be better to
>> address the root issue instead.
> 
> Theoretically, yes. And I don't think it's actually measurable
> considering the paths (alloc/reclaim) we are in -- all the data
> structures this patch accesses should already have been cache-hot, due
> to unreserve_highatomic_pageblock(), etc.

__zone_watermark_unusable_free() will be executed in every allocation's
fast path, not only after we recently did
unreserve_highatomic_pageblock(). AFAICS, as soon as nr_reserved_highatomic
is over pageblock_nr_pages, we'll unconditionally start counting precisely,
which is exactly what the design wanted to avoid.
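
For the record, "counting precisely" means something like the following (my
sketch of the cost, not the actual patch; exact macro and member names may
differ):

	/* sketch: count free pages actually sitting on highatomic free
	 * lists; needs zone->lock for a safe walk */
	unsigned long free_highatomic = 0;
	unsigned int order;
	struct page *page;

	for (order = 0; order <= MAX_PAGE_ORDER; order++) {
		struct free_area *area = &z->free_area[order];

		list_for_each_entry(page,
				&area->free_list[MIGRATE_HIGHATOMIC],
				buddy_list)
			free_highatomic += 1 << order;
	}
	/* then treat min(free_highatomic, z->nr_reserved_highatomic) as
	 * unusable instead of the full reserve */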

> Also, we have not agreed on the root cause yet.
> 
>> >> that those pages can become usable to others. There is nothing to
>> >> unreserve when they have no free pages.
>>
>> Yeah, there are no actual free pages to unreserve, but unreserving would
>> fix the nr_reserved_highatomic overestimate and thus allow allocations to
>> proceed.
> 
> Yes, but honestly, I think this is going to cause a regression in
> highatomic allocs.

I think not, as having a more realistic counter of what's actually reserved
(and not already used up) can also allow reserving new pageblocks.
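
E.g. reserve_highatomic_pageblock() refuses to reserve another pageblock
once the counter hits the cap, roughly (sketch from memory):

	/* sketch: the cap check in reserve_highatomic_pageblock() */
	unsigned long max_managed = (zone_managed_pages(zone) / 100) +
				    pageblock_nr_pages;

	if (zone->nr_reserved_highatomic >= max_managed)
		return;	/* a stale, inflated counter blocks new reservations */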

>> > I do not follow. How can you have reserved highatomic pages of that size
>> > without having page blocks with free memory? In other words, is this an
>> > accounting problem or a reserves problem? This is not really clear from
>> > your description.
>>
>> I think the problem is finding the highatomic pageblocks so they can be
>> unreserved once they become full. The proper fix is not exactly trivial,
>> though. Either we'll have to scan for highatomic pageblocks in the
>> pageblock bitmap, or track them using an additional data structure.
> 
> Assuming we want to unreserve fully used highatomic PBs, we wouldn't
> need to scan for them or track them. We'd only need to track the delta
> between how many we want to unreserve (full or not) and how many we
> are actually able to. The first page freed in a PB that's highatomic
> would need to try to reduce the delta by changing the MT.

Hm, that assumes we're adding some checks to the free fast path, and, for
that to work, also that a page in a highatomic PB will be freed soon enough
after the point where we decide we need to unreserve something. That's not
so different from the current assumption that we'll find such a free page
already on the free list immediately.
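
To be concrete, I read the proposal as something like this (hypothetical
sketch; highatomic_unreserve_debt would be a new per-zone field, it does
not exist today):

	/* hypothetical hook where a page is freed back to the buddy,
	 * under zone->lock; highatomic_unreserve_debt counts pageblocks
	 * we still "owe" an unreserve for */
	if (zone->highatomic_unreserve_debt &&
	    get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC) {
		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
		zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
						zone->nr_reserved_highatomic);
		zone->highatomic_unreserve_debt--;
	}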

> To summarize, I think this is an estimation problem, which I would
> categorize as a lesser problem than accounting problems. But it sounds
> to me that you think it's a policy problem, i.e., the highatomic
> unreserving policy is wrong or not properly implemented?

Yeah, I'd say not properly implemented, but that sounds like a mechanism
problem, not a policy problem, to me :)
