Message-ID: <a7afc485-9036-8f09-e582-f2eb45620670@redhat.com>
Date:   Tue, 10 Aug 2021 15:24:45 -0400
From:   Nico Pache <npache@...hat.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     linux-mm@...ck.org, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, aquini@...hat.com,
        shakeelb@...gle.com, llong@...hat.com, mhocko@...e.com,
        hakavlad@...ox.lv
Subject: Re: [PATCH v3] vm_swappiness=0 should still try to avoid swapping
 anon memory



On 8/10/21 11:27 AM, Johannes Weiner wrote:
> Hello Nico,
> 
> On Mon, Aug 09, 2021 at 06:37:40PM -0400, Nico Pache wrote:
>> Since commit 170b04b7ae49 ("mm/workingset: prepare the workingset detection
>> infrastructure for anon LRU") and commit b91ac374346b ("mm: vmscan: enforce
>> inactive:active ratio at the reclaim root") swappiness can start prematurely
> 
> Could clarify what you mean by "prematurely"?

Hi Johannes! 

The reason I used the words "prematurely" and "indiscriminately" to describe the behavior is that, AFAICS, the swappiness value is not being considered and this isn't an OOM case, so reclaim is prematurely going after anon memory.

> 
> The new balancing algorithm targets the lowest amount of overall
> paging IO performed across the anon and file sets. It doesn't swap
> unless it has an indication that a couple of swap writes are
> outweighed by a reduction of reads on the cache side.
> 
> Is this not working for you?

Well, it is for the most part, but to your point below, the sc->file_is_tiny case can directly bypass the meaning of swappiness and force anon scanning regardless.

> 
>> swapping anon memory. This is due to the assumption that refaulting anon should
>> always allow the shrinker to target anon memory.
> 
> This doesn't sound right. Did you mean "refaulting file"?

<code>

	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
	if (refaults != target_lruvec->refaults[0] ||
	    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
		sc->may_deactivate |= DEACTIVATE_ANON;

</code>

Perhaps this is incorrect then? target_lruvec is using refaults[0], which is collected in snapshot_refaults(), and snapshot_refaults() populates index 0 with the WORKINGSET_ACTIVATE_ANON page state. The refaults variable here reads the same counter. So I assumed the refault check (new refault count != snapshot count) is comparing the anon workingset, not refaulting of the file cache.
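
For reference, here is roughly what snapshot_refaults() looks like on these kernels (paraphrased from memory, so treat the exact lines as an approximation):

<code>

static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
{
	struct lruvec *target_lruvec;
	unsigned long refaults;

	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
	/* index 0 snapshots the anon workingset activations... */
	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON);
	target_lruvec->refaults[0] = refaults;
	/* ...and index 1 snapshots the file workingset activations */
	refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_FILE);
	target_lruvec->refaults[1] = refaults;
}

</code>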

> 
>> Add a check for swappiness being >0 before indiscriminately
>> targeting Anon.
> 
>> Before these commits when a user had swappiness=0 anon memory would
>> rarely get swapped; this behavior has remained constant since
>> RHEL5. This commit keeps that behavior intact and prevents the new
>> workingset refaulting from challenging the anon memory when
>> swappiness=0.
> 
> I'm wondering how you're getting anon scans with swappiness=0. If you
> look at get_scan_count(), SCAN_FRACT with swappiness=0 should always
> result in ap = fraction[0] = 0, which never yields any anon scan
> targets. So I'm thinking you're running into sc->file_is_tiny
> situations, meaning remaining file pages alone are not enough to
> restore watermarks anymore. Is that possible?

Yes, DEACTIVATE_ANON is enabling the file_is_tiny case in shrink_node(). That is what I'm trying to prevent in the swappiness=0 case.
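
For anyone following along, this is roughly the SCAN_FRACT math being referred to in get_scan_count() (again paraphrased, so the exact lines may differ):

<code>

	/*
	 * Balance anon vs file scan pressure by the relative IO cost
	 * of reclaiming each list, scaled by swappiness.
	 */
	total_cost = sc->anon_cost + sc->file_cost;
	anon_cost = total_cost + sc->anon_cost;
	file_cost = total_cost + sc->file_cost;
	total_cost = anon_cost + file_cost;

	/* with swappiness == 0, ap == 0, so SCAN_FRACT yields no anon scan targets */
	ap = swappiness * (total_cost + 1);
	ap /= anon_cost + 1;

	fp = (200 - swappiness) * (total_cost + 1);
	fp /= file_cost + 1;

	fraction[0] = ap;
	fraction[1] = fp;
	denominator = ap + fp;

</code>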

> 
> In that case, anon scanning is forced, and always has been. But the
> difference is that before the above-mentioned patches, we'd usually
> force scan just the smaller inactive list, whereas now we disable
> active list protection due to swapins and target the entire anon
> set. I suppose you'd prefer we go back to that, so that more pressure
> remains proportionally on the file set, and just enough anon to get
> above the watermarks again

Well, kind of. It used to be that inactive_list_is_low() would allow for the scanning of anon memory, but I am not removing that case here. That's why my v3 separated the swappiness check from the inactive_is_low() check. Furthermore, the refault-driven active list protection used to only be considered on the file LRU, as seen in the ~4.18 inactive_list_is_low().
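
For context, this is roughly the ~4.18-era check I mean; note that the refault-driven disabling of active list protection was gated on the file LRU (quoting from memory, so take it as a sketch):

<code>

	/*
	 * When refaults are being observed, it means a new workingset
	 * is being established. Disable active list protection to get
	 * rid of the stale workingset quickly.
	 */
	if (file && actual_reclaim && lruvec->refaults != refaults) {
		inactive_ratio = 0;
	} else {
		gb = (inactive + active) >> (30 - PAGE_SHIFT);
		if (gb)
			inactive_ratio = int_sqrt(10 * gb);
		else
			inactive_ratio = 1;
	}

	return inactive * inactive_ratio < active;

</code>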

> 
> One complication I could see with that is that we no longer start anon
> pages on the active list like we used to. We used to say active until
> proven otherwise; now it's inactive until proven otherwise. It's
> possible for the inactive list to contain a much bigger share of the
> total anon set now than before, in which case your patch wouldn't have
> the desired effect of targetting just a small amount of anon pages to
> get over the watermark hump.

Yes, I believe this also makes the problem worse. I'm not sure if giving anon memory the same read-once optimization (starting on the inactive list) as the file cache is the way to go.

> 
> We may need a get_scan_count() solution after all, and I agree with
> previous reviews that this is the better location for such an issue...

I can't see why a get_scan_count() solution is better than removing the problem where it starts.

> 
> One thing I think we should do - whether we need more on top or not -
> is allowing file reclaim to continue when sc->file_is_tiny. Yes, we
> also need anon to meet the watermarks, but it's not clear why we
> should stop scanning file pages altogether: it's possible they get us
> there 99% of the way, and somebody clearly wanted us to swap as little
> as possible to end up in a situation like that, so:
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index eeab6611993c..90dac3dc9903 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2477,7 +2477,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  	 * If the system is almost out of file pages, force-scan anon.
>  	 */
>  	if (sc->file_is_tiny) {
> -		scan_balance = SCAN_ANON;
> +		scan_balance = SCAN_EQUAL;
>  		goto out;
>  	}

I agree; I think allowing it to scan both would be better as well.

Cheers!
-- Nico 
  
