Message-ID: <alpine.LSU.2.11.2012141317351.1925@eggly.anvils>
Date:   Mon, 14 Dec 2020 13:50:16 -0800 (PST)
From:   Hugh Dickins <hughd@...gle.com>
To:     Yu Zhao <yuzhao@...gle.com>
cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Hugh Dickins <hughd@...gle.com>,
        Alex Shi <alex.shi@...ux.alibaba.com>,
        Michal Hocko <mhocko@...nel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Roman Gushchin <guro@...com>, Vlastimil Babka <vbabka@...e.cz>,
        Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/11] mm: enlarge the "int nr_pages" parameter of
 update_lru_size()

On Mon, 7 Dec 2020, Yu Zhao wrote:

> update_lru_sizes() defines an unsigned long argument and passes it as
> nr_pages to update_lru_size(). Though this isn't causing any overflows
> I'm aware of, it's a bad idea to go through that demotion, given that
> we recently stumbled on a related type promotion problem, fixed by
> commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on 64-bit").
> 
> Note that the underlying counters are already in long. This is another
> reason we shouldn't have the demotion.
> 
> This patch enlarges all relevant parameters on the path to the final
> underlying counters:
> 	update_lru_size(int -> long)
> 		if memcg:
> 			__mod_lruvec_state(int -> long)
> 				if smp:
> 					__mod_node_page_state(long)
> 				else:
> 					__mod_node_page_state(int -> long)
> 			__mod_memcg_lruvec_state(int -> long)
> 				__mod_memcg_state(int -> long)
> 		else:
> 			__mod_lruvec_state(int -> long)
> 				if smp:
> 					__mod_node_page_state(long)
> 				else:
> 					__mod_node_page_state(int -> long)
> 
> 		__mod_zone_page_state(long)
> 
> 		if memcg:
> 			mem_cgroup_update_lru_size(int -> long)
> 
> Note that __mod_node_page_state() for the smp case and
> __mod_zone_page_state() already use long. So this change also fixes
> the inconsistency.
> 
> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
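
For concreteness, the demotion described above, as a throwaway
userspace sketch (made-up names, not the real mm/ helpers):

	#include <stdio.h>

	/* stand-ins for update_lru_size() before and after this patch */
	static void narrow_update(int nr_pages)  { printf("int  sees %d\n",  nr_pages); }
	static void wide_update(long nr_pages)   { printf("long sees %ld\n", nr_pages); }

	int main(void)
	{
		/* far bigger than any real LRU batch, but > INT_MAX */
		unsigned long nr_taken = 3000000000UL;

		narrow_update(nr_taken);  /* truncated: a negative number, in practice */
		wide_update(nr_taken);    /* preserved: prints 3000000000 */
		return 0;
	}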

NAK from me to this 11/11: I'm running happily with your 1-10 on top of
mmotm (I'll review them in a few days, but am currently more concerned
with Rik's shmem huge gfp_mask), but I had to leave this one out.

You think you are future-proofing with this, but it is present-breaking.

It looks plausible (though the selection seems arbitrary: why do these
particular functions use long but others not? why __mod_memcg_state()
long, but mod_memcg_state() int?), and I was fooled; but fortunately I
was still testing with memcg moving, for Alex's patchset.

I soon got stuck waiting in balance_dirty_pages(), with /proc/vmstat showing:
nr_anon_pages 2263142822377729
nr_mapped 125095217474159
nr_file_pages 225421358649526
nr_dirty 8589934592
nr_writeback 1202590842920
nr_shmem 40501541678768
nr_anon_transparent_hugepages 51539607554
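
Decoding a few of those (my arithmetic, on the guess that each
mis-widened negative delta inflates its counter by exactly 2^32 =
4294967296), they split into a multiple of 2^32 plus a sane-looking
remainder:

	nr_dirty          8589934592 =    2 * 2^32 +     0
	nr_writeback   1202590842920 =  280 * 2^32 +    40
	nr_shmem      40501541678768 = 9430 * 2^32 + 77488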

That last one (anon THPs) has nothing to do with this patch, but it
illustrates what Muchun is fixing in his 1/7 "mm: memcontrol: fix
NR_ANON_THPS accounting in charge moving".

The rest of them could be fixed by changing mem_cgroup_move_account()'s
"unsigned int nr_pages" to "long nr_pages" in this patch, but I think
it's safer just to drop the patch: the promotion of "unsigned int" to
"long" does not work as you would like it to.

I see that mm/vmscan.c contains several "unsigned int" counts of pages;
everything works fine at present so far as I know, and those appeared
to work even with your patch. But I am not confident in my test
coverage, nor confident that we can outlaw unsigned int page counts in
future.

Hugh
