Message-Id: <20200219082303.GA32242@linux.ibm.com>
Date:   Wed, 19 Feb 2020 09:23:03 +0100
From:   Mike Rapoport <rppt@...ux.ibm.com>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Jeremy Cline <jcline@...hat.com>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [patch 2/2] mm, thp: track fallbacks due to failed memcg charges
 separately

On Tue, Feb 18, 2020 at 06:29:21PM -0800, David Rientjes wrote:
> The thp_fault_fallback stat in /proc/vmstat is incremented if either the
> hugepage allocation fails through the page allocator or the hugepage
> charge fails through mem cgroup.
> 
> This patch leaves this field untouched but adds a new field,
> thp_fault_fallback_charge, which is incremented only when the mem cgroup
> charge fails.
> 
> This distinguishes between faults that want to be backed by hugepages but
> fail due to fragmentation (or low memory conditions) and those that fail
> due to mem cgroup limits.  That can be used to determine the impact of
> fragmentation on the system by excluding faults that failed due to memcg
> usage.
> 
> Signed-off-by: David Rientjes <rientjes@...gle.com>

Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>	# Documentation
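
One note for people who will consume the new counter: since a memcg charge
failure now bumps both thp_fault_fallback and thp_fault_fallback_charge,
the fallbacks caused by allocation failure (fragmentation / low memory)
can be estimated by subtracting the two, as the changelog above suggests.
A minimal userspace sketch, not part of this patch, just illustrating the
arithmetic against /proc/vmstat:

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[128];
	unsigned long long val, fallback = 0, charge = 0;

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}

	/* /proc/vmstat is "name value" pairs, one per line */
	while (fscanf(f, "%127s %llu", name, &val) == 2) {
		if (!strcmp(name, "thp_fault_fallback"))
			fallback = val;
		else if (!strcmp(name, "thp_fault_fallback_charge"))
			charge = val;
	}
	fclose(f);

	/* charge failures are counted in both events, so subtract them */
	printf("fallbacks due to allocation failure: %llu\n",
	       fallback - charge);
	printf("fallbacks due to memcg charge failure: %llu\n", charge);
	return 0;
}

(For a quick look the same can be done with
"grep thp_fault_fallback /proc/vmstat" once the patch is applied.)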

> ---
>  v2:
>   - supported for shmem faults as well per Kirill
>   - fixed wording in documentation and commit description per Mike
> 
>  Documentation/admin-guide/mm/transhuge.rst | 5 +++++
>  include/linux/vm_event_item.h              | 1 +
>  mm/huge_memory.c                           | 2 ++
>  mm/shmem.c                                 | 4 +++-
>  mm/vmstat.c                                | 1 +
>  5 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -310,6 +310,11 @@ thp_fault_fallback
>  	is incremented if a page fault fails to allocate
>  	a huge page and instead falls back to using small pages.
>  
> +thp_fault_fallback_charge
> +	is incremented if a page fault fails to charge a huge page and
> +	instead falls back to using small pages even though the
> +	allocation was successful.
> +
>  thp_collapse_alloc_failed
>  	is incremented if khugepaged found a range
>  	of pages that should be collapsed into one huge page but failed
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -73,6 +73,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		THP_FAULT_ALLOC,
>  		THP_FAULT_FALLBACK,
> +		THP_FAULT_FALLBACK_CHARGE,
>  		THP_COLLAPSE_ALLOC,
>  		THP_COLLAPSE_ALLOC_FAILED,
>  		THP_FILE_ALLOC,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>  	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
>  		put_page(page);
>  		count_vm_event(THP_FAULT_FALLBACK);
> +		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>  		return VM_FAULT_FALLBACK;
>  	}
>  
> @@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
>  			put_page(page);
>  		ret |= VM_FAULT_FALLBACK;
>  		count_vm_event(THP_FAULT_FALLBACK);
> +		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>  		goto out;
>  	}
>  
> diff --git a/mm/shmem.c b/mm/shmem.c
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1872,8 +1872,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
>  					    PageTransHuge(page));
>  	if (error) {
> -		if (vmf && PageTransHuge(page))
> +		if (vmf && PageTransHuge(page)) {
>  			count_vm_event(THP_FAULT_FALLBACK);
> +			count_vm_event(THP_FAULT_FALLBACK_CHARGE);
> +		}
>  		goto unacct;
>  	}
>  	error = shmem_add_to_page_cache(page, mapping, hindex,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1254,6 +1254,7 @@ const char * const vmstat_text[] = {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	"thp_fault_alloc",
>  	"thp_fault_fallback",
> +	"thp_fault_fallback_charge",
>  	"thp_collapse_alloc",
>  	"thp_collapse_alloc_failed",
>  	"thp_file_alloc",

-- 
Sincerely yours,
Mike.
