Date: Wed, 26 Jun 2024 21:31:57 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Takaya Saeki <takayas@...omium.org>
Cc: Matthew Wilcox <willy@...radead.org>, Andrew Morton
 <akpm@...ux-foundation.org>, Steven Rostedt <rostedt@...dmis.org>, Masami
 Hiramatsu <mhiramat@...nel.org>, Mathieu Desnoyers
 <mathieu.desnoyers@...icios.com>, Junichi Uekawa <uekawa@...omium.org>,
 linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
 linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v2] filemap: add trace events for get_pages, map_pages,
 and fault

On Thu, 20 Jun 2024 16:19:03 +0000
Takaya Saeki <takayas@...omium.org> wrote:

> To allow precise tracking of which page-cache pages are accessed, add
> new tracepoints that trigger when a process actually accesses them.
> 
> The ureadahead program used by ChromeOS traces the disk access of
> programs as they start up at boot up. It uses mincore(2) or the
> 'mm_filemap_add_to_page_cache' trace event to accomplish this. It stores
> this information in a "pack" file and on subsequent boots, it will read
> the pack file and call readahead(2) on the information so that disk
> storage can be loaded into RAM before the applications actually need it.
> 
> A problem we see is that the kernel's readahead algorithm can
> aggressively pull in more data than needed (to try and accomplish the
> same goal), and this extra data is also recorded. The end result is
> that the pack file contains a lot of pages on disk that are never
> actually used. Calling readahead(2) on these unused pages can slow
> down system boot-up times.
> 
> To solve this, add 3 new trace events: get_pages, map_pages, and fault.
> These will be used to trace the pages that are not only pulled in from
> disk, but are actually used by the application. Only those pages will
> be stored in the pack file, and this helps boot-up performance.
> 
> With the combination of these 3 new trace events and
> mm_filemap_add_to_page_cache, we observed a 7.3% to 20% reduction in
> pack file size on ChromeOS, varying by device.
> 

This looks good to me from the trace-event point of view.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Thanks!

> Signed-off-by: Takaya Saeki <takayas@...omium.org>
> ---
> Changelog between v2 and v1:
> - Fix a file offset type usage by casting pgoff_t to loff_t
> - Fix the format string of dev and inode
> 
>  include/trace/events/filemap.h | 84 ++++++++++++++++++++++++++++++++++
>  mm/filemap.c                   |  4 ++
>  2 files changed, 88 insertions(+)
> 
> V1: https://lore.kernel.org/all/20240618093656.1944210-1-takayas@chromium.org/
> 
> diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
> index 46c89c1e460c..3a94bd633bf0 100644
> --- a/include/trace/events/filemap.h
> +++ b/include/trace/events/filemap.h
> @@ -56,6 +56,90 @@ DEFINE_EVENT(mm_filemap_op_page_cache, mm_filemap_add_to_page_cache,
>  	TP_ARGS(folio)
>  	);
>  
> +DECLARE_EVENT_CLASS(mm_filemap_op_page_cache_range,
> +
> +	TP_PROTO(
> +		struct address_space *mapping,
> +		pgoff_t index,
> +		pgoff_t last_index
> +	),
> +
> +	TP_ARGS(mapping, index, last_index),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned long, i_ino)
> +		__field(dev_t, s_dev)
> +		__field(unsigned long, index)
> +		__field(unsigned long, last_index)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->i_ino = mapping->host->i_ino;
> +		if (mapping->host->i_sb)
> +			__entry->s_dev =
> +				mapping->host->i_sb->s_dev;
> +		else
> +			__entry->s_dev = mapping->host->i_rdev;
> +		__entry->index = index;
> +		__entry->last_index = last_index;
> +	),
> +
> +	TP_printk(
> +		"dev=%d:%d ino=%lx ofs=%lld max_ofs=%lld",
> +		MAJOR(__entry->s_dev),
> +		MINOR(__entry->s_dev), __entry->i_ino,
> +		((loff_t)__entry->index) << PAGE_SHIFT,
> +		((loff_t)__entry->last_index) << PAGE_SHIFT
> +	)
> +);
> +
> +DEFINE_EVENT(mm_filemap_op_page_cache_range, mm_filemap_get_pages,
> +	TP_PROTO(
> +		struct address_space *mapping,
> +		pgoff_t index,
> +		pgoff_t last_index
> +	),
> +	TP_ARGS(mapping, index, last_index)
> +);
> +
> +DEFINE_EVENT(mm_filemap_op_page_cache_range, mm_filemap_map_pages,
> +	TP_PROTO(
> +		struct address_space *mapping,
> +		pgoff_t index,
> +		pgoff_t last_index
> +	),
> +	TP_ARGS(mapping, index, last_index)
> +);
> +
> +TRACE_EVENT(mm_filemap_fault,
> +	TP_PROTO(struct address_space *mapping, pgoff_t index),
> +
> +	TP_ARGS(mapping, index),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned long, i_ino)
> +		__field(dev_t, s_dev)
> +		__field(unsigned long, index)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->i_ino = mapping->host->i_ino;
> +		if (mapping->host->i_sb)
> +			__entry->s_dev =
> +				mapping->host->i_sb->s_dev;
> +		else
> +			__entry->s_dev = mapping->host->i_rdev;
> +		__entry->index = index;
> +	),
> +
> +	TP_printk(
> +		"dev=%d:%d ino=%lx ofs=%lld",
> +		MAJOR(__entry->s_dev),
> +		MINOR(__entry->s_dev), __entry->i_ino,
> +		((loff_t)__entry->index) << PAGE_SHIFT
> +	)
> +);
> +
>  TRACE_EVENT(filemap_set_wb_err,
>  		TP_PROTO(struct address_space *mapping, errseq_t eseq),
>  
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 876cc64aadd7..39f9d7fb3d2c 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2556,6 +2556,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
>  			goto err;
>  	}
>  
> +	trace_mm_filemap_get_pages(mapping, index, last_index);
>  	return 0;
>  err:
>  	if (err < 0)
> @@ -3286,6 +3287,8 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  	if (unlikely(index >= max_idx))
>  		return VM_FAULT_SIGBUS;
>  
> +	trace_mm_filemap_fault(mapping, index);
> +
>  	/*
>  	 * Do we have something in the page cache already?
>  	 */
> @@ -3652,6 +3655,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
>  	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
>  	add_mm_counter(vma->vm_mm, folio_type, rss);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> +	trace_mm_filemap_map_pages(mapping, start_pgoff, end_pgoff);
>  out:
>  	rcu_read_unlock();
>  
> -- 
> 2.45.2.627.g7a2c4fd464-goog
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
