Message-ID: <20150324131708.GB14900@danjae.kornet>
Date:	Tue, 24 Mar 2015 22:17:08 +0900
From:	Namhyung Kim <namhyung@...nel.org>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Joonsoo Kim <js1304@...il.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Jiri Olsa <jolsa@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	David Ahern <dsahern@...il.com>,
	Minchan Kim <minchan@...nel.org>
Subject: Re: [PATCH 2/5] perf kmem: Analyze page allocator events also

Hi Ingo,

On Tue, Mar 24, 2015 at 08:08:03AM +0100, Ingo Molnar wrote:
> * Joonsoo Kim <js1304@...il.com> wrote:
> > How about following change and making 'perf kmem' print pfn?
> > If we store pfn on the trace buffer, we can print $debugfs/tracing/trace
> > as is and 'perf kmem' can also print pfn.
> > 
> > Thanks.
> > 
> > diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
> > index 4ad10ba..9dcfd0b 100644
> > --- a/include/trace/events/kmem.h
> > +++ b/include/trace/events/kmem.h
> > @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
> >         TP_ARGS(page, order, gfp_flags, migratetype),
> > 
> >         TP_STRUCT__entry(
> > -               __field(        struct page *,  page            )
> > +               __field(        unsigned long,  pfn             )
> >                 __field(        unsigned int,   order           )
> >                 __field(        gfp_t,          gfp_flags       )
> >                 __field(        int,            migratetype     )
> >         ),
> > 
> >         TP_fast_assign(
> > -               __entry->page           = page;
> > +               __entry->pfn            = page ? page_to_pfn(page) : -1;
> >                 __entry->order          = order;
> >                 __entry->gfp_flags      = gfp_flags;
> >                 __entry->migratetype    = migratetype;
> >         ),
> > 
> >         TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
> > -               __entry->page,
> > -               __entry->page ? page_to_pfn(__entry->page) : 0,
> > +               __entry->pfn != -1 ? pfn_to_page(__entry->pfn) : NULL,
> > +               __entry->pfn != -1 ? __entry->pfn : 0,
> >                 __entry->order,
> >                 __entry->migratetype,
> >                 show_gfp_flags(__entry->gfp_flags))
> 
> Acked-by: Ingo Molnar <mingo@...nel.org>
> 
> It would be very nice to make all the other page granular tracepoints 
> output pfn (which is a physical address that can be resolved to 'node' 
> and other properties), not 'struct page *' (which is a kernel resource 
> with little meaning to user-space tooling).
> 
> I.e. the following tracepoints:
> 
> triton:~/tip> git grep -E '__field.*struct page *' include/trace/
> include/trace/events/filemap.h:         __field(struct page *, page)
> include/trace/events/kmem.h:            __field(        struct page *,  page            )
> include/trace/events/kmem.h:            __field(        struct page *,  page            )
> include/trace/events/kmem.h:            __field(        struct page *,  page            )
> include/trace/events/kmem.h:            __field(        struct page *,  page            )
> include/trace/events/kmem.h:            __field(        struct page *,  page                    )
> include/trace/events/pagemap.h:         __field(struct page *,  page    )
> include/trace/events/pagemap.h:         __field(struct page *,  page    )
> include/trace/events/vmscan.h:          __field(struct page *, page)

Okay, will do.
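
For example, converting mm_page_free would look something like the
following (rough, untested sketch; the other kmem.h events listed
above would be converted the same way):

TRACE_EVENT(mm_page_free,

        TP_PROTO(struct page *page, unsigned int order),

        TP_ARGS(page, order),

        TP_STRUCT__entry(
                /* record the pfn instead of the struct page pointer */
                __field(        unsigned long,  pfn             )
                __field(        unsigned int,   order           )
        ),

        TP_fast_assign(
                __entry->pfn            = page_to_pfn(page);
                __entry->order          = order;
        ),

        /* keep the page=%p output so existing scripts still parse */
        TP_printk("page=%p pfn=%lu order=%d",
                pfn_to_page(__entry->pfn),
                __entry->pfn,
                __entry->order)
);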

> 
> there's very little breakage I can imagine: they have traced pointers 
> to 'struct page', which is a pretty opaque page identifier to 
> user-space, and they'll trace pfn's in the future, which still serves 
> as a page identifier.

Agreed.

> 
> One thing would be important: to do all these changes at once, to make 
> sure that the various page identifiers can be compared.

OK

> 
> Also, we might keep the 'page' field name if anything relies on that - 
> but 'pfn' is even better.

Another option is to keep the page field and add a new pfn field.
The events in pagemap.h already do it this way.  This would minimize
the possible breakage but increase the trace size somewhat.
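
Roughly like this for mm_page_alloc (untested sketch, modeled on the
mm_lru_insertion event in pagemap.h which already records both):

        TP_STRUCT__entry(
                __field(        struct page *,  page            )
                __field(        unsigned long,  pfn             )
                __field(        unsigned int,   order           )
                __field(        gfp_t,          gfp_flags       )
                __field(        int,            migratetype     )
        ),

        TP_fast_assign(
                __entry->page           = page;
                /* -1 marks a failed allocation (page == NULL) */
                __entry->pfn            = page ? page_to_pfn(page) : -1;
                __entry->order          = order;
                __entry->gfp_flags      = gfp_flags;
                __entry->migratetype    = migratetype;
        ),

The TP_printk() part could then use __entry->page and __entry->pfn
directly, without the pfn_to_page() round trip.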

Thanks,
Namhyung
