Message-ID: <20171108155640.Horde.pdOYcdS3_Zox7UhqM007fgC@www.imp.polymtl.ca>
Date:   Wed, 08 Nov 2017 15:56:40 +0000
From:   Abderrahmane Benbachir <abderrahmane.benbachir@...ymtl.ca>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        peterz@...radead.org, mathieu.desnoyers@...icios.com
Subject: Re: [RFC PATCH v2] ftrace: support very early function tracing


Steven Rostedt <rostedt@...dmis.org> wrote:


> 	ring_buffer_set_clock(tr->trace_buffer.buffer,
> 				early_trace_clock);
>
> Then have:
>
> static u64 early_timestamp __initdata;
>
> static __init u64 early_trace_clock(void)
> {
> 	return early_timestamp;
> }
>
> Then we can have:
>
>> +	preempt_disable_notrace();
>> +	for (i = 0; i < vearly_entries_count; i++) {
>> +		entry = &ftrace_vearly_entries[i];
>> +
>> +#ifdef CONFIG_X86_TSC
>> +		ns = cycles_to_ns(entry->clock, cpu_khz);
>> +#else
>> +		ns = entry->clock;
>> +#endif
>
> 		early_timestamp = ns;
>
>> +		trace_function(tr, entry->ip, entry->parent_ip, 0, 0);
>
> And it will fill the entries properly, and you don't need to worry
> about delta's or anything.
>
>> +	}
>> +	preempt_enable_notrace();
>
> 	/* Set the default clock back */
> 	ring_buffer_set_clock(tr->trace_buffer.buffer,
> 			trace_local_clock);
>

Thanks, this solution looks very clean.
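
Just to make sure I understood it correctly, putting it together would look
roughly like this (a sketch only; ftrace_vearly_replay() and
struct ftrace_vearly_entry are placeholder names, the rest is taken from the
patch and from your mail):

/* Sketch only: replay the saved entries with a temporary ring buffer clock */
static u64 early_timestamp __initdata;

static __init u64 early_trace_clock(void)
{
	return early_timestamp;
}

/* placeholder names for the replay helper and the saved-entry struct */
static __init void ftrace_vearly_replay(struct trace_array *tr)
{
	struct ftrace_vearly_entry *entry;
	u64 ns;
	int i;

	/* have the ring buffer use the saved timestamps */
	ring_buffer_set_clock(tr->trace_buffer.buffer, early_trace_clock);

	preempt_disable_notrace();
	for (i = 0; i < vearly_entries_count; i++) {
		entry = &ftrace_vearly_entries[i];
#ifdef CONFIG_X86_TSC
		ns = cycles_to_ns(entry->clock, cpu_khz);
#else
		ns = entry->clock;
#endif
		/* each entry gets stamped with the time it was recorded */
		early_timestamp = ns;
		trace_function(tr, entry->ip, entry->parent_ip, 0, 0);
	}
	preempt_enable_notrace();

	/* Set the default clock back */
	ring_buffer_set_clock(tr->trace_buffer.buffer, trace_local_clock);
}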

>> +static struct ftrace_vearly_obs_param ftrace_vearly_params[] __initdata = {
>> +	{ .str = "ftrace_vearly", .setup_func = set_ftrace_vearly_enable },
>> +#ifdef CONFIG_DYNAMIC_FTRACE
>> +	{
>> +		.str = "ftrace_notrace",
>> +		.data = &ftrace_data_notrace,
>> +		.setup_func = set_ftrace_vearly_filtering,
>> +	},
>> +	{
>> +		.str = "ftrace_filter",
>> +		.data = &ftrace_data_filter,
>> +		.setup_func = set_ftrace_vearly_filtering,
>> +	},
>> +#endif
>
> Hmm, wouldn't this actually still be able to work even if
> DYNAMIC_FTRACE was not set?

Yes, you're right, it should work without DYNAMIC_FTRACE. But I just noticed that
FTRACE_MCOUNT_RECORD depends on DYNAMIC_FTRACE.

kernel/trace/Kconfig:
...
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

include/asm-generic/vmlinux.lds.h:
...
#ifdef CONFIG_FTRACE_MCOUNT_RECORD
#define MCOUNT_REC()    . = ALIGN(8);                           \
                         VMLINUX_SYMBOL(__start_mcount_loc) = .; \
                         *(__mcount_loc)                         \
                         VMLINUX_SYMBOL(__stop_mcount_loc) = .;
#else
#define MCOUNT_REC()
#endif
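
And without MCOUNT_REC() there is no __mcount_loc table for the filtering to
work from; dynamic ftrace consumes that table at boot roughly like this (a
simplified sketch of what kernel/trace/ftrace.c does, for reference):

/* simplified sketch: the table emitted by MCOUNT_REC() is walked at boot */
extern unsigned long __start_mcount_loc[];
extern unsigned long __stop_mcount_loc[];

void __init ftrace_init(void)
{
	unsigned long count = __stop_mcount_loc - __start_mcount_loc;

	if (!count)		/* empty without FTRACE_MCOUNT_RECORD */
		return;

	/* builds the dyn_ftrace records that the filters are matched against */
	ftrace_process_locs(NULL, __start_mcount_loc, __stop_mcount_loc);
}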


