Date:	Thu, 19 Mar 2015 11:11:24 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Alexei Starovoitov <ast@...mgrid.com>
Cc:	Ingo Molnar <mingo@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Jiri Olsa <jolsa@...hat.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	"David S. Miller" <davem@...emloft.net>,
	Daniel Borkmann <daniel@...earbox.net>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	linux-api@...r.kernel.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 tip 3/8] tracing: allow BPF programs to call
 bpf_ktime_get_ns()

On Mon, 16 Mar 2015 14:49:39 -0700
Alexei Starovoitov <ast@...mgrid.com> wrote:

> bpf_ktime_get_ns() is used by programs to compue time delta between events

 "compute"

> or as a timestamp
> 
> Signed-off-by: Alexei Starovoitov <ast@...mgrid.com>
> ---
>  include/uapi/linux/bpf.h |    1 +
>  kernel/trace/bpf_trace.c |   11 +++++++++++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 4486d36d2e9e..101e509d1001 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -165,6 +165,7 @@ enum bpf_func_id {
>  	BPF_FUNC_map_update_elem, /* int map_update_elem(&map, &key, &value, flags) */
>  	BPF_FUNC_map_delete_elem, /* int map_delete_elem(&map, &key) */
>  	BPF_FUNC_probe_read,      /* int bpf_probe_read(void *dst, int size, void *src) */
> +	BPF_FUNC_ktime_get_ns,    /* u64 bpf_ktime_get_ns(void) */
>  	__BPF_FUNC_MAX_ID,
>  };
>  
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index ba95b131082c..74eb6abda6a1 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -56,6 +56,12 @@ static u64 bpf_probe_read(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
>  	return probe_kernel_read(dst, unsafe_ptr, size);
>  }
>  
> +static u64 bpf_ktime_get_ns(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
> +{
> +	/* NMI safe access to clock monotonic */
> +	return ktime_get_mono_fast_ns();
> +}
> +
>  static struct bpf_func_proto kprobe_prog_funcs[] = {
>  	[BPF_FUNC_probe_read] = {
>  		.func = bpf_probe_read,
> @@ -65,6 +71,11 @@ static struct bpf_func_proto kprobe_prog_funcs[] = {
>  		.arg2_type = ARG_CONST_STACK_SIZE,
>  		.arg3_type = ARG_ANYTHING,
>  	},
> +	[BPF_FUNC_ktime_get_ns] = {
> +		.func = bpf_ktime_get_ns,
> +		.gpl_only = true,
> +		.ret_type = RET_INTEGER,

Hmm, a nanosecond value returned as an integer? Is there a way to make
this a 64-bit return type, or does RET_INTEGER default to 64 bits in BPF
functions?

-- Steve


> +	},
>  };
>  
>  static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id)
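
For context on the usage described in the commit message above (computing a
time delta between two events, or taking a timestamp), here is a minimal
sketch of such a kprobe program in the style of samples/bpf. The map name,
the attach point, and the reliance on samples/bpf's bpf_helpers.h
declarations are illustrative assumptions, not part of this patch:

#include <linux/ptrace.h>
#include <linux/version.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"		/* SEC(), map helpers, bpf_ktime_get_ns() */

/* single slot remembering the timestamp of the previous event (hypothetical) */
struct bpf_map_def SEC("maps") last_ts = {
	.type = BPF_MAP_TYPE_ARRAY,
	.key_size = sizeof(u32),
	.value_size = sizeof(u64),
	.max_entries = 1,
};

SEC("kprobe/netif_receive_skb")		/* hypothetical attach point */
int measure_delta(struct pt_regs *ctx)
{
	u32 key = 0;
	u64 now = bpf_ktime_get_ns();	/* NMI-safe monotonic timestamp, in ns */
	u64 *prev = bpf_map_lookup_elem(&last_ts, &key);

	if (prev && *prev) {
		u64 delta = now - *prev;	/* ns since the previous event */

		(void)delta;		/* a real program would aggregate this,
					 * e.g. into a histogram map */
	}
	bpf_map_update_elem(&last_ts, &key, &now, BPF_ANY);
	return 0;
}

char _license[] SEC("license") = "GPL";	/* the helper is marked gpl_only above */
u32 _version SEC("version") = LINUX_VERSION_CODE;

Keying the map by something like a request pointer or CPU id would be the
more typical pattern; the single slot just keeps the sketch short.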
