Message-ID: <20160113051358.GA37858@ast-mbp.thefacebook.com>
Date:	Tue, 12 Jan 2016 21:14:00 -0800
From:	Alexei Starovoitov <alexei.starovoitov@...il.com>
To:	"Wangnan (F)" <wangnan0@...wei.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>, acme@...nel.org,
	linux-kernel@...r.kernel.org, pi3orama@....com, lizefan@...wei.com,
	netdev@...r.kernel.org, davem@...emloft.net,
	Adrian Hunter <adrian.hunter@...el.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	David Ahern <dsahern@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	Yunlong Song <yunlong.song@...wei.com>
Subject: Re: [PATCH 27/53] perf/core: Put size of a sample at the end of it
 by PERF_SAMPLE_TAILSIZE

On Wed, Jan 13, 2016 at 12:34:19PM +0800, Wangnan (F) wrote:
> 
> >>Or moving whole header to the end of a record?
> >I think moving the whole header under new TAILHEADER flag is
> >actually very good idea. The ring buffer will be fully utilized
> >and no extra bytes necessary. User space would need to parse it
> >backwards, but for this use case it fits well.
> 
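The TAILHEADER layout quoted above could be consumed from user space roughly as follows. This is only an illustrative sketch, not the perf ABI: `struct tail_hdr`, `emit()` and `read_back()` are hypothetical names, and the header is simply placed at the end of each record so a reader can walk backwards from the write head.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical tail header: stored at the END of each record so a
 * reader can walk the buffer backwards from the write head. */
struct tail_hdr {
	uint32_t type;
	uint32_t size;	/* total record size, tail header included */
};

/* Append one payload followed by its tail header; returns the new head. */
static size_t emit(uint8_t *buf, size_t head, uint32_t type,
		   const void *payload, uint32_t payload_len)
{
	uint32_t total = payload_len + sizeof(struct tail_hdr);
	struct tail_hdr h = { type, total };

	memcpy(buf + head, payload, payload_len);
	memcpy(buf + head + payload_len, &h, sizeof(h));
	return head + total;
}

/* Read the record that ends at 'end'; fills in its header and payload
 * pointer, and returns the offset where that record starts, i.e. where
 * the previous record ends. */
static size_t read_back(const uint8_t *buf, size_t end,
			struct tail_hdr *h, const uint8_t **payload)
{
	memcpy(h, buf + end - sizeof(*h), sizeof(*h));
	*payload = buf + end - h->size;
	return end - h->size;
}
```

Repeatedly calling read_back() from the final write head yields the records newest-first, which is exactly what an overwrite-mode consumer wants.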
> I have another crazy suggestion: can we make the kernel write to
> the ring buffer from the end to the beginning? For example:
> 
> This is the initial state of the ring buffer; the head pointer
> points to the end of it:
> 
>       -------------> Address increase
> 
>                                     head
>                                       |
>                                       V
>  +--+---+-------+----------+------+---+
>  |                                    |
>  +--+---+-------+----------+------+---+
> 
> 
> Write the first event at the end of the ring buffer, and *decrease*
> the head pointer:
> 
>                                 head
>                                   |
>                                   V
>  +--+---+-------+----------+------+---+
>  |                                | A |
>  +--+---+-------+----------+------+---+
> 
> 
> Another record:
>                           head
>                            |
>                            V
>  +--+---+-------+----------+------+---+
>  |                         |   B  | A |
>  +--+---+-------+----------+------+---+
> 
> 
> The ring buffer wraps around; A is fully overwritten and B is broken:
> 
>                                head
>                                  |
>                                  V
>  +--+---+-------+----------+-----+----+
>  |F | E |   D   | C        | ... | F  |
>  +--+---+-------+----------+-----+----+
> 
> At this time the user can parse the ring buffer normally from
> F to C. From the timestamps he knows which record is the
> oldest.
> 
> With this, perf doesn't need much extra work. There's no
> performance penalty at all, and the 8 bytes are saved.
> 
> Thoughts?
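The scheme in the diagrams above can be modeled in a few lines of user-space C. This is only a sketch of the idea, not kernel code: the head moves toward lower addresses, so the newest record always begins at the head and a reader can parse forward from there with ordinary front-of-record headers. Wrapping is handled with a split copy, which is what leaves a "broken" record like B in the quoted diagram once it is partially overwritten.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RB_SIZE 16	/* tiny buffer so wrap-around is easy to see */

/* Model of a ring buffer written back-to-front: the head moves toward
 * lower addresses; records at higher addresses are older. */
struct rb {
	uint8_t data[RB_SIZE];
	size_t head;	/* offset of the newest record */
};

static void rb_init(struct rb *rb)
{
	/* Head starts "at the end" of the buffer (0 == RB_SIZE mod RB_SIZE). */
	rb->head = 0;
}

static void rb_write(struct rb *rb, const void *rec, size_t len)
{
	/* Decrease the head (mod buffer size) instead of increasing it. */
	rb->head = (rb->head + RB_SIZE - len) % RB_SIZE;

	size_t contig = RB_SIZE - rb->head;
	if (len <= contig) {
		memcpy(rb->data + rb->head, rec, len);
	} else {
		/* Record straddles the buffer end: split copy. */
		memcpy(rb->data + rb->head, rec, contig);
		memcpy(rb->data, (const uint8_t *)rec + contig, len - contig);
	}
}
```

After any number of writes, the region starting at rb->head holds the newest record, followed (at higher addresses) by progressively older ones, which is why the reader in the quoted proposal can parse "normally from F to C".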

I like it.
I think from an algorithmic standpoint it's very pretty, but real
CPUs may not like streaming the data backwards. x86 can detect
the stride and prefetch the next cache line when the stride is
positive. I don't think there is such hw logic for negative strides.
So if it's not too hard, I would suggest implementing both of
your ideas. If the negative stride is just as fast as the normal one,
then let's use that, since it doesn't change the header and nothing
needs to change on the perf side or in any other tools that read
the ring buffer manually.
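The prefetcher concern could be checked with a user-space microbenchmark along these lines. This only illustrates the measurement; results are machine-dependent and say nothing definitive about the kernel write path, and clock() is a coarse timer used here purely for simplicity.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N (8 * 1024 * 1024)	/* 8 MiB: large enough to exceed caches */

/* Copy 64-byte chunks front-to-back (positive stride). */
static void copy_forward(uint8_t *dst, const uint8_t *src)
{
	for (size_t off = 0; off < N; off += 64)
		memcpy(dst + off, src + off, 64);
}

/* Copy the same chunks back-to-front (negative stride), the way a
 * backward-writing ring buffer would stream its records. */
static void copy_backward(uint8_t *dst, const uint8_t *src)
{
	for (size_t off = N; off > 0; off -= 64)
		memcpy(dst + off - 64, src + off - 64, 64);
}

/* Time one full pass of the given copy routine, in seconds. */
static double seconds(void (*fn)(uint8_t *, const uint8_t *),
		      uint8_t *dst, const uint8_t *src)
{
	clock_t t0 = clock();
	fn(dst, src);
	return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Comparing the two timings over several passes would show whether the hardware prefetcher penalizes the negative-stride direction on a given CPU.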
