Message-ID: <20190109172843.GE19455@krava>
Date:   Wed, 9 Jan 2019 18:28:43 +0100
From:   Jiri Olsa <jolsa@...hat.com>
To:     Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1 0/4] perf: enable compression of record mode trace to
 save storage space

On Mon, Dec 24, 2018 at 04:21:33PM +0300, Alexey Budankov wrote:
> 
> The patch set implements runtime record trace compression, accompanied by 
> trace file decompression in the tool's report mode. The Zstandard 
> library API [1] is used for compression/decompression of the data coming 
> from the perf_events kernel data buffers.
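
For reference, a minimal self-contained sketch of the Zstandard one-shot
API [1] in play here; the buffer sizes and the level value are
illustrative, not taken from the patches (build with -lzstd):

#include <stdio.h>
#include <zstd.h>

int main(void)
{
        const char src[] = "perf trace data perf trace data perf trace data";
        char zbuf[256], out[256];

        /* one-shot compression at an illustrative level */
        size_t zlen = ZSTD_compress(zbuf, sizeof(zbuf), src, sizeof(src), 3);
        if (ZSTD_isError(zlen)) {
                fprintf(stderr, "compress: %s\n", ZSTD_getErrorName(zlen));
                return 1;
        }

        /* matching decompression, as the report side would do */
        size_t rlen = ZSTD_decompress(out, sizeof(out), zbuf, zlen);
        if (ZSTD_isError(rlen)) {
                fprintf(stderr, "decompress: %s\n", ZSTD_getErrorName(rlen));
                return 1;
        }

        printf("%zu -> %zu -> %zu bytes\n", sizeof(src), zlen, rlen);
        return 0;
}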
> 
> The implemented -z,--compression_level=n option provides ~3-5x average 
> trace file size reduction on the tested workloads, which significantly 
> saves the user's storage space on larger server systems where the trace 
> file size can easily reach several tens or even hundreds of GiBs, 
> especially when profiling with stacks for later DWARF unwinding, 
> context-switch tracing, etc.
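
The proposed usage would then look roughly like this (the level value is
illustrative):

  $ perf record -z 3 -o perf.data -- ./workload   # or --compression_level=3
  $ perf report -i perf.data                      # decompressed in report mode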
> 
> The option is effective jointly with asynchronous trace writing because 
> compression requires auxiliary memory buffers to operate on, and the 
> memory buffers used for asynchronous trace writing serve that purpose.

I don't like that it's for aio only, I can't really see why it's
a problem for normal data.. can't we just have one layer before and
stream the data to the compress function instead of the file (or aio
buffers).. and that compress function would spit out 64K-sized COMPRESSED
events, which would go to the file (or aio buffers)
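
A rough sketch of such a layer; PERF_RECORD_COMPRESSED_X and the event
framing below are made-up illustrations of the idea, not code from the
patches (note the u16 header size actually caps an event just below 64K):

#include <limits.h>
#include <zstd.h>
#include <linux/perf_event.h>

/* made-up user-space event type for the suggested layer */
#define PERF_RECORD_COMPRESSED_X        81

struct compressed_event {
        struct perf_event_header header;   /* .type = PERF_RECORD_COMPRESSED_X */
        char data[];                       /* zstd-compressed record stream */
};

/* perf_event_header.size is u16, so one event has to stay below 64K */
#define COMP_PAYLOAD_MAX (USHRT_MAX - sizeof(struct compressed_event))

/*
 * the layer in front of the file/aio writer: take a run of raw records,
 * emit one COMPRESSED event that the writer handles like any other;
 * returns the full event size, 0 on error
 */
static size_t pack_compressed_event(ZSTD_CCtx *cctx, int level,
                                    const void *records, size_t len,
                                    struct compressed_event *ev)
{
        size_t n = ZSTD_compressCCtx(cctx, ev->data, COMP_PAYLOAD_MAX,
                                     records, len, level);

        if (ZSTD_isError(n))
                return 0;

        ev->header.type = PERF_RECORD_COMPRESSED_X;
        ev->header.misc = 0;
        ev->header.size = sizeof(*ev) + n;
        return ev->header.size;
}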

the report side would process them (decompress) on the session layer
before the tool callbacks are called
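
a matching sketch for that, reusing the compressed_event framing above;
perf_session__deliver_raw() is a made-up stand-in for the existing
delivery path:

/*
 * when the session layer sees a COMPRESSED event, decompress its
 * payload and feed the contained records to the tool callbacks as if
 * they had come straight from the file
 */
static int session__handle_compressed(ZSTD_DCtx *dctx,
                                      struct compressed_event *ev,
                                      void *scratch, size_t scratch_size)
{
        size_t payload = ev->header.size - sizeof(*ev);
        size_t n = ZSTD_decompressDCtx(dctx, scratch, scratch_size,
                                       ev->data, payload);

        if (ZSTD_isError(n))
                return -1;

        /* scratch now holds ordinary perf records; deliver them */
        return perf_session__deliver_raw(scratch, n);
}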

jirka
