Message-ID: <C1507E62-2E3B-41A7-9C17-36CF86D8674B@fb.com>
Date:   Mon, 12 Mar 2018 22:47:21 +0000
From:   Song Liu <songliubraving@...com>
To:     Alexei Starovoitov <ast@...com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "ast@...nel.org" <ast@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <Kernel-team@...com>,
        "hannes@...xchg.org" <hannes@...xchg.org>,
        Teng Qin <qinteng@...com>
Subject: Re: [PATCH bpf-next v4 1/2] bpf: extend stackmap to save
 binary_build_id+offset instead of address



> On Mar 12, 2018, at 2:31 PM, Alexei Starovoitov <ast@...com> wrote:
> 
> On 3/12/18 2:12 PM, Song Liu wrote:
>> 
>>> On Mar 12, 2018, at 2:00 PM, Alexei Starovoitov <ast@...com> wrote:
>>> 
>>> On 3/12/18 1:39 PM, Song Liu wrote:
>>>> +	page = find_get_page(vma->vm_file->f_mapping, 0);
>>> 
>>> did you test it with config_debug_atomic_sleep ?
>>> it should have complained...
>> 
>> Yeah, I have CONFIG_DEBUG_ATOMIC_SLEEP=y.
>> 
>> I think find_get_page() will not sleep. The variation find_get_page_flags()
>> may sleep with flag FGP_CREAT.
> 
> I see. gfp_mask == 0 and no locks. should work indeed.
> curious how perf report looks like for heavy bpf_get_stackid() usage?
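
For context, the lookup in question is the one from the patch hunk quoted above, roughly along these lines (a simplified sketch, not the exact patch code; the function name here is made up, and the ELF note parsing and exact error handling are omitted):

	/* hypothetical helper sketching the discussed lookup */
	static int get_build_id_page(struct vm_area_struct *vma)
	{
		struct page *page;

		/*
		 * find_get_page() without FGP_CREAT only looks up a page that
		 * is already in the page cache, takes a reference on it, and
		 * returns NULL on a miss, so it never allocates or sleeps.
		 */
		page = find_get_page(vma->vm_file->f_mapping, 0); /* page 0: ELF header */
		if (!page)
			return -EFAULT;	/* not cached; bail out rather than sleep */

		/* ... read the ELF notes from the page to find the build ID ... */

		put_page(page);
		return 0;
	}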

I modified samples/bpf/sampleip to only call bpf_get_stackid(). The following
perf report was captured with bpf_get_stackid() called at 10 kHz, while
stressapptest was running with 16 threads on a system with 56 cores; a rough
sketch of the modified program is included after the report.


Samples: 1M of event 'cycles:pp', Event count (approx.): 628092326243
  Overhead  Command          Shared Object                 Symbol
+   51.61%  stressapptest    stressapptest                 [.] AdlerMemcpyC             
-   20.82%  stressapptest    [kernel.vmlinux]              [k] queued_spin_lock_slowpath
   - queued_spin_lock_slowpath                                                          
      - 20.80% pcpu_freelist_pop                                                        
           bpf_get_stackid                                                              
           bpf_get_stackid_tp                                                           
         - 0x590c                                                                       
              16.12% AdlerMemcpyC                                                       
              4.50% OsLayer::CpuStressWorkload                                          
+   14.36%  stressapptest    stressapptest                 [.] OsLayer::CpuStressWorkload
-    8.74%  stressapptest    [kernel.vmlinux]              [k] _raw_spin_lock            
   - _raw_spin_lock                                                                      
      - 8.73% bpf_get_stackid                                                            
           bpf_get_stackid_tp                                                            
         + 0x590c                                                                        
-    0.67%  stressapptest    [kernel.vmlinux]              [k] pcpu_freelist_pop         
   - pcpu_freelist_pop                                                                   
      - 0.67% bpf_get_stackid                                                            
           bpf_get_stackid_tp                                                            
         + 0x590c
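
As mentioned above, the modified sampleip kernel program boils down to a stack
map plus the bpf_get_stackid() call, roughly like this (an illustrative sketch,
not the exact sample source; the map sizing, section names, and flags are
assumptions):

	#include <linux/ptrace.h>
	#include <uapi/linux/bpf.h>
	#include <uapi/linux/bpf_perf_event.h>
	#include <uapi/linux/perf_event.h>
	#include "bpf_helpers.h"

	struct bpf_map_def SEC("maps") stackmap = {
		.type = BPF_MAP_TYPE_STACK_TRACE,
		.key_size = sizeof(u32),
		.value_size = PERF_MAX_STACK_DEPTH * sizeof(u64),
		.max_entries = 16384,
	};

	SEC("perf_event")
	int do_sample(struct bpf_perf_event_data *ctx)
	{
		/* everything else from sampleip removed; only record the stack id */
		bpf_get_stackid(ctx, &stackmap, BPF_F_USER_STACK);
		return 0;
	}

	char _license[] SEC("license") = "GPL";

The perf events themselves are opened from user space at the sampling
frequency and the program fd is attached via PERF_EVENT_IOC_SET_BPF, the same
way sampleip already does it.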

It seems lock contention is the dominant overhead here. This should be the same
for the original stackmap.

Song
