Message-ID: <tencent_D53295A257B55119C425836EA19E2CE84B07@qq.com>
Date:   Mon, 28 Aug 2023 08:57:21 +0800
From:   Rong Tao <rtoax@...mail.com>
To:     olsajiri@...il.com
Cc:     alexandre.torgue@...s.st.com, andrii@...nel.org, ast@...nel.org,
        bpf@...r.kernel.org, chantr4@...il.com, daniel@...earbox.net,
        deso@...teo.net, eddyz87@...il.com, haoluo@...gle.com,
        iii@...ux.ibm.com, john.fastabend@...il.com, kpsingh@...nel.org,
        laoar.shao@...il.com, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        linux-stm32@...md-mailman.stormreply.com, martin.lau@...ux.dev,
        mcoquelin.stm32@...il.com, mykolal@...com, rongtao@...tc.cn,
        rtoax@...mail.com, sdf@...gle.com, shuah@...nel.org,
        song@...nel.org, xukuohai@...wei.com, yonghong.song@...ux.dev,
        zwisler@...gle.com
Subject: Re: [PATCH bpf-next v8] selftests/bpf: trace_helpers.c: optimize kallsyms cache

Hi, jirka. Thanks for your reply.

> > @@ -164,13 +165,14 @@ int main(int argc, char **argv)
> >  	}
> >  
> >  	/* initialize kernel symbol translation */
> > -	if (load_kallsyms()) {
> > +	ksyms = load_kallsyms();
> 
> if we keep the load_kallsyms/ksym_search/ksym_get_addr functions as described
> in [1] the samples/bpf would stay untouched apart from the Makefile change

I think we should go ahead with this modification; wouldn't it be better?
After all, keeping the samples/bpf sources untouched is not by itself a
reason to avoid improving load_kallsyms(). What do you think?

In addition, if we keep the original ksym_search() interface, the
following problem is very difficult to solve:

	Current source of ksym_search() [1]

    struct ksym *ksym_search(long key)
    {
    	int start = 0, end = sym_cnt;
    	int result;
    
    	/* kallsyms not loaded. return NULL */
    	if (sym_cnt <= 0)
    		return NULL;
    
    	while (start < end) {
    		size_t mid = start + (end - start) / 2;
    
    		result = key - syms[mid].addr;
    		if (result < 0)
    			end = mid;
    		else if (result > 0)
    			start = mid + 1;
    		else
    			return &syms[mid];                         <<<
    	}
    
    	if (start >= 1 && syms[start - 1].addr < key &&
    	    key < syms[start].addr)
    		/* valid ksym */
    		return &syms[start - 1];                       <<<
    
    	/* out of range. return _stext */
    	return &syms[0];                                   <<<
    }

The original ksym_search() interface returns a pointer directly into the
global syms array, which is also dangerous with multiple threads. If we
instead allocate new memory for each result, that is not a perfect
solution either.

If we rewrite

	struct ksym *ksym_search(long key)

to

	struct ksym ksym_search(long key)

this also affects the source code in samples/bpf.

The same problem exists with ksym_get_addr().

Best wishes,
Rong Tao

[1] https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/tools/testing/selftests/bpf/trace_helpers.c#n100
