Message-ID: <20191112071819.GB67139@TonyMac-Alibaba>
Date:   Tue, 12 Nov 2019 15:18:19 +0800
From:   Tony Lu <tonylu@...ux.alibaba.com>
To:     Stephen Hemminger <stephen@...workplumber.org>
Cc:     davem@...emloft.net, shemminger@...l.org, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net: remove static inline from dev_put/dev_hold

On Mon, Nov 11, 2019 at 08:56:32AM -0800, Stephen Hemminger wrote:
> On Mon, 11 Nov 2019 22:05:03 +0800
> Tony Lu <tonylu@...ux.alibaba.com> wrote:
> 
> > This patch removes static inline from dev_put/dev_hold in order to help
> > trace pcpu_refcnt leaks of net_device.
> > 
> > We have suffered from this kind of issue several times while moving
> > NICs between different net namespaces. It prints this log in dmesg:
> >
> >   unregister_netdevice: waiting for eth0 to become free. Usage count = 1
> >
> > However, it is hard to find out in time who took and leaked the refcnt.
> > Only the crime scene is left, with little evidence. Once leaked, it is
> > not safe to fix it up on a running host. We can't trace
> > dev_put/dev_hold directly, because the functions are inlined and used
> > widely among modules. And this issue is common: there are tens of
> > patches fixing net_device refcnt leaks for various causes.
> > 
> > To make the refcnt manipulation traceable, this patch removes static
> > inline from dev_put/dev_hold. We can then use handy tools, such as eBPF
> > with kprobes, to find out who holds a refcnt but forgets to put it.
> > These functions are not called frequently, so the overhead is limited.
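
To expand on this: once dev_hold()/dev_put() are real out-of-line symbols,
a kprobe can attach to them directly, either from eBPF or from a small
in-kernel module. Below is a rough, untested sketch of such a probe module
that logs every hold/put on one interface together with a stack trace; the
interface name and the x86-64 register usage are only illustrative.

/* Untested sketch: log every dev_hold()/dev_put() caller for one
 * interface so that leaked references can be matched to their takers.
 */
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/netdevice.h>
#include <linux/string.h>

static char *ifname = "eth0";		/* interface to watch (illustrative) */
module_param(ifname, charp, 0444);

static int refcnt_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* First argument (struct net_device *) is in %rdi on x86-64;
	 * not portable, it just keeps the sketch short.
	 */
	struct net_device *dev = (struct net_device *)regs->di;

	if (dev && !strcmp(dev->name, ifname)) {
		pr_info("%s(%s): refcnt=%d\n", p->symbol_name, dev->name,
			netdev_refcnt_read(dev));
		dump_stack();
	}
	return 0;
}

static struct kprobe kp_hold = {
	.symbol_name	= "dev_hold",
	.pre_handler	= refcnt_pre,
};

static struct kprobe kp_put = {
	.symbol_name	= "dev_put",
	.pre_handler	= refcnt_pre,
};

static int __init refcnt_trace_init(void)
{
	int ret = register_kprobe(&kp_hold);

	if (ret)
		return ret;

	ret = register_kprobe(&kp_put);
	if (ret)
		unregister_kprobe(&kp_hold);

	return ret;
}

static void __exit refcnt_trace_exit(void)
{
	unregister_kprobe(&kp_put);
	unregister_kprobe(&kp_hold);
}

module_init(refcnt_trace_init);
module_exit(refcnt_trace_exit);
MODULE_LICENSE("GPL");
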
> > 
> > Signed-off-by: Tony Lu <tonylu@...ux.alibaba.com>
> 
> In the past, dev_hold/dev_put were in the hot path for several
> operations. What is the performance implication of doing this?

From code analysis, there should be only a small performance regression.
I don't have benchmark data for now. I will write a kernel module to run
a series of benchmarks on dev_put/dev_hold. Actually, there is a plan for
a complete solution to this issue, and the benchmarks will be done after
that.
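
For reference, a rough, untested sketch of what such a benchmark module
might look like; the iteration count and the choice of the loopback device
are just placeholders. The idea is to time a tight dev_hold()/dev_put()
loop on kernels with and without the patch and compare the numbers.

/* Untested micro-benchmark sketch: time BENCH_ITERS dev_hold()/dev_put()
 * pairs on the init_net loopback device.
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/ktime.h>
#include <net/net_namespace.h>

#define BENCH_ITERS	10000000UL	/* arbitrary */

static int __init refcnt_bench_init(void)
{
	struct net_device *dev = dev_get_by_name(&init_net, "lo");
	ktime_t start;
	unsigned long i;

	if (!dev)
		return -ENODEV;

	start = ktime_get();
	for (i = 0; i < BENCH_ITERS; i++) {
		dev_hold(dev);
		dev_put(dev);
	}

	pr_info("%lu dev_hold()/dev_put() pairs took %lld ns\n",
		BENCH_ITERS,
		(long long)ktime_to_ns(ktime_sub(ktime_get(), start)));

	dev_put(dev);	/* drop the reference taken by dev_get_by_name() */
	return 0;
}

static void __exit refcnt_bench_exit(void)
{
}

module_init(refcnt_bench_init);
module_exit(refcnt_bench_exit);
MODULE_LICENSE("GPL");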

Cheers
Tony Lu
