Date:   Tue, 5 May 2020 08:20:50 -0700
From:   Eric Dumazet <>
To:     SeongJae Park <>,
        Eric Dumazet <>
Cc:     David Miller <>,
        Al Viro <>,
        Jakub Kicinski <>,
        Greg Kroah-Hartman <>, netdev <>,
        LKML <>,
        SeongJae Park <>
Subject: Re: [PATCH net v2 0/2] Revert the 'socket_alloc' life cycle change

On 5/5/20 8:07 AM, SeongJae Park wrote:
> On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet <> wrote:

>> Why do we have 10,000,000 objects around ? Could this be because of
>> some RCU problem ?
> Mainly because of a long RCU grace period, as you guessed.  I have no idea
> how the grace period became so long in this case.
> As my test machine was a virtual machine instance, I guess a problem like
> RCU reader preemption [1] might have affected this.
> [1]
>> Once Al's patches are reverted, do you have 10,000,000 sock_alloc objects around?
> Yes; both the old kernel prior to Al's patches and the recent kernel
> reverting Al's patches did not reproduce the problem.

I repeat my question: do you have 10,000,000 (smaller) objects kept in slab caches?

TCP sockets use the (very complex, error-prone) SLAB_TYPESAFE_BY_RCU, but not the
struct socket_wq object that was allocated in sock_alloc_inode() before Al's patches.

These objects should be visible in the kmalloc-64 kmem cache.
