Date:   Mon, 13 Mar 2017 18:31:27 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Josh Hunt <johunt@...mai.com>
Cc:     David Miller <davem@...emloft.net>, edumazet@...gle.com,
        arnd@...db.de, soheil@...gle.com, willemb@...gle.com,
        pabeni@...hat.com, linux-arch@...r.kernel.org,
        netdev@...r.kernel.org
Subject: Re: [RFC PATCH] sock: add SO_RCVQUEUE_SIZE getsockopt

On Mon, 2017-03-13 at 18:34 -0500, Josh Hunt wrote:

> In this particular case they really do want to know the total # of bytes in 
> the receive queue, not the data bytes they can consume from an 
> application point of view. The kernel currently only exposes this value 
> through netlink or /proc/net/udp from what I saw.
> 
> I believe Eric's suggestion in his previous mail was to export all of 
> these meminfo metrics via a single socket option call, similar to how it's 
> done in netlink. We could then use that for both call sites.
> 
> I agree that it would be useful to also have the data you and Eric are 
> suggesting exposed somewhere, the total # of skb->len bytes sitting in 
> the receive queue. I could add that as a second socket option.
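
Regarding the single socket option Josh mentions above: a rough user-space
sketch, with SO_MEMINFO as a hypothetical option number and the
SK_MEMINFO_* indices that sock_diag/netlink already exports in
linux/sock_diag.h, could look like this:

#include <sys/socket.h>
#include <linux/sock_diag.h>    /* SK_MEMINFO_* indices */
#include <stdint.h>
#include <stdio.h>

#ifndef SO_MEMINFO
#define SO_MEMINFO 55           /* hypothetical value, for illustration only */
#endif

static int print_sk_meminfo(int fd)
{
        uint32_t vals[SK_MEMINFO_VARS];
        socklen_t len = sizeof(vals);

        if (getsockopt(fd, SOL_SOCKET, SO_MEMINFO, vals, &len) < 0)
                return -1;

        printf("rmem_alloc=%u rcvbuf=%u drops=%u\n",
               vals[SK_MEMINFO_RMEM_ALLOC],
               vals[SK_MEMINFO_RCVBUF],
               vals[SK_MEMINFO_DROPS]);
        return 0;
}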

Please note that the UDP stack does not maintain a per-socket sum(skb->len);
sk_rmem_alloc accounts skb->truesize, not payload bytes.

Implementing this in a system call would require locking the receive
queue (blocking BH) and iterating over a potentially huge skb list.
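
Roughly, such an on-demand computation would have to look something like
this (the function name is made up; the queue walk under the lock is the
expensive part):

static unsigned int udp_rcvqueue_bytes(struct sock *sk)
{
        struct sk_buff *skb;
        unsigned int total = 0;

        /* Block BHs and walk every queued skb on each call. */
        spin_lock_bh(&sk->sk_receive_queue.lock);
        skb_queue_walk(&sk->sk_receive_queue, skb)
                total += skb->len;
        spin_unlock_bh(&sk->sk_receive_queue.lock);

        return total;
}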

Or we could add a new socket field and add/subtract skb->len for every
packet added to or removed from the receive queue.
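
Something like the following, where sk_rcvqueue_bytes is a made-up field in
struct sock; the updates would have to happen under the receive queue lock
(or be atomic), since enqueue runs in BH context:

static inline void sk_rcvqueue_bytes_add(struct sock *sk,
                                         const struct sk_buff *skb)
{
        /* called wherever an skb is queued onto sk_receive_queue */
        sk->sk_rcvqueue_bytes += skb->len;
}

static inline void sk_rcvqueue_bytes_sub(struct sock *sk,
                                         const struct sk_buff *skb)
{
        /* called on dequeue and on drops */
        sk->sk_rcvqueue_bytes -= skb->len;
}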

So I would prefer not to provide this information; it looks like quite a
bit of bloat.

