Date:   Tue, 14 Mar 2017 17:11:17 -0500
From:   Josh Hunt <johunt@...mai.com>
To:     David Miller <davem@...emloft.net>
Cc:     edumazet@...gle.com, arnd@...db.de, soheil@...gle.com,
        willemb@...gle.com, pabeni@...hat.com, linux-arch@...r.kernel.org,
        netdev@...r.kernel.org
Subject: Re: [RFC PATCH] sock: add SO_RCVQUEUE_SIZE getsockopt

On 03/13/2017 07:10 PM, David Miller wrote:
> From: Josh Hunt <johunt@...mai.com>
> Date: Mon, 13 Mar 2017 18:34:41 -0500
>
>> In this particular case they really do want to know total # of bytes
>> in the receive queue, not the data bytes they can consume from an
>> application pov. The kernel currently only exposes this value through
>> netlink or /proc/net/udp from what I saw.
>
> Can you explain in what way this is useful?
>
> The difference between skb->len and skb->truesize is really kernel
> internal implementation detail, and I'm trying to figure out why
> this would be useful to an application.
>

First, it looks like my original patch was against an old kernel which 
did not have the updated udp memory accounting code. Not sure how that 
happened; apologies for that. There's no need to add in the backlog, at 
least for udp now: sk_rmem_alloc is all that is needed for my case.

The application here is interested in monitoring the amount of data in 
the receive buffer: catching overflows when they happen, and also 
understanding how full the buffer is. I know we already have 
SO_RXQ_OVFL, but that only reports the # of drops on overflow.
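
For context, this is roughly what consuming SO_RXQ_OVFL looks like from
userspace (rough sketch, helper name is mine): the counter comes back as
ancillary data on each recvmsg(), so it tells you that drops happened,
but nothing about how close the queue currently is to sk_rcvbuf:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SO_RXQ_OVFL
#define SO_RXQ_OVFL 40
#endif

/* Enable the option once after creating the socket:
 *	int on = 1;
 *	setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &on, sizeof(on));
 */
static void read_one(int fd)
{
	char data[2048];
	char cbuf[CMSG_SPACE(sizeof(uint32_t))];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = cbuf,
		.msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg;
	uint32_t drops = 0;

	if (recvmsg(fd, &msg, 0) < 0)
		return;

	/* The cumulative drop count rides along as a SOL_SOCKET cmsg. */
	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SO_RXQ_OVFL) {
			memcpy(&drops, CMSG_DATA(cmsg), sizeof(drops));
			printf("cumulative drops: %u\n", drops);
		}
	}
}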

We expose this (skmem) information via /proc and netlink today. It seems 
like unnecessary overhead to require an application to also create a 
netlink socket to get this data.
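
To make the overhead point concrete, below is a rough sketch of the
/proc route an application has to take today (IPv4 only, matching on the
local port; the helper name is mine and the parsing assumes the usual
/proc/net/udp column layout):

#include <stdio.h>

/* Returns the rx_queue value for the socket bound to 'port', or -1.
 * Columns: sl local_address:port rem_address:port st tx_queue:rx_queue ...
 * All numeric fields are hex; matching by inode would be more robust. */
static long udp_rx_queue_bytes(unsigned int port)
{
	char line[512];
	FILE *f = fopen("/proc/net/udp", "r");
	long rx = -1;

	if (!f)
		return -1;

	/* skip the header line */
	if (!fgets(line, sizeof(line), f)) {
		fclose(f);
		return -1;
	}

	while (fgets(line, sizeof(line), f)) {
		unsigned int lport, txq, rxq;

		if (sscanf(line, " %*d: %*8x:%x %*8x:%*x %*x %x:%x",
			   &lport, &txq, &rxq) == 3 && lport == port) {
			rx = rxq;
			break;
		}
	}
	fclose(f);
	return rx;
}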

Creating a socket option to mimic the behavior of 
sock_diag_put_meminfo() and export all meminfo_vars would be great if 
that's something you'd accept.
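
If that's acceptable, userspace usage could then look something like the
sketch below. The SO_MEMINFO name and option number are hypothetical
placeholders here; the array layout just mirrors the u32 values
sock_diag_put_meminfo() fills in (SK_MEMINFO_* from linux/sock_diag.h):

#include <stdio.h>
#include <stdint.h>
#include <sys/socket.h>
#include <linux/sock_diag.h>	/* SK_MEMINFO_* indices */

#ifndef SO_MEMINFO
#define SO_MEMINFO 55		/* hypothetical option number, illustration only */
#endif

static void dump_meminfo(int fd)
{
	uint32_t mem[SK_MEMINFO_VARS] = { 0 };
	socklen_t len = sizeof(mem);

	if (getsockopt(fd, SOL_SOCKET, SO_MEMINFO, mem, &len) == 0) {
		printf("rmem_alloc: %u\n", mem[SK_MEMINFO_RMEM_ALLOC]);
		printf("rcvbuf:     %u\n", mem[SK_MEMINFO_RCVBUF]);
		printf("drops:      %u\n", mem[SK_MEMINFO_DROPS]);
	}
}

That would give monitoring code rmem_alloc vs. rcvbuf and the drop count
in a single getsockopt() call, with no netlink socket required.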

Josh
