Message-ID: <20180807153111.53b0da8d@epycfail>
Date:   Tue, 7 Aug 2018 15:31:11 +0200
From:   Stefano Brivio <sbrivio@...hat.com>
To:     Pravin Shelar <pshelar@....org>
Cc:     Matteo Croce <mcroce@...hat.com>,
        Justin Pettit <jpettit@...are.com>,
        Greg Rose <gvrose8192@...il.com>, Ben Pfaff <blp@....org>,
        netdev <netdev@...r.kernel.org>, ovs dev <dev@...nvswitch.org>,
        Jiri Benc <jbenc@...hat.com>, Aaron <aconole@...hat.com>
Subject: Re: [PATCH RFC net-next] openvswitch: Queue upcalls to userspace in
 per-port round-robin order

Hi Pravin,

On Tue, 31 Jul 2018 16:12:03 -0700
Pravin Shelar <pshelar@....org> wrote:

> Rather than reducing the number of threads down to 1, we could find a
> better number of FDs per port.
> How about this simple solution:
> 1. Allocate (N * P) FDs, as long as the total stays under the FD limit.
> 2. If the FD limit is hit (-EMFILE), halve N and repeat step 1.
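
Just to make sure I read this right, the allocation loop would be
something like the sketch below (open_upcall_socket() here is a made-up
placeholder for whatever creates one per-port upcall socket, not actual
ovs-vswitchd code):

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical: returns a new upcall socket fd, or -1 with errno set. */
extern int open_upcall_socket(void);

/* Try to open *n_per_port sockets for each of n_ports ports (step 1).
 * On EMFILE, close everything opened so far, halve N and start over
 * (step 2).  Returns the fd array (n_ports * *n_per_port entries) on
 * success, NULL if even N == 1 doesn't fit. */
static int *alloc_upcall_fds(size_t n_ports, size_t *n_per_port)
{
	for (;;) {
		size_t want = n_ports * *n_per_port, i;
		int *fds = malloc(want * sizeof *fds);

		if (!fds)
			return NULL;

		for (i = 0; i < want; i++) {
			fds[i] = open_upcall_socket();
			if (fds[i] < 0)
				break;
		}
		if (i == want)
			return fds;

		int saved_errno = errno;

		while (i--)
			close(fds[i]);
		free(fds);

		if (saved_errno != EMFILE || *n_per_port == 1)
			return NULL;
		*n_per_port /= 2;
	}
}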

I still see a few quantitative issues with this approach, other than
Ben's observation about design (which, by the way, looks entirely
reasonable to me).

We're talking about a disproportionate number of sockets in any case.
We can have up to 2^16 vports here, with 5k vports being rather common,
and for any reasonable value of N that actually manages to perturb the
distribution of upcalls per thread, we're talking about something well
in excess of 100k sockets. I don't think this really scales.
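
To put numbers on this: with the full 2^16 vports, even N = 2 already
means 131072 sockets, and with 5k vports anything around N = 25 (which
the thread counts on larger boxes easily give) puts us past 125k.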

With the current value for N (3/4 * number of threads) this can even
get close to /proc/sys/fs/file-max on some systems, and at that point
raising the number of file descriptors allowed for ovs-vswitchd isn't
a solution anymore.
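
(For reference, the default for fs.file-max scales with RAM, roughly
one file per 10kB of memory, so a box with 4GB of RAM gets a limit
around 400k: the same order of magnitude as the socket counts above.)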

I would rather try to address the concerns you had about the original
patch, which adds fairness in the kernel, than try to make the issue
merely look less severe in ovs-vswitchd.

-- 
Stefano
