Open Source and information security mailing list archives
Message-ID: <DB6PR0501MB2485EDCDEA5F8A5E0FADBBDCC5FA0@DB6PR0501MB2485.eurprd05.prod.outlook.com>
Date:   Thu, 4 Jul 2019 12:30:17 +0000
From:   Idan Burstein <idanb@...lanox.com>
To:     Sagi Grimberg <sagi@...mberg.me>,
        Saeed Mahameed <saeedm@...lanox.com>,
        "David S. Miller" <davem@...emloft.net>,
        Doug Ledford <dledford@...hat.com>,
        Jason Gunthorpe <jgg@...lanox.com>
CC:     Leon Romanovsky <leonro@...lanox.com>,
        Or Gerlitz <ogerlitz@...lanox.com>,
        Tal Gilboa <talgi@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        Yamin Friedman <yaminf@...lanox.com>,
        Max Gurtovoy <maxg@...lanox.com>
Subject: RE: [for-next V2 10/10] RDMA/core: Provide RDMA DIM support for ULPs

The essence of the dynamic behavior in DIM is that it adapts to the workload running on the cores, so that users don't have to trade off bandwidth/CPU% against latency via a module parameter they don't know how to configure. If DIM consistently hurts the latency of latency-critical workloads, we should debug and fix that.

This is where we should go: the end goal is no configuration at all, with out-of-the-box performance in terms of both bandwidth/CPU% and latency.

We could take several steps in this direction if we are not mature enough today, but let's define them (e.g. tests on different ULPs).

-----Original Message-----
From: linux-rdma-owner@...r.kernel.org <linux-rdma-owner@...r.kernel.org> On Behalf Of Sagi Grimberg
Sent: Tuesday, July 2, 2019 8:37 AM
To: Idan Burstein <idanb@...lanox.com>; Saeed Mahameed <saeedm@...lanox.com>; David S. Miller <davem@...emloft.net>; Doug Ledford <dledford@...hat.com>; Jason Gunthorpe <jgg@...lanox.com>
Cc: Leon Romanovsky <leonro@...lanox.com>; Or Gerlitz <ogerlitz@...lanox.com>; Tal Gilboa <talgi@...lanox.com>; netdev@...r.kernel.org; linux-rdma@...r.kernel.org; Yamin Friedman <yaminf@...lanox.com>; Max Gurtovoy <maxg@...lanox.com>
Subject: Re: [for-next V2 10/10] RDMA/core: Provide RDMA DIM support for ULPs

Hey Idan,

> " Please don't. This is a bad choice to opt it in by default."
> 
> I disagree here. I'd prefer Linux to have a good out-of-the-box experience (e.g. reaching 100G of 4K NVMe-oF on Intel servers) with the default parameters, especially since Yamin has shown it is beneficial, or at least not hurting performance, for a variety of use cases. The whole concept of DIM is that it adapts to the workload's bandwidth and latency requirements.

Well, it's a Mellanox device driver after all.

But do note that, by far, the vast majority of users are not saturating 100G of 4K I/O. The vast majority of users are primarily sensitive to synchronous QD=1 I/O latency, and their workloads are much more dynamic than the synthetic 100%/50%/0% read mixes.

As much as I'm a fan (IIRC I was the one giving this a first pass), making dim opt-in by default is not only not beneficial, but potentially harmful to the majority of users' out-of-the-box experience.

Given that this is fresh code with almost no exposure, and that it was not tested beyond Yamin running limited performance testing, I think it would be a mistake to enable it by default; that can come as an incremental stage.

Obviously, I cannot tell Mellanox what it should or shouldn't do in its own device driver, but I wanted to emphasize that I think this is a mistake.

> Moreover, net-dim is enabled by default; I don't see why RDMA is different.

Very different animals.
