Message-ID: <20200731171714.GA513060@nvidia.com>
Date:   Fri, 31 Jul 2020 14:17:14 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Eric Dumazet <edumazet@...gle.com>
CC:     Doug Ledford <dledford@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Eric Dumazet <eric.dumazet@...il.com>,
        <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH net] RDMA/umem: add a schedule point in ib_umem_get()

On Wed, Jul 29, 2020 at 06:57:55PM -0700, Eric Dumazet wrote:
> Mapping as little as 64GB can take more than 10 seconds,
> triggering issues on kernels with CONFIG_PREEMPT_NONE=y.
> 
> ib_umem_get() already splits the work into 2MB units on x86_64;
> adding a cond_resched() in the long-running loop is enough
> to solve the issue.
> 
> Note that sg_alloc_table() can still take more than 100 ms,
> which is also problematic. This might be addressed later
> in ib_umem_add_sg_table(), by adding new sgl blocks
> on demand.
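
The math here: 64GB in 2MB units is 32768 trips through the loop, and
on a CONFIG_PREEMPT_NONE=y kernel nothing in that loop yields the CPU.
The shape of the fix, paraphrased from the description above rather
than quoted from the patch hunk itself:

	/*
	 * Pin in batches of at most PAGE_SIZE / sizeof(struct page *)
	 * pages (512 pages == 2MB on x86_64), yielding between batches
	 * so CONFIG_PREEMPT_NONE=y kernels get a scheduling point.
	 */
	while (npages) {
		cond_resched();		/* the one-line fix */
		ret = pin_user_pages_fast(cur_base,
					  min_t(unsigned long, npages,
						PAGE_SIZE / sizeof(struct page *)),
					  gup_flags | FOLL_LONGTERM, page_list);
		if (ret < 0)
			goto out;	/* unwind already-pinned pages */
		cur_base += ret * PAGE_SIZE;
		npages -= ret;
		/*
		 * ... then fold page_list into the umem scatterlist,
		 * e.g. via ib_umem_add_sg_table().
		 */
	}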

I have seen some patches in progress to do exactly this; the
motivation is to reduce memory consumption when a lot of pages
are combined.
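
If it is the series I am thinking of, the general idea is to stop
sizing the table for one entry per pinned page and instead grow it in
fixed-size chained blocks as coalescing actually consumes entries. A
hypothetical sketch using only the stock scatterlist helpers (the
names sg_builder, sg_builder_add and SG_BLOCK_NENTS are made up here,
not taken from that series):

	#include <linux/scatterlist.h>
	#include <linux/slab.h>

	#define SG_BLOCK_NENTS	64	/* entries per chained block */

	struct sg_builder {
		struct scatterlist *block;	/* current (last) block */
		unsigned int used;		/* data entries used in it */
	};

	static int sg_builder_add(struct sg_builder *b, struct page *page,
				  unsigned int len, gfp_t gfp)
	{
		/*
		 * A real implementation would first try to extend the
		 * previous entry when @page is physically contiguous
		 * with it; omitted here for brevity.
		 */
		if (b->used == SG_BLOCK_NENTS - 1) {
			/* last slot is reserved for the chain link */
			struct scatterlist *next;

			next = kmalloc_array(SG_BLOCK_NENTS, sizeof(*next),
					     gfp);
			if (!next)
				return -ENOMEM;
			sg_init_table(next, SG_BLOCK_NENTS);
			sg_chain(b->block, SG_BLOCK_NENTS, next);
			b->block = next;
			b->used = 0;
		}
		sg_set_page(&b->block[b->used++], page, len, 0);
		return 0;
	}

	/* after the last page, terminate the table for DMA mapping:
	 *	sg_mark_end(&b->block[b->used - 1]);
	 */

That way the allocation scales with the number of coalesced entries
actually produced rather than one entry per pinned page, so nothing is
wasted when long runs of pages collapse into single entries.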

> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Doug Ledford <dledford@...hat.com>
> Cc: Jason Gunthorpe <jgg@...pe.ca>
> Cc: linux-rdma@...r.kernel.org
> ---
>  drivers/infiniband/core/umem.c | 1 +
>  1 file changed, 1 insertion(+)

Why [PATCH net]?

Anyhow, applied to rdma for-next

Thanks,
Jason
