Message-ID: <CA+2MQi_O47B8zOa_TwZqzRsS0LFoPS77+61mUV=yT1U3sa6xQw@mail.gmail.com>
Date:   Tue, 5 Jan 2021 10:14:11 +0800
From:   Liang Li <liliang324@...il.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Michal Hocko <mhocko@...e.com>,
        Liang Li <liliangleo@...iglobal.com>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        virtualization@...ts.linux-foundation.org
Subject: Re: [RFC v2 PATCH 0/4] speed up page allocation for __GFP_ZERO

> >>> In our production environment, there are three main applications that have
> >>> such a requirement. One is QEMU [creating a VM with an SR-IOV passthrough
> >>> device]; the other two are DPDK-related applications, DPDK OVS and SPDK
> >>> vhost. For best performance, they populate memory when starting up. For
> >>> SPDK vhost, we make use of the VHOST_USER_GET/SET_INFLIGHT_FD feature for
> >>> vhost 'live' upgrade, which is done by killing the old process and starting
> >>> a new one with the new binary. In this case, we want the new process to
> >>> start as quickly as possible to shorten the service downtime. We really
> >>> enable this feature to speed up startup time for them  :)
>
> Am I wrong or does using hugetlbfs/tmpfs ... i.e., a file that is not deleted between shutting down the old instance and firing up the new instance ... just solve this issue?

You are right, it works for the SPDK vhost upgrade case.
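
A minimal sketch of that approach (the file name and size below are made up
for illustration): the old and the new instance map the same hugetlbfs file
with MAP_SHARED, so the pages stay allocated in the file across the restart
and the new process does not pay the population cost again.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEM_PATH "/dev/hugepages/spdk_mem"  /* hypothetical file name */
#define MEM_SIZE (1UL << 30)                /* 1 GiB, for illustration */

int main(void)
{
        int fd = open(MEM_PATH, O_CREAT | O_RDWR, 0600);

        if (fd < 0 || ftruncate(fd, MEM_SIZE)) {
                perror(MEM_PATH);
                return 1;
        }
        /*
         * MAP_SHARED: the pages belong to the hugetlbfs file, not to this
         * process, so they survive the old instance being killed, as long
         * as the file is not deleted in between.
         */
        void *mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* ... hand 'mem' to the vhost backend ... */
        return 0;
}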

>
> >>
> >> Thanks for info on the use case!
> >>
> >> All of these use cases either already use, or could use, huge pages
> >> IMHO. It's not your ordinary proprietary gaming app :) This is where
> >> pre-zeroing of huge pages could already help.
> >
> > You are welcome. For historical reasons, some of our services are
> > not using hugetlbfs; that is why I didn't start with hugetlbfs.
> >
> >> Just wondering, wouldn't it be possible to use tmpfs/hugetlbfs ...
> >> creating a file and pre-zeroing it from another process, or am I missing
> >> something important? At least for QEMU this should work AFAIK, where you
> >> can just pass the file to be used via memory-backend-file.
> >>
> > If we use another process to create the file, we can offload the overhead to
> > that process, and there is no need to pre-zero its content; just
> > populating the memory is enough.
>
> Right, if non-zero memory can be tolerated (e.g., for VMs it usually has to be).

I mean there is no need to explicitly pre-zero the file content in user space;
the kernel will zero the pages when populating the memory.
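
A hedged sketch of such a helper (the function name and error handling are my
own): it only prefaults the pages; the kernel hands back already-zeroed pages
on first fault, so no memset() is needed in user space.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Pre-populate a tmpfs/hugetlbfs backing file from a separate process. */
int populate_backing_file(const char *path, size_t size)
{
        int fd = open(path, O_CREAT | O_RDWR, 0600);

        if (fd < 0)
                return -1;
        if (ftruncate(fd, size)) {
                close(fd);
                return -1;
        }
        /*
         * MAP_POPULATE prefaults the whole range up front; the freshly
         * allocated pages come back zeroed by the kernel, which is why
         * there is no explicit zeroing here.
         */
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_POPULATE, fd, 0);
        if (mem == MAP_FAILED) {
                close(fd);
                return -1;
        }
        /* The pages stay cached in the file after we unmap and exit. */
        munmap(mem, size);
        close(fd);
        return 0;
}

The file can then be passed to QEMU with the memory-backend-file object
mentioned above, or mapped directly by the DPDK/SPDK process.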

> > If we do it that way, then how do we determine the size of the file? It depends
> > on the RAM size of the VM the customer buys.
> > Maybe we can create a file
> > large enough in advance and truncate it to the right size just before the
> > VM is created. Then, how many large files should be created on a host?
>
> That's mostly already existing scheduling logic, no? (How many VMs can I put onto a specific machine eventually?)

It depends on how the scheduling component is designed. Yes, you can put
10 VMs with 4C8G (4 CPUs, 8G RAM) on one host and 20 VMs with 2C4G on
another. But if one type, e.g. 4C8G, is sold out, customers can't buy
more 4C8G VMs even while some 2C4G VMs are still free, unless the
resources reserved for them can be provided as 4C8G VMs.

> > You will find there are a lot of things that have to be handled properly.
> > I think it's possible to make it work well, but we will transfer the
> > management complexity to upper layer components. It's bad practice to let
> > upper layer components deal with such low-level details, which should be
> > handled in the OS layer.
>
> It's bad practice to squeeze things into the kernel that can just be handled in upper layers ;)
>

You must know there are a lot of functions in the kernel that could also
be done in userspace, e.g. some of the device emulations such as the APIC,
or the vhost-net backend, which has a userspace implementation.   :)
Bad or not depends on the benefits the solution brings.
From the viewpoint of a user space application, the kernel should
provide high performance memory management services. That's why
I think it should be done in the kernel.

Thanks
Liang
