Message-ID: <05e1fe61-1c29-152f-414b-cd6a44525af0@collabora.com>
Date:   Wed, 9 Mar 2022 23:06:33 +0300
From:   Dmitry Osipenko <dmitry.osipenko@...labora.com>
To:     Rob Clark <robdclark@...il.com>
Cc:     David Airlie <airlied@...ux.ie>, Gerd Hoffmann <kraxel@...hat.com>,
        Gurchetan Singh <gurchetansingh@...omium.org>,
        Chia-I Wu <olvaffe@...il.com>, Daniel Vetter <daniel@...ll.ch>,
        Daniel Almeida <daniel.almeida@...labora.com>,
        Gert Wollny <gert.wollny@...labora.com>,
        Tomeu Vizoso <tomeu.vizoso@...labora.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "open list:VIRTIO GPU DRIVER" 
        <virtualization@...ts.linux-foundation.org>,
        Gustavo Padovan <gustavo.padovan@...labora.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Dmitry Osipenko <digetx@...il.com>,
        Rob Clark <robdclark@...omium.org>
Subject: Re: [PATCH v1 0/5] Add memory shrinker to VirtIO-GPU DRM driver

On 3/9/22 03:56, Rob Clark wrote:
>> If we really can't track madvise state in the guest for dealing with
>> host memory pressure, I think the better option is to introduce
>> MADV:WILLNEED_REPLACE, ie. something to tell the host kernel that the
>> buffer is needed but the previous contents are not (as long as the GPU
>> VA remains the same).  With this the host could allocate new pages if
>> needed, and the guest would not need to wait for a reply from host.
> If the variant with memory ballooning works, then it will be possible
> to track the state within the guest only. Let's consider the simplest
> variant for now.
> 
> I'll try to implement the balloon driver support in the v2 and will get
> back to you.
> 

I looked at the generic balloon driver and it looks like this is not
what we want, because:

1. Memory ballooning is primarily about handling memory overcommit
situations, i.e. when multiple VMs consume more memory than is available
in the system. Ballooning allows the host to ask a guest to give unused
pages back so that the host can hand those pages to other VMs.

2. Memory ballooning operates on guest memory pages only, i.e. each
ballooned page is reported to/from the host in the form of the page's
DMA address (see the sketch after this list).

3. There is no direct connection between the host's OOM events and the
balloon manager. I guess the host could watch the system's memory
pressure and inflate the VMs' balloons on low memory, releasing guest
memory back to the system, but apparently this use-case isn't supported
by anyone today; at least I don't see QEMU supporting it.
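
To illustrate point 2, here is a simplified sketch of the guest-side
inflate path, loosely based on tell_host() in
drivers/virtio/virtio_balloon.c (locking and error handling omitted).
Everything that crosses the virtqueue is a bare array of PFNs, so there
is no way to express "this page backs a GPU buffer":

static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)
{
	struct scatterlist sg;
	unsigned int len;

	/* The payload is nothing but page frame numbers. */
	sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);

	/* We should always be able to add one buffer to an empty queue. */
	virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
	virtqueue_kick(vq);

	/* The host raises an interrupt once it has processed the pages. */
	wait_event(vb->acked, virtqueue_get_buf(vq, &len));
}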


So the virtio-balloon driver isn't very useful for us as-is.

One possible solution could be to create something like a new
virtio-shrinker device, or to add shrinker functionality to the
virtio-gpu device, allowing the host to ask guests to drop shared
caches. The host would then have to become a PSI handler. I think this
should be doable in the case of crosvm. In the case of the GNU world, it
could take a lot of effort to get everything into an upstreamable state;
at first there is a need to demonstrate a real problem being solved by
this solution.
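
For a rough idea of the host side, below is a minimal userspace sketch
of such a PSI handler, built on the trigger ABI documented in
Documentation/accounting/psi.rst. The shrink request itself is stubbed
out since no virtio-shrinker device exists, and the trigger thresholds
are picked arbitrarily:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void ask_guests_to_shrink(void)
{
	/* Stub: here the VMM would kick the (hypothetical)
	 * virtio-shrinker queue of each guest. */
	puts("memory pressure: asking guests to drop caches");
}

int main(void)
{
	/* Wake up when some task stalls on memory for >= 70ms within
	 * any 1s window. */
	const char trig[] = "some 70000 1000000";
	struct pollfd pfd;

	pfd.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (pfd.fd < 0 || write(pfd.fd, trig, strlen(trig) + 1) < 0) {
		perror("psi trigger");
		return 1;
	}
	pfd.events = POLLPRI;

	while (poll(&pfd, 1, -1) > 0) {
		if (pfd.revents & POLLERR)
			break;
		if (pfd.revents & POLLPRI)
			ask_guests_to_shrink();
	}
	return 0;
}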

The other, minor, issue is that only integrated GPUs may use the
system's memory, and even then they could use a dedicated memory
carveout, i.e. releasing VRAM BOs may not help with the host's OOM. In
the case of a virgl context we have no clue about where buffers are
physically located. On the other hand, in the worst case dropping host
caches simply won't help with OOM.

It's now unclear how we should proceed with the host-side shrinker
support. Thoughts?

We could start easy: instead of thinking about a host-side shrinker, we
could make the VirtIO-GPU driver expire cached BOs after a certain
timeout. Mesa already uses timeout-based BO caching, but it doesn't have
an alarm timer and simply checks for expiration when a BO is allocated.
It would be too much trouble to handle timers within Mesa, since it runs
in the application's context; it's easier to do this in the kernel, like
the VC4 driver does it, for example. This is not as good as a proper
memory shrinker, but could be good enough in practice.
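
As a sketch of that idea, a delayed work item could walk an LRU list of
idle BOs and free the expired ones, modeled on vc4_bo_cache_time_work()
in drivers/gpu/drm/vc4/vc4_bo.c. All of the structure and field names
below are hypothetical, since virtio-gpu has no BO cache today:

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

#define BO_CACHE_TIMEOUT	msecs_to_jiffies(1000)

/* Hypothetical bookkeeping; nothing like this exists in virtio-gpu. */
struct virtio_gpu_bo_cache {
	struct mutex lock;
	struct list_head lru;		/* oldest BOs at the head */
	struct delayed_work time_work;
};

struct virtio_gpu_cached_bo {
	struct list_head lru_entry;
	unsigned long free_time;	/* jiffies when the BO went idle */
};

static void virtio_gpu_bo_cache_time_work(struct work_struct *work)
{
	struct virtio_gpu_bo_cache *cache =
		container_of(to_delayed_work(work),
			     struct virtio_gpu_bo_cache, time_work);
	unsigned long expires = jiffies - BO_CACHE_TIMEOUT;
	struct virtio_gpu_cached_bo *bo, *tmp;

	mutex_lock(&cache->lock);
	list_for_each_entry_safe(bo, tmp, &cache->lru, lru_entry) {
		if (time_after(bo->free_time, expires))
			break;		/* the rest of the list is newer */
		list_del(&bo->lru_entry);
		/* A hypothetical helper would release the BO's pages here. */
	}
	/* Re-arm the timer while anything is still cached. */
	if (!list_empty(&cache->lru))
		schedule_delayed_work(&cache->time_work, BO_CACHE_TIMEOUT);
	mutex_unlock(&cache->lock);
}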
