Date:   Fri, 23 Jun 2017 15:49:10 +0800
From:   Zhi Wang <zhi.a.wang@...el.com>
To:     Gerd Hoffmann <kraxel@...hat.com>,
        Alex Williamson <alex.williamson@...hat.com>
CC:     "Wang, Zhenyu Z" <zhenyu.z.wang@...el.com>,
        "intel-gfx@...ts.freedesktop.org" <intel-gfx@...ts.freedesktop.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Chen, Xiaoguang" <xiaoguang.chen@...el.com>,
        "Zhang, Tina" <tina.zhang@...el.com>,
        Kirti Wankhede <kwankhede@...dia.com>,
        "Lv, Zhiyuan" <zhiyuan.lv@...el.com>,
        "intel-gvt-dev@...ts.freedesktop.org" 
        <intel-gvt-dev@...ts.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH v9 5/7] vfio: Define vfio based dma-buf operations

Hi:
     Thanks for the discussion! If the userspace application already 
maintains an LRU list, it looks like we don't need the generation field 
anymore, as the userspace application will look up the guest framebuffer 
in the LRU list by "offset". Either way, it would know whether this is a 
new guest framebuffer or not. If it's a new guest framebuffer, a new 
dmabuf fd should be generated; if it's an old framebuffer, userspace can 
just display it.
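
To make that concrete, a minimal userspace sketch (all names below are
placeholders, not the actual v9 interface; create_dmabuf_fd() stands in
for the real ioctl) could look like this:

#include <stdint.h>
#include <stdlib.h>

/* placeholder for the real "create a dmabuf fd for this plane" ioctl */
extern int create_dmabuf_fd(int vfio_device_fd, uint64_t offset);

struct fb_cache_entry {
	uint64_t offset;		/* offset reported by the plane query */
	int dmabuf_fd;			/* dmabuf fd created for that offset */
	struct fb_cache_entry *next;
};

static struct fb_cache_entry *fb_cache;	/* simple cache; LRU eviction omitted */

int get_dmabuf_for_plane(int vfio_device_fd, uint64_t offset)
{
	struct fb_cache_entry *e;

	for (e = fb_cache; e; e = e->next)
		if (e->offset == offset)
			return e->dmabuf_fd;	/* old framebuffer: reuse the fd */

	/* new framebuffer: create a fresh dmabuf fd and remember it */
	int fd = create_dmabuf_fd(vfio_device_fd, offset);
	if (fd < 0)
		return fd;

	e = calloc(1, sizeof(*e));
	if (!e)
		return -1;
	e->offset = offset;
	e->dmabuf_fd = fd;
	e->next = fb_cache;
	fb_cache = e;
	return fd;
}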

Thanks,
Zhi.

On 06/23/17 15:26, Gerd Hoffmann wrote:
>    Hi,
>
>> Is this only going to support accelerated driver output, not basic
>> VGA
>> modes for BIOS interaction?
> Right now there is no vgabios or uefi support for the vgpu.
>
> But even with that in place there still is the problem that the display
> device initialization happens before the guest runs and therefore there
> isn't a plane yet ...
>
>>> Right now the experimental intel patches throw errors in case no
>>> plane
>>> exists (yet).  Maybe we should have a "bool is_enabled" field in
>>> the
>>> plane_info struct, so drivers can use that to signal whenever the
>>> guest
>>> has programmed a valid video mode or not (likewise for the cursor,
>>> which doesn't exist with fbcon, only when running xorg).  With that
>>> in
>>> place using the QUERY_PLANE ioctl also for probing looks
>>> reasonable.
>> Sure, or -ENOTTY for ioctl not implemented vs -EINVAL for no
>> available
>> plane, but then that might not help the user know how a plane would
>> be
>> available if it were available.
> So maybe a "enum plane_state" (instead of "bool is_enabled")?  So we
> can clearly disturgish ENABLED, DISABLED, NOT_SUPPORTED cases?
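
Just to make that concrete, such an enum could look roughly like this
(illustrative only, not part of the v9 patches), so userspace can tell
"disabled right now" apart from "never available":

enum vfio_plane_state {
	VFIO_PLANE_STATE_NOT_SUPPORTED = 0,	/* device never exposes this plane */
	VFIO_PLANE_STATE_DISABLED      = 1,	/* supported, but no valid mode yet */
	VFIO_PLANE_STATE_ENABLED       = 2,	/* guest has programmed a valid mode */
};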
>
>>> Yes, I'd leave that to userspace.  So, when the generation changes
>>> userspace knows the guest changed the plane.  It could be a
>>> configuration the guest has used before (and where userspace could
>>> have
>>> a cached dma-buf handle for), or it could be something new.
>> But userspace also doesn't know whether a dmabuf generation will ever
>> be visited again,
> kernel wouldn't know either, only the guest knows ...
>
>> so they're bound to have some stale descriptors.  Are
>> we thinking userspace would have some LRU list of dmabufs so that
>> they
>> don't collect too many?  Each uses some resources; do we just rely
>> on
>> open file handles to set an upper limit?
> Yep, this is exactly what my qemu patches are doing: keep an LRU list.
>   
>>>> What happens to
>>>> existing dmabuf fds when the generation updates, do they stop
>>>> refreshing?
>>> Depends on what the guest is doing ;)
>>>
>>> The dma-buf is just a host-side handle for the piece of video
>>> memory
>>> where the guest stored the framebuffer.
>> So the resources the user is holding if they don't release their
>> dmabuf
>> are potentially non-trivial.
> Not really.  Host IGD has a certain amount of memory, some of it is
> assigned to the guest, guest stores the framebuffer there, the dma-buf
> is a host handle (drm object, usable for rendering ops) for the guest
> framebuffer.  So it doesn't use many resources.  Some memory is needed
> for management structs, but not for the actual data, as that lives in the
> video memory dedicated to the guest.
>
>>> Ok, if we want to support multiple regions.  Do we?  Using the offset
>>> we
>>> can place multiple planes in a single region.  And I'm not sure
>>> nvidia
>>> plans to use multiple planes in the first place ...
>> I don't want to take a driver ioctl interface as a throw away, one
>> time
>> use exercise.  If we can think of such questions now, let's define
>> how
>> they work.  A device could have multiple graphics regions with
>> multiple
>> planes within each region.
> I'd suggest settling for one of these two: either one region with
> multiple planes inside (using offset), or one region per plane.  I'd
> prefer the former.  If we go for the latter, then yes, we have to
> specify the region.  I'd name the field region_id then, to make clear
> what it is.
>
> What would be the use case for multiple planes?
>
> cursor support?  We already have plane_type for that.
>
> multihead support?  We'll need (at minimum) a head_id field for that
> (for both dma-buf and region).
>
> pageflipping support?  Nothing extra is needed; query_plane will simply
> return the currently visible plane.  The region only needs to be big
> enough to fit the framebuffer twice.  Then the driver can flip between
> two buffers, pointing to the one qemu should display using "offset".
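
A minimal sketch of that flip, with hypothetical field names:

#include <stdint.h>

struct plane_info {
	uint64_t offset;	/* start of the visible framebuffer in the region */
	uint64_t fb_size;	/* size of one framebuffer */
};

/* the region holds two framebuffers back to back; flipping just toggles
 * which offset is reported for the currently visible one */
static void flip(struct plane_info *p)
{
	p->offset = (p->offset == 0) ? p->fb_size : 0;
}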
>
>> Do we also want to exclude that a device
>> needs to be strictly region or dmabuf?  Maybe it does both.
> Very unlikely IMHO.
>
>> Or maybe
>> it supports dmabuf-ng (ie. whatever comes next).
> That may happen some day, but who knows what interfaces we'll need to
> support it ...
>
>>> vfio_device_query {
>>>      u32 argsz;
>>>      u32 flags;
>>>      enum query_type;  /* or use flags for that */
>> We don't have an infinite number of ioctls
> The limited ioctl number space is a good reason indeed.
> Ok, let's take this route then.
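
A rough sketch of how such a multiplexed query could look (field names,
constants and layout are placeholders, not the final uapi):

#include <linux/types.h>

struct vfio_device_query {
	__u32 argsz;
	__u32 flags;
#define VFIO_DEVICE_QUERY_PLANE		(1 << 0)	/* hypothetical */
#define VFIO_DEVICE_QUERY_CURSOR	(1 << 1)	/* hypothetical */
	__u32 query_type;		/* or encode the type in flags */
	__u32 pad;
	__u8  data[];			/* query-specific payload */
};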
>
> cheers,
>    Gerd
>
> _______________________________________________
> intel-gvt-dev mailing list
> intel-gvt-dev@...ts.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gvt-dev
