Message-ID: <20170425102607.GL30290@intel.com>
Date:   Tue, 25 Apr 2017 13:26:07 +0300
From:   Ville Syrjälä <ville.syrjala@...ux.intel.com>
To:     Michel Dänzer <michel@...nzer.net>
Cc:     amd-gfx@...ts.freedesktop.org,
        open list <linux-kernel@...r.kernel.org>,
        dri-devel@...ts.freedesktop.org, Gerd Hoffmann <kraxel@...hat.com>,
        Daniel Vetter <daniel.vetter@...el.com>,
        Christian König <christian.koenig@....com>
Subject: Re: [PATCH] drm: fourcc byteorder: brings header file comments in
 line with reality.

On Tue, Apr 25, 2017 at 10:12:37AM +0900, Michel Dänzer wrote:
> On 24/04/17 10:03 PM, Ville Syrjälä wrote:
> > On Mon, Apr 24, 2017 at 03:57:02PM +0900, Michel Dänzer wrote:
> >> On 22/04/17 07:05 PM, Ville Syrjälä wrote:
> >>> On Fri, Apr 21, 2017 at 06:14:31PM +0200, Gerd Hoffmann wrote:
> >>>>   Hi,
> >>>>
> >>>>>> My personal opinion is that formats in drm_fourcc.h should be 
> >>>>>> independent of the CPU byte order and the function 
> >>>>>> drm_mode_legacy_fb_format() and drivers depending on that incorrect 
> >>>>>> assumption be fixed instead.
> >>>>>
> >>>>> The problem is this isn't a kernel-internal thing any more.  With the
> >>>>> addition of the ADDFB2 ioctl the fourcc codes became part of the
> >>>>> kernel/userspace abi ...
> >>>>
> >>>> Ok, added some printk's to the ADDFB and ADDFB2 code paths and tested a
> >>>> bit.  Apparently pretty much all userspace still uses the ADDFB ioctl.
> >>>> xorg (modesetting driver) does.  gnome-shell in wayland mode does.
> >>>> Seems the big transition to ADDFB2 hasn't happened yet.
> >>>>
> >>>> I guess that makes changing drm_mode_legacy_fb_format + drivers a
> >>>> reasonable option ...
> >>>
> >>> Yeah, I came to the same conclusion after chatting with some
> >>> folks on irc.
> >>>
> >>> So my current idea is that we change any driver that wants to follow the
> >>> CPU endianness
> >>
> >> This isn't really optional for various reasons, some of which have been
> >> covered in this discussion.
> >>
> >>
> >>> to declare support for big endian formats if the CPU is
> >>> big endian. Presumably these are mostly the virtual GPU drivers.
> >>>
> >>> Additionally we'll make the mapping performed by drm_mode_legacy_fb_format()
> >>> driver controlled. That way drivers that got changed to follow CPU
> >>> endianness can return a framebuffer that matches CPU endianness. And
> >>> drivers that expect the GPU endianness to not depend on the CPU
> >>> endianness will keep working as they do now. The downside is that users
> >>> of the legacy addfb ioctl will need to magically know which endianness
> >>> they will get, but that is apparently already the case. And users of
> >>> addfb2 will keep on specifying the endianness explicitly with
> >>> DRM_FORMAT_BIG_ENDIAN vs. 0.
> >>
> >> I'm afraid it's not that simple.
> >>
> >> The display hardware of older (pre-R600 generation) Radeon GPUs does not
> >> support the "big endian" formats directly. In order to allow userspace
> >> to access pixel data in native endianness with the CPU, we instead use
> >> byte-swapping functionality which only affects CPU access.
> > 
> > OK, I'm getting confused. Based on our irc discussion I got the
> > impression you don't byte swap CPU accesses.
> 
> Sorry for the confusion. The radeon kernel driver does support
> byte-swapping for CPU access to VRAM with pre-R600 GPUs, and this is
> used for fbdev emulation. What I meant on IRC is that the xf86-video-ati
> radeon driver doesn't make use of this, mostly because it only applies
> while a BO is in VRAM, and userspace can't control when that's the case
> (while a BO isn't being scanned out).

That was my other question. If someone just creates a bo, I presume TTM
can move it between system memory and VRAM more or less at any time. So
if we then mmap the bo, does that mean the CPU will see the bytes in a
different order depending on where the bo happens to live at the time
the access happens?

And how would that work with dumb bos? Would they be forced to live in
VRAM? I see it's passing VRAM as the initial domain, but I can't quickly
tell whether that means it can't even be moved out.
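
To make that concrete, the access path I have in mind is roughly the
generic dumb-buffer one (a rough, untested sketch; error handling
trimmed):

#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>

static int map_dumb_bo(int fd, uint32_t w, uint32_t h, void **map_out)
{
	struct drm_mode_create_dumb create = {
		.width  = w,
		.height = h,
		.bpp    = 32,
	};
	struct drm_mode_map_dumb mreq = {0};

	/* radeon seems to place this in VRAM initially; whether TTM may
	 * migrate it later is the question above. */
	if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create))
		return -1;

	mreq.handle = create.handle;
	if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq))
		return -1;

	/* CPU view of the BO; does the byte order seen through this
	 * mapping change when the BO moves between VRAM and system
	 * memory? */
	*map_out = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, mreq.offset);
	return *map_out == MAP_FAILED ? -1 : 0;
}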

> 
> 
> > But since you do, how do you deal with mixing 8bpp vs. 16bpp vs. 32bpp?
> 
> The byte-swapping is configured per-BO via the
> RADEON_TILING_SWAP_16/32BIT flags.

Which translates into usage of the surface regs, it seems. So I wasn't
totally crazy to think that such things existed :)
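
If I'm reading the radeon uapi right, userspace would opt in per-BO with
something like this (untested sketch):

#include <stdint.h>
#include <xf86drm.h>
#include <drm/radeon_drm.h>

/* Per-BO opt-in to the CPU byte swapping described above, via the
 * radeon tiling flags. Only affects CPU access while the BO sits in
 * VRAM; GPU access is unaffected. */
static int radeon_bo_swap_32bit(int fd, uint32_t handle, uint32_t pitch)
{
	struct drm_radeon_gem_set_tiling args = {
		.handle       = handle,
		.tiling_flags = RADEON_TILING_SWAP_32BIT,
		.pitch        = pitch,
	};

	return drmCommandWriteRead(fd, DRM_RADEON_GEM_SET_TILING,
				   &args, sizeof(args));
}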

-- 
Ville Syrjälä
Intel OTC
