Message-ID: <92ff14e4-b3a3-8bc8-b30b-778c63556534@daenzer.net>
Date:   Tue, 25 Apr 2017 12:08:58 +0900
From:   Michel Dänzer <michel@...nzer.net>
To:     Ville Syrjälä <ville.syrjala@...ux.intel.com>
Cc:     open list <linux-kernel@...r.kernel.org>,
        dri-devel@...ts.freedesktop.org, amd-gfx@...ts.freedesktop.org,
        Daniel Vetter <daniel.vetter@...el.com>,
        Christian König <christian.koenig@....com>,
        Gerd Hoffmann <kraxel@...hat.com>
Subject: Re: [PATCH] drm: fourcc byteorder: brings header file comments in
 line with reality.

On 25/04/17 10:12 AM, Michel Dänzer wrote:
> On 24/04/17 10:03 PM, Ville Syrjälä wrote:
>> On Mon, Apr 24, 2017 at 03:57:02PM +0900, Michel Dänzer wrote:
>>> On 22/04/17 07:05 PM, Ville Syrjälä wrote:
>>>> On Fri, Apr 21, 2017 at 06:14:31PM +0200, Gerd Hoffmann wrote:
>>>>>   Hi,
>>>>>
>>>>>>> My personal opinion is that formats in drm_fourcc.h should be 
>>>>>>> independent of the CPU byte order and the function 
>>>>>>> drm_mode_legacy_fb_format() and drivers depending on that incorrect 
>>>>>>> assumption be fixed instead.
>>>>>>
>>>>>> The problem is this isn't a kernel-internal thing any more.  With the
>>>>>> addition of the ADDFB2 ioctl the fourcc codes became part of the
>>>>>> kernel/userspace abi ...
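
[Editor's note: for reference, a DRM fourcc code packs four ASCII characters into a u32, least-significant byte first, with bit 31 reserved for DRM_FORMAT_BIG_ENDIAN. A minimal sketch of how drm_fourcc.h's fourcc_code() macro composes them, checked against the XRGB8888 definition:]

```python
# Sketch of drm_fourcc.h's fourcc_code() macro: four ASCII chars
# packed into a 32-bit value, least significant byte first.
def fourcc_code(a, b, c, d):
    return ord(a) | ord(b) << 8 | ord(c) << 16 | ord(d) << 24

# Bit 31 flags a big-endian variant of the format (ADDFB2 only).
DRM_FORMAT_BIG_ENDIAN = 1 << 31

DRM_FORMAT_XRGB8888 = fourcc_code('X', 'R', '2', '4')  # 0x34325258
```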
>>>>>
>>>>> Ok, added some printk's to the ADDFB and ADDFB2 code paths and tested a
>>>>> bit.  Apparently pretty much all userspace still uses the ADDFB ioctl.
>>>>> xorg (modesetting driver) does.  gnome-shell in wayland mode does.
>>>>> Seems the big transition to ADDFB2 didn't happen yet.
>>>>>
>>>>> I guess that makes changing drm_mode_legacy_fb_format + drivers a
>>>>> reasonable option ...
>>>>
>>>> Yeah, I came to the same conclusion after chatting with some
>>>> folks on irc.
>>>>
>>>> So my current idea is that we change any driver that wants to follow the
>>>> CPU endianness
>>>
>>> This isn't really optional for various reasons, some of which have been
>>> covered in this discussion.
>>>
>>>
>>>> to declare support for big endian formats if the CPU is
>>>> big endian. Presumably these are mostly the virtual GPU drivers.
>>>>
>>>> Additionally we'll make the mapping performed by drm_mode_legacy_fb_format()
>>>> driver controlled. That way drivers that got changed to follow CPU
>>>> endianness can return a framebuffer that matches CPU endianness. And
>>>> drivers that expect the GPU endianness to not depend on the CPU
>>>> endianness will keep working as they do now. The downside is that users
>>>> of the legacy addfb ioctl will need to magically know which endianness
>>>> they will get, but that is apparently already the case. And users of
>>>> addfb2 will keep on specifying the endianness explicitly with
>>>> DRM_FORMAT_BIG_ENDIAN vs. 0.
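
[Editor's note: the legacy mapping in question takes the addfb ioctl's bpp/depth pair and picks a fourcc. A rough approximation of drm_mode_legacy_fb_format()'s table, plus the proposed per-driver behaviour that would OR in DRM_FORMAT_BIG_ENDIAN on big-endian CPUs; the `follows_cpu_endianness` hook shown here is hypothetical, not an existing kernel interface:]

```python
import sys

def fourcc_code(a, b, c, d):
    return ord(a) | ord(b) << 8 | ord(c) << 16 | ord(d) << 24

DRM_FORMAT_BIG_ENDIAN = 1 << 31

# Approximation of drm_mode_legacy_fb_format()'s (bpp, depth) table.
LEGACY_FORMATS = {
    (8, 8):   fourcc_code('C', '8', ' ', ' '),  # DRM_FORMAT_C8
    (16, 15): fourcc_code('X', 'R', '1', '5'),  # DRM_FORMAT_XRGB1555
    (16, 16): fourcc_code('R', 'G', '1', '6'),  # DRM_FORMAT_RGB565
    (24, 24): fourcc_code('R', 'G', '2', '4'),  # DRM_FORMAT_RGB888
    (32, 24): fourcc_code('X', 'R', '2', '4'),  # DRM_FORMAT_XRGB8888
    (32, 32): fourcc_code('A', 'R', '2', '4'),  # DRM_FORMAT_ARGB8888
}

# Hypothetical driver hook: a driver that follows CPU endianness
# would tag the legacy format as big endian on a big-endian host;
# other drivers keep returning the plain little-endian fourcc.
def legacy_fb_format(bpp, depth, follows_cpu_endianness):
    fmt = LEGACY_FORMATS[(bpp, depth)]
    if follows_cpu_endianness and sys.byteorder == 'big':
        fmt |= DRM_FORMAT_BIG_ENDIAN
    return fmt
```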
>>>
>>> I'm afraid it's not that simple.
>>>
>>> The display hardware of older (pre-R600 generation) Radeon GPUs does not
>>> support the "big endian" formats directly. In order to allow userspace
>>> to access pixel data in native endianness with the CPU, we instead use
>>> byte-swapping functionality which only affects CPU access.
>>
>> OK, I'm getting confused. Based on our irc discussion I got the
>> impression you don't byte swap CPU accesses.
> 
> Sorry for the confusion. The radeon kernel driver does support
> byte-swapping for CPU access to VRAM with pre-R600 GPUs, and this is
> used for fbdev emulation. What I meant on IRC is that the xf86-video-ati
> radeon driver doesn't make use of this, mostly because it only applies
> while a BO is in VRAM, and userspace can't control when that's the case
> (while a BO isn't being scanned out).
> 
> 
>> But since you do, how do you deal with mixing 8bpp vs. 16bpp vs. 32bpp?
> 
> The byte-swapping is configured per-BO via the
> RADEON_TILING_SWAP_16/32BIT flags.

... which means that it's disabled by default, so it shouldn't affect
generic userspace. So exposing the GPU format directly should be
feasible in this case as well after all. Sorry for the noise. :(
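
[Editor's note: to illustrate what the RADEON_TILING_SWAP_32BIT path does conceptually (this models only the effect on CPU-visible bytes, not the kernel API): the swap reverses bytes within each 32-bit word on CPU access, so a little-endian pixel value written by the GPU appears in native order to a big-endian CPU:]

```python
import struct

def swap32(buf):
    """Reverse bytes within each 32-bit word, modelling what the
    32-bit swap does for CPU accesses to VRAM."""
    n = len(buf) // 4
    return struct.pack(f'>{n}I', *struct.unpack(f'<{n}I', buf))

# A little-endian XRGB8888 pixel 0xFF112233 as laid out in VRAM:
vram = bytes([0x33, 0x22, 0x11, 0xFF])
# Through the swapped aperture, a big-endian CPU reads the same
# word in its native byte order:
assert swap32(vram) == bytes([0xFF, 0x11, 0x22, 0x33])
# Applying the swap twice is the identity:
assert swap32(swap32(vram)) == vram
```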


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer
