Date: Fri, 21 Jun 2024 18:37:23 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Javier Martinez Canillas <javierm@...hat.com>
Cc: Jocelyn Falempe <jfalempe@...hat.com>, linux-kernel@...r.kernel.org,
	Maxime Ripard <mripard@...nel.org>, Daniel Vetter <daniel@...ll.ch>,
	David Airlie <airlied@...il.com>,
	Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
	Thomas Zimmermann <tzimmermann@...e.de>,
	dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH] drm/ssd130x: Add drm_panic support

On Fri, Jun 21, 2024 at 05:42:53PM +0200, Javier Martinez Canillas wrote:
> Javier Martinez Canillas <javierm@...hat.com> writes:
> 
> > Jocelyn Falempe <jfalempe@...hat.com> writes:
> >
> > Hello Jocelyn, thanks for your feedback!
> >
> >> On 21/06/2024 00:22, Javier Martinez Canillas wrote:
> >>> Add support for the drm_panic infrastructure, which allows displaying
> >>> a user-friendly message on the screen when a Linux kernel panic occurs.
> >>> 
> >>> The display controller doesn't scan out the framebuffer; instead the
> >>> pixels are sent to the device over a transport bus. For this reason, a
> >>> .panic_flush handler is needed to flush the panic image to the display.
> >>
> >> Thanks for this patch, that's really cool that drm_panic can work on 
> >> this device too.
> >>
> >
> > Indeed, that's why I did it. Just to see if it could work :)
> >
> > [...]
> >
> >>> +static void ssd130x_primary_plane_helper_panic_flush(struct drm_plane *plane)
> >>> +{
> >>> +	struct drm_plane_state *plane_state = plane->state;
> >>> +	struct ssd130x_plane_state *ssd130x_plane_state = to_ssd130x_plane_state(plane_state);
> >>> +	struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
> >>> +	struct drm_crtc *crtc = plane_state->crtc;
> >>> +	struct ssd130x_crtc_state *ssd130x_crtc_state = to_ssd130x_crtc_state(crtc->state);
> >>> +
> >>> +	ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &plane_state->dst,
> >>> +			     ssd130x_plane_state->buffer, ssd130x_crtc_state->data_array,
> >>> +			     &shadow_plane_state->fmtcnv_state);
> >>
> >> ssd130x_fb_blit_rect() will call regmap->write(), which involves a mutex
> >> and might sleep. If that mutex is already held when the panic occurs, it
> >> might deadlock the panic handling.
> >
> > That's a good point and something I hadn't considered...
> >
> >> One solution would be to configure the regmap with config->fast_io and
> >> config->use_raw_spinlock, and check that the lock is available with
> >> try_lock(map->raw_spin_lock).
> >> But that means it will waste CPU cycles busy-waiting during normal
> >> operation, which is not good.
> >>
> >
> > Yeah, I would prefer not to change the driver for normal operation.
> >
> 
> Another option, which I believe makes more sense, is to just disable the
> regmap locking (using the struct regmap_config.disable_locking field [0]).
>
> This regmap is not shared with other drivers, so any concurrent access
> should already be prevented by the DRM core locking scheme.
> 
> Is my understanding correct?
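A minimal sketch of that alternative, again with assumed register/value
widths and only the locking-related field being the point:

	static const struct regmap_config ssd130x_regmap_config = {
		.reg_bits = 8,	/* assumed widths, for illustration only */
		.val_bits = 8,
		/*
		 * No internal mutex or spinlock at all, so there is nothing
		 * for the panic path to deadlock on. This relies on the DRM
		 * core serializing accesses during normal operation, which
		 * is exactly the assumption being questioned above.
		 */
		.disable_locking = true,
	};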

Quick irc discussion summary: Since these are panels that sit on i2c/spi
buses, you need to put the raw spinlock panic locking into these
subsystems. Which is going to be extreme amounts of fun, because:

- You need to protect the innermost register accesses with a raw spinlock,
  but at a scope wide enough that every access is still consistent.

- You need separate panic paths which avoid all the existing subsystem
  locking (i2c/spi have userspace APIs, so they need mutexes) and only
  rely on the caller having done the raw spinlock trylocking.

- Bonus points: Who even owns that raw spinlock ...

I'm afraid this is going to be a tough nut to crack :-/
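
To make that shape a bit more concrete, a purely hypothetical sketch of
what a panic-only transfer path in the i2c core could look like; nothing
here exists today, i2c_transfer_panic() and the panic lock are only meant
to illustrate the points above:

	/* Hypothetical, for illustration only. Who owns this lock is the
	 * open question from the last point above. */
	static DEFINE_RAW_SPINLOCK(i2c_panic_lock);

	int i2c_transfer_panic(struct i2c_adapter *adap, struct i2c_msg *msgs,
			       int num)
	{
		int ret;

		/* Panic context: never block, give up if anyone holds the lock. */
		if (!raw_spin_trylock(&i2c_panic_lock))
			return -EBUSY;

		/*
		 * Call into the adapter without taking adap->bus_lock or any
		 * other mutex the normal i2c_transfer() path relies on. The
		 * adapter's transfer callback would still need an atomic-safe
		 * variant for this to actually work.
		 */
		ret = __i2c_transfer(adap, msgs, num);

		raw_spin_unlock(&i2c_panic_lock);
		return ret;
	}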

Cheers, Sima
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
