Date:   Thu, 1 Oct 2020 14:49:26 -0700
From:   Evan Green <evgreen@...omium.org>
To:     Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
Cc:     Rob Herring <robh+dt@...nel.org>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Douglas Anderson <dianders@...omium.org>,
        Stephen Boyd <swboyd@...omium.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/3] nvmem: qfprom: Don't touch certain fuses

On Thu, Oct 1, 2020 at 9:30 AM Srinivas Kandagatla
<srinivas.kandagatla@...aro.org> wrote:
>
>
>
> On 01/10/2020 17:27, Evan Green wrote:
> > On Thu, Oct 1, 2020 at 7:17 AM Srinivas Kandagatla
> > <srinivas.kandagatla@...aro.org> wrote:
> >>
> >> Hi Evan,
> >>
> >> On 29/09/2020 21:58, Evan Green wrote:
> >>> Some fuse ranges are protected by the XPU such that the AP cannot
> >>> access them. Attempting to do so causes an SError. Use the newly
> >>> introduced per-soc compatible string to attach the set of regions
> >>> we should not access. Then tiptoe around those regions.
> >>>
> >>
> >> This is a generic feature that can be used by any nvmem provider; can
> >> you move this logic to the nvmem core instead of having it in qfprom?
> >
> > Sure! I'd prefer to keep this data in the driver for now rather than
> Of course, these can come from the driver directly, based on the compatible string!
>
> > trying to define DT bindings for the keepout zones. So then I'll pass
> > in my keepout array via struct nvmem_config at registration time, and
> > then the core can handle the keepout logic instead of qfprom.c.
> >
>
> Yes, that is in line with what I am thinking as well!
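
For illustration only, the registration-time handoff being discussed above
might look roughly like the sketch below; the struct, field names, and
offsets here are hypothetical, not an existing nvmem API:

/*
 * Hypothetical sketch: per-SoC keepout ranges handed to the nvmem core
 * at registration time. Struct, field names, and offsets are
 * illustrative only.
 */
struct nvmem_keepout {
	unsigned int start;	/* first protected byte offset */
	unsigned int end;	/* one past the last protected byte */
	unsigned char value;	/* value returned for reads in the range */
};

static const struct nvmem_keepout example_qfprom_keepout[] = {
	{ .start = 0x120, .end = 0x148 },	/* placeholder XPU-protected region */
};

/* In the provider's probe(), selected by the per-SoC compatible string: */
econfig.keepout = example_qfprom_keepout;
econfig.nkeepout = ARRAY_SIZE(example_qfprom_keepout);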

Oh no, I realized this isn't nearly as beautiful when I try to move it
into the core. The low-level read/write interface between the nvmem
core and the driver is a range. So to move this into the core, I'd have
to implement all the overlap computation logic to potentially break up
a read into several small reads in cases where there are many little
keepout ranges. It was much simpler when I could just check each byte
offset individually, and because I was doing it in this one
rarely-used driver, I could make that performance tradeoff without much
penalty.
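
A minimal sketch of that overlap computation, assuming the hypothetical
keepout table above is sorted by start offset and non-overlapping;
the function and the keepout/nkeepout fields are illustrative, not
existing nvmem core internals:

/*
 * Minimal sketch: split one read into sub-reads around keepout ranges.
 * Assumes the keepout array is sorted and non-overlapping; all names
 * are illustrative, not existing nvmem core code.
 */
static int nvmem_read_with_keepouts(struct nvmem_device *nvmem, void *buf,
				    unsigned int offset, size_t bytes)
{
	const struct nvmem_keepout *ko = nvmem->keepout;
	unsigned int end = offset + bytes;
	unsigned int len;
	int ret, i;

	for (i = 0; i < nvmem->nkeepout && offset < end; i++, ko++) {
		/* Read the accessible chunk before this keepout, if any. */
		if (offset < ko->start) {
			len = min(end, ko->start) - offset;
			ret = nvmem->reg_read(nvmem->priv, offset, buf, len);
			if (ret)
				return ret;
			buf += len;
			offset += len;
		}

		/* Skip the protected chunk, filling it with a fixed value. */
		if (offset < end && offset < ko->end) {
			len = min(end, ko->end) - offset;
			memset(buf, ko->value, len);
			buf += len;
			offset += len;
		}
	}

	/* Read whatever remains past the last keepout. */
	if (offset < end)
		return nvmem->reg_read(nvmem->priv, offset, buf, end - offset);

	return 0;
}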

I could do all the range/overlap handling in the core if you want, but
it'll be a bigger change, and I worry my driver would be the only one
to end up using it. What do you think?
-Evan

>
>
> --srini
> > -Evan
> >
