Message-ID: <YAWBvVPESMsRvacj@myrica>
Date: Mon, 18 Jan 2021 13:40:29 +0100
From: Jean-Philippe Brucker <jean-philippe@...aro.org>
To: John Garry <john.garry@...wei.com>
Cc: joro@...tes.org, will@...nel.org, linux-kernel@...r.kernel.org,
linuxarm@...wei.com, iommu@...ts.linux-foundation.org,
robin.murphy@....com
Subject: Re: [PATCH v4 2/3] iommu/iova: Avoid double-negatives in magazine
helpers
On Mon, Jan 18, 2021 at 10:55:52AM +0000, John Garry wrote:
> On 18/01/2021 10:08, Jean-Philippe Brucker wrote:
> > > > Any idea why that's happening? This fix seems ok but if we're expecting
> > > > allocation failures for the loaded magazine then we could easily get it
> > > > for cpu_rcaches too, and get a similar abort at runtime.
> > > It's not specifically that we expect them (allocation failures for the
> > > loaded magazine), rather that we should make the code safe against them.
> > >
> > > So could you be more specific about your concern for the cpu_rcache
> > > failure? The cpu_rcache magazine assignment comes from this logic.
> > If this fails:
> >
> > drivers/iommu/iova.c:847: rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), cache_line_size());
> >
> > then we'll get an Oops in __iova_rcache_get(). So if we're making the
> > module safer against magazine allocation failure, shouldn't we also
> > protect against cpu_rcaches allocation failure?
>
> Ah, gotcha. So we have the WARN there, but that's not much use as this would
> still crash, as you say.
>
> So maybe we can embed the cpu rcaches in the iova_domain struct, to avoid
> the separate (fallible) cpu rcache allocation.
>
> Alternatively, we could add NULL checks in __iova_rcache_get() et al for
> this allocation failure, but that's not preferable as it's on the fastpath.
>
> Finally, we could pass back an error code from init_iova_rcaches() to its
> only caller, init_iova_domain(); but init_iova_domain() has multiple
> callers which would all need to be fixed up.
>
> Not sure which is best, or whether there are other options.
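For reference, a compressed view of the failure mode discussed above,
abbreviated from drivers/iommu/iova.c (the "..." comments stand in for the
existing logic, which is unchanged):

static void init_iova_rcaches(struct iova_domain *iovad)
{
	struct iova_cpu_rcache *cpu_rcache;
	struct iova_rcache *rcache;
	int i;

	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
		rcache = &iovad->rcaches[i];
		/* Allocation failure is warned about but not handled: */
		rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache),
						     cache_line_size());
		if (WARN_ON(!rcache->cpu_rcaches))
			continue;
		/* ... per-cpu magazine setup ... */
	}
}

static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
				       unsigned long limit_pfn)
{
	struct iova_cpu_rcache *cpu_rcache;
	unsigned long flags;

	/* If cpu_rcaches is NULL this computes a bogus per-cpu pointer,
	 * and the spin_lock below Oopses: */
	cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
	spin_lock_irqsave(&cpu_rcache->lock, flags);
	/* ... magazine lookup ... */
}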
I would initially have gone with option 2, which seems the simplest, but I
don't have a setup to measure that fastpath overhead (sketch below).
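Untested, but I imagine option 2 would look something like the below: one
extra test-and-branch at the top of the fastpath, falling back to the
rbtree allocation when the per-cpu caches could not be allocated:

static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
				       unsigned long limit_pfn)
{
	struct iova_cpu_rcache *cpu_rcache;
	unsigned long iova_pfn = 0;
	unsigned long flags;

	/* New: tolerate __alloc_percpu() failure at init time. Returning
	 * 0 makes the caller fall back to the rbtree path. */
	if (unlikely(!rcache->cpu_rcaches))
		return 0;

	cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
	spin_lock_irqsave(&cpu_rcache->lock, flags);
	/* ... existing magazine lookup, unchanged ... */
	spin_unlock_irqrestore(&cpu_rcache->lock, flags);

	return iova_pfn;
}

__iova_rcache_insert() and free_iova_rcaches() would need the same guard,
and whether the extra branch is measurable on the fastpath is the open
question.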
Thanks,
Jean