Message-ID: <20170718202107.3hsptpdspr26snxc@redhat.com>
Date: Tue, 18 Jul 2017 16:21:07 -0400
From: Joe Lawrence <joe.lawrence@...hat.com>
To: Miroslav Benes <mbenes@...e.cz>
Cc: live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jessica Yu <jeyu@...hat.com>, Jiri Kosina <jikos@...nel.org>,
Petr Mladek <pmladek@...e.com>
Subject: Re: [PATCH v2 1/2] livepatch: introduce shadow variable API
On Mon, Jul 17, 2017 at 05:29:41PM +0200, Miroslav Benes wrote:
>
> On Wed, 28 Jun 2017, Joe Lawrence wrote:
>
> > +Brief API summary
> > +-----------------
> > + [ ... snip ...]
> > +* klp_shadow_detach() - detach and free all <*, num> shadow variables
> > + - find and remove any <*, num> references from hashtable
> > + - if found, release shadow variable
>
> I think that the second one should be klp_shadow_detach_all(), shouldn't
> it?
Good catch, I'll fixup in v3.
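To spell out the intended split (illustrative usage only; SHADOW_DATA is a
made-up id and the final v3 names are still to be settled):

  /* remove and free the shadow variable attached to a single object */
  klp_shadow_detach(obj, SHADOW_DATA);

  /* remove and free every <*, SHADOW_DATA> shadow variable, e.g. when
   * the livepatch module is unloaded */
  klp_shadow_detach_all(SHADOW_DATA);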
> > +static DEFINE_HASHTABLE(klp_shadow_hash, 12);
>
> Is there a reason, why you pick 12? I'm just curious.
The hashtable bit-size was inherited from the kpatch implementation.
Perhaps Josh knows why this value was picked?
Aside: we could have per-livepatch hashtables if that were desired, in
which case this value could be adjusted accordingly. We haven't needed
them for kpatch, so I didn't see a good reason to complicate things.
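For reference, the second argument to DEFINE_HASHTABLE() is the number of
hash bits, so 12 gives a fixed table of 2^12 = 4096 buckets:

  #include <linux/hashtable.h>

  /* 12 hash bits -> 1 << 12 == 4096 hlist_head buckets */
  static DEFINE_HASHTABLE(klp_shadow_hash, 12);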
> > +static DEFINE_SPINLOCK(klp_shadow_lock);
> > +
> > +/**
> > + * struct klp_shadow - shadow variable structure
> > + * @node: klp_shadow_hash hash table node
> > + * @rcu_head: RCU is used to safely free this structure
> > + * @obj: pointer to original data
> > + * @num: numerical description of new data
>
> Josh proposed better description. Could we also have a note somewhere in
> the documentation what this member is practically for? I mean versioning
> and ability to attach new members to a data structure if live patches are
> stacked.
That's a good idea. I posted a sample doc-blurb about terminology in my
other reply to Petr.
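In the meantime, here is a rough usage sketch of the versioning idea (the
ids and data below are made up; the signatures follow this v2 posting):

  /* hypothetical shadow ids, one per "version" of attached data */
  #define PATCH1_SHADOW_LOCK   1   /* added by the first livepatch */
  #define PATCH2_SHADOW_COUNT  2   /* added by a stacked livepatch */

  int count = 0;
  int *shadow_count;

  /* the stacked patch tags its new per-object data with its own num ... */
  shadow_count = klp_shadow_attach(obj, PATCH2_SHADOW_COUNT, &count,
                                   sizeof(count), GFP_KERNEL);

  /* ... and later looks up only the data version it knows about */
  shadow_count = klp_shadow_get(obj, PATCH2_SHADOW_COUNT);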
> > + * @new_data: new data area
> > + */
> > +struct klp_shadow {
> > + struct hlist_node node;
> > + struct rcu_head rcu_head;
> > + void *obj;
> > + unsigned long num;
> > + char new_data[];
> > +};
>
> What is the reason to change 'void *new_data' to 'char new_data[]'? I
> assume it is related to API changes below...
>
> [...]
>
> > +/**
> > + * _klp_shadow_attach() - allocate and add a new shadow variable
> > + * @obj: pointer to original data
> > + * @num: numerical description of new data
> > + * @new_data: pointer to new data
> > + * @new_size: size of new data
> > + * @gfp_flags: GFP mask for allocation
> > + * @lock: take klp_shadow_lock during klp_shadow_hash operations
>
> I am not sure about lock argument. Do we need it? Common practice is to
> have function foo() which takes a lock, and function __foo() which does
> not.
>
> In klp_shadow_get_or_attach(), you use it as I'd expect. You take the
> spinlock, call this function and release the spinlock. Is it possible
> to do the same in klp_shadow_attach() and have __klp_shadow_attach()
> without lock argument?
Yes, that would be possible, though it would prevent klp_shadow_attach()
from accepting gfp_flags that allow sleeping. More on that below ...
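To make the tradeoff concrete, here is a rough sketch (not the actual
patch) of the conventional __foo()/foo() split:

  /* sketch only: lockless helper, caller must hold klp_shadow_lock */
  static void *__klp_shadow_attach(void *obj, unsigned long num,
                                   void *new_data, size_t new_size,
                                   gfp_t gfp_flags)
  {
          struct klp_shadow *shadow;

          shadow = kzalloc(new_size + sizeof(*shadow), gfp_flags);
          if (!shadow)
                  return NULL;

          shadow->obj = obj;
          shadow->num = num;
          if (new_data)
                  memcpy(shadow->new_data, new_data, new_size);

          hash_add_rcu(klp_shadow_hash, &shadow->node, (unsigned long)obj);

          return shadow->new_data;
  }

  void *klp_shadow_attach(void *obj, unsigned long num, void *new_data,
                          size_t new_size, gfp_t gfp_flags)
  {
          unsigned long flags;
          void *ret;

          spin_lock_irqsave(&klp_shadow_lock, flags);
          /* kzalloc() now runs under a spinlock, so gfp_flags that can
           * sleep (e.g. GFP_KERNEL) can no longer be passed in */
          ret = __klp_shadow_attach(obj, num, new_data, new_size, gfp_flags);
          spin_unlock_irqrestore(&klp_shadow_lock, flags);

          return ret;
  }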
> > + *
> > + * Note: allocates @new_size space for shadow variable data and copies
> > + * @new_size bytes from @new_data into the shadow variable's own @new_data
> > + * space. If @new_data is NULL, @new_size is still allocated, but no
> > + * copy is performed.
>
> I must say I'm not entirely happy with this. I don't know if this is what
> Petr had in mind (I'm sure he'll get to the patch set soon). Calling
> memcpy instead of a simple assignment in v1 seems worse.
This change was a bit of an experiment on my part in reaction to adding
klp_shadow_get_or_attach().
I like the simplicity of v1's pointer assignment. In fact, moving all
allocation responsibility (the klp_shadow meta-data and the data[] area)
out to the caller is doable, though it complicates
klp_shadow_get_or_attach() and klp_shadow_detach_all(), for example by
requiring alloc/release callbacks. I originally attempted this for v2,
but turned back when the API and implementation grew complicated. If the
memcpy and gfp_flag restrictions are too ugly, I can try revisiting that
approach. Ideas welcome :)
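For the curious, the caller-allocation variant would need extra plumbing
along these lines (illustrative only, not part of this patchset):

  /* illustrative: if callers owned the data allocation, detach-all would
   * need a way to hand ownership back, e.g. a release callback */
  typedef void (*klp_shadow_release_t)(void *obj, unsigned long num,
                                       void *data);

  void klp_shadow_detach_all(unsigned long num,
                             klp_shadow_release_t release);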
Regards,
-- Joe