Message-ID: <79756d67-0f6e-4d7d-1827-f1e275e65f27@redhat.com>
Date: Thu, 20 Jul 2017 16:30:37 -0400
From: Joe Lawrence <joe.lawrence@...hat.com>
To: Petr Mladek <pmladek@...e.com>
Cc: live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jessica Yu <jeyu@...hat.com>, Jiri Kosina <jikos@...nel.org>,
Miroslav Benes <mbenes@...e.cz>
Subject: Re: [PATCH v2 1/2] livepatch: introduce shadow variable API
On 07/18/2017 08:45 AM, Petr Mladek wrote:
> On Wed 2017-06-28 11:37:26, Joe Lawrence wrote:
>> diff --git a/Documentation/livepatch/shadow-vars.txt b/Documentation/livepatch/shadow-vars.txt
>> new file mode 100644
>> index 000000000000..7f28982e6b1c
>> --- /dev/null
>> +++ b/Documentation/livepatch/shadow-vars.txt
>> +Use cases
>> +---------
>> +
>> +See the example shadow variable livepatch modules in samples/livepatch
>> +for full working demonstrations.
>> +
>> +Example 1: Commit 1d147bfa6429 ("mac80211: fix AP powersave TX vs.
>> +wakeup race") added a spinlock to net/mac80211/sta_info.h :: struct
>> +sta_info. Implementing this change with a shadow variable is
>> +straightforward.
>> +
>> +Allocation - when a host sta_info structure is allocated, attach a
>> +shadow variable copy of the ps_lock:
>> +
>> +#define PS_LOCK 1
>> +struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
>> +                                const u8 *addr, gfp_t gfp)
>> +{
>> +        struct sta_info *sta;
>> +        spinlock_t *ps_lock;
>> +        ...
>> +        sta = kzalloc(sizeof(*sta) + hw->sta_data_size, gfp);
>
> klp_shadow_attach() does the allocation as well now.
> Note that we could pass an already initialized spinlock.
>
>> +        ...
>> +        ps_lock = klp_shadow_attach(sta, PS_LOCK, NULL, sizeof(*ps_lock), gfp);
>> +        if (!ps_lock)
>> +                goto shadow_fail;
>> +        spin_lock_init(ps_lock);
>> +        ...
>> +
>> +Usage - when using the shadow spinlock, query the shadow variable API to
>> +retrieve it:
>> +
>> +void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
>> +{
>> +        spinlock_t *ps_lock;
>> +        ...
>> +        /* sync with ieee80211_tx_h_unicast_ps_buf */
>> +        ps_lock = klp_shadow_get(sta, "ps_lock");
>
> s/"ps_lock"/PS_LOCK/
>
> The same problem is repeated many times below (also in the 2nd
> example).
>
> Also this is a nice example, where klp_shadow_get_or_attach()
> would be useful. It would fix even already existing instances.
>
> So, the code might look like:
>
> void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
> {
>         DEFINE_SPINLOCK(ps_lock_fallback);
>         spinlock_t *ps_lock;
>         ...
>         /* sync with ieee80211_tx_h_unicast_ps_buf */
>         ps_lock = klp_shadow_get_or_attach(sta, PS_LOCK,
>                         &ps_lock_fallback, sizeof(ps_lock_fallback),
>                         GFP_ATOMIC);
>
> It is a bit ugly that we always initialize ps_lock_fallback
> even when it is not used. But it helps to avoid a custom
> callback that would create the fallback variable. I think
> that it is an acceptable deal.
Yup, it's a tradeoff for the caller. If they want a shadow variable
added and considered "live", they better have previously initialized it :)
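To make that concrete, here's a quick sketch with a made-up structure
(struct foo_shadow, FOO_SHADOW_ID and the fields are hypothetical, not
from the patch). The fallback's contents get copied into the shadow
variable and become visible to any concurrent klp_shadow_get() as soon
as the attach succeeds, so they have to be valid beforehand:

        /* fully initialize the fallback copy first */
        struct foo_shadow fallback = {
                .refcount = 1,
                .flags = 0,
        };
        struct foo_shadow *sh;

        /* on success, sh points at live, already-valid shadow data */
        sh = klp_shadow_get_or_attach(foo, FOO_SHADOW_ID, &fallback,
                                      sizeof(fallback), GFP_ATOMIC);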
>
>> +        if (ps_lock)
>> +                spin_lock(ps_lock);
>> +        ...
>> +        if (ps_lock)
>> +                spin_unlock(ps_lock);
>> +        ...
>> +
>> +Release - when the host sta_info structure is freed, first detach the
>> +shadow variable and then free the shadow spinlock:
>> +
>> +void sta_info_free(struct ieee80211_local *local, struct sta_info *sta)
>> +{
>> +        spinlock_t *ps_lock;
>> +        ...
>> +        ps_lock = klp_shadow_get(sta, "ps_lock");
>> +        if (ps_lock)
>> +                klp_shadow_detach(sta, "ps_lock");
>
> Isn't klp_shadow_detach() enough? If it is an optimization,
> klp_shadow_detach() might get optimized the same way.
> But I am not sure if it is worth it.
Let me go back and review these inline examples for v3... I didn't
update them carefully enough when drafting v2.
>> +        kfree(sta);
>> +
>> +
>
>
>> diff --git a/kernel/livepatch/shadow.c b/kernel/livepatch/shadow.c
>> new file mode 100644
>> index 000000000000..d37a61c57e72
>> --- /dev/null
>> +++ b/kernel/livepatch/shadow.c
>> +/**
>> + * _klp_shadow_attach() - allocate and add a new shadow variable
>> + * @obj: pointer to original data
>> + * @num: numerical description of new data
>> + * @new_data: pointer to new data
>> + * @new_size: size of new data
>> + * @gfp_flags: GFP mask for allocation
>> + * @lock: take klp_shadow_lock during klp_shadow_hash operations
>> + *
>> + * Note: allocates @new_size space for shadow variable data and copies
>> + * @new_size bytes from @new_data into the shadow variable's own @new_data
>> + * space. If @new_data is NULL, @new_size is still allocated, but no
>> + * copy is performed.
>> + *
>> + * Return: the shadow variable new_data element, NULL on failure.
>> + */
>> +static void *_klp_shadow_attach(void *obj, unsigned long num, void *new_data,
>> +                                size_t new_size, gfp_t gfp_flags,
>> +                                bool lock)
>
> Nested implementation is usually prefixed by two underscores __.
> It is more visible and helps to distinguish it from the normal function.
Noted for v3.
>> +{
>> +        struct klp_shadow *shadow;
>> +        unsigned long flags;
>> +
>> +        shadow = kzalloc(new_size + sizeof(*shadow), gfp_flags);
>> +        if (!shadow)
>> +                return NULL;
>> +
>> +        shadow->obj = obj;
>> +        shadow->num = num;
>> +        if (new_data)
>> +                memcpy(shadow->new_data, new_data, new_size);
>> +
>> +        if (lock)
>> +                spin_lock_irqsave(&klp_shadow_lock, flags);
>> +        hash_add_rcu(klp_shadow_hash, &shadow->node, (unsigned long)obj);
>
> We should check if the shadow variable already existed. Otherwise,
> it would be possible to silently create many duplicates.
>
> It would make klp_shadow_attach() and klp_shadow_get_or_attach()
> behave the same.
They would be almost exactly the same, except one version would bounce a
redundant entry while the other would return the existing one. I could
envision callers wanting any of the following behaviors:
If a shadow <obj, id> already exists:
0 - add a second shadow variable (??? why)
1 - return NULL, WARN
2 - return the existing one
3 - update the existing one with the new data and return it
* v2 klp_shadow_attach() currently implements #0, can be made to do #1
* v2 klp_shadow_get_or_attach() currently implements #2, but maybe #3
makes more sense
Going back to existing kpatch use-cases: since we paired shadow variable
creation with the parent object's creation, -EEXIST was never an issue.
I think we concocted one proof-of-concept kpatch where we created shadow
variables "in-flight", that is, we patched a routine that operated on
the parent object and created a shadow variable if one did not already
exist. That in-flight patch was for a single function and we knew it
would never be called concurrently for the same parent object. tl;dr =
kpatch never worried about existing shadow <obj, id>.
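For reference, recast against the klp_shadow_get_or_attach() proposed in
this series, that in-flight pattern would look something like the sketch
below (struct foo, FOO_COUNTER and the patched function are all made up
for illustration):

#define FOO_COUNTER 1

void livepatch_foo_do_work(struct foo *f)
{
        int counter_fallback = 0;
        int *counter;

        /*
         * Attach the shadow variable the first time we see this parent
         * object; later calls just get the existing shadow data back.
         */
        counter = klp_shadow_get_or_attach(f, FOO_COUNTER,
                        &counter_fallback, sizeof(counter_fallback),
                        GFP_ATOMIC);
        if (counter)
                (*counter)++;

        /* ... original foo_do_work() logic ... */
}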
> I would do WARN() in klp_shadow_attach() when the variable
> already existed and return NULL. Of course it might be innocent
> duplication. But it might mean that someone else is using another
> variable of the same name but with different content. klp_shadow_get()
> would then return the same variable for two different purposes.
> Then the whole system might end like a glass on a stony floor.
What do you think of expanding the API to include each of the cases
outlined above? Something like:
1 - klp_attach = allocate and add a unique <obj, id> to the hash,
duplicates return NULL and a WARN
2 - klp_get_or_attach = return <obj, id> if it already exists,
otherwise allocate a new one
3 - klp_get_or_update = update and return <obj, id> if it already
exists, otherwise allocate a new one
IMHO, cases 1 and 3 are the most intuitive, so maybe case 2 should be
dropped. Since you suggested adding klp_get_or_attach(), what do you
think?
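Roughly, the prototypes I'm picturing (signatures mirror the v2
klp_shadow_attach() arguments; names are just placeholders and nothing
here is final):

/* 1: add a unique <obj, id>; WARN and return NULL on duplicates */
void *klp_shadow_attach(void *obj, unsigned long num, void *new_data,
                        size_t new_size, gfp_t gfp_flags);

/* 2: return existing <obj, id> data, otherwise allocate a new one */
void *klp_shadow_get_or_attach(void *obj, unsigned long num, void *new_data,
                               size_t new_size, gfp_t gfp_flags);

/* 3: update and return existing <obj, id> data, otherwise allocate one */
void *klp_shadow_get_or_update(void *obj, unsigned long num, void *new_data,
                               size_t new_size, gfp_t gfp_flags);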
>
>> +        if (lock)
>> +                spin_unlock_irqrestore(&klp_shadow_lock, flags);
>> +
>> +        return shadow->new_data;
>> +}
>
> Otherwise, I rather like the API. Thanks a lot for adding
> klp_shadow_get_or_attach().
>
> I did not comment things that were already discussed in
> other threads.
klp_shadow_get_or_attach() looks to be really useful in concurrent
situations, especially cases where we'd like to do in-flight shadow
variable creation.
Appreciate the comments as always.
-- Joe