Message-ID: <20230117121839.vnubhcrlms7pt2ab@techsingularity.net>
Date: Tue, 17 Jan 2023 12:18:39 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Hillf Danton <hdanton@...a.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Linux-RT <linux-rt-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] locking/rwbase: Prevent indefinite writer starvation
On Tue, Jan 17, 2023 at 06:50:31PM +0800, Hillf Danton wrote:
> On Tue, 17 Jan 2023 08:38:17 +0000 Mel Gorman <mgorman@...hsingularity.net>
> > +/*
> > + * Allow reader bias with a pending writer for a minimum of 4ms or 1 tick.
> > + * The granularity is not exact as the lowest bit in rwbase_rt->waiter_blocked
> > + * is used to detect recent rt/dl tasks taking a read lock.
> > + */
> > +#define RW_CONTENTION_THRESHOLD (HZ/250+1)
> > +
> > +static void __sched update_dlrt_reader(struct rwbase_rt *rwb)
> > +{
> > +        /* No update required if dl/rt tasks already identified. */
> > +        if (rwb->waiter_blocked & 1)
> > +                return;
> > +
> > +        /*
> > +         * Record a dl/rt task acquiring the lock for read. This may result
> > +         * in indefinite writer starvation but dl/rt tasks should avoid such
> > +         * behaviour.
> > +         */
> > +        if (dl_task(current) || rt_task(current)) {
> > +                struct rt_mutex_base *rtm = &rwb->rtmutex;
> > +                unsigned long flags;
> > +
> > +                raw_spin_lock_irqsave(&rtm->wait_lock, flags);
> > +                rwb->waiter_blocked |= 1;
> > +                raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
> > +        }
> > +}
> > +
> > +/* rtmutex->wait_lock must be held. */
> > +static void __sched set_writer_blocked(struct rwbase_rt *rwb)
> > +{
> > +        /*
> > +         * Lowest bit preserved to identify recent rt/dl tasks acquiring
> > +         * the lock for read so guarantee at least one tick delay.
> > +         */
> > +        rwb->waiter_blocked |= (jiffies + 2) & ~1UL;
> > +}
> > +
> > +static bool __sched rwbase_allow_reader_bias(struct rwbase_rt *rwb)
> > +{
> > +        /*
> > +         * Allow reader bias if a dl or rt task took the lock for read
> > +         * since the last write unlock. Such tasks should be designed
> > +         * to avoid heavy writer contention or indefinite starvation.
> > +         */
> > +        if (rwb->waiter_blocked & 1)
> > +                return true;
>
> This true opens a window for two tons of readers, no?
I don't understand the question or what you mean by "two tons of readers".
If you mean that the threshold is not precise, that is noted earlier in a
comment:
* The granularity is not exact as the lowest bit in rwbase_rt->waiter_blocked
* is used to detect recent rt/dl tasks taking a read lock.
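
To make the bit sharing concrete, here is a minimal userspace sketch (my
own illustration rather than the kernel code; jiffies is mocked as a plain
counter and the wait_lock locking is elided). Bit 0 of waiter_blocked
flags a recent dl/rt reader and the remaining bits carry the blocked-writer
timestamp, which is why the timestamp can be off by a jiffy:

#include <stdio.h>

/* Userspace sketch of the waiter_blocked encoding; not kernel code. */
static unsigned long jiffies = 1000;    /* mocked tick counter */
static unsigned long waiter_blocked;    /* bit 0: recent dl/rt reader */

static void update_dlrt_reader(void)
{
        waiter_blocked |= 1;            /* flag the dl/rt reader */
}

static void set_writer_blocked(void)
{
        /* +2 guarantees at least one tick; &~1UL leaves bit 0 alone */
        waiter_blocked |= (jiffies + 2) & ~1UL;
}

int main(void)
{
        update_dlrt_reader();
        set_writer_blocked();
        printf("waiter_blocked=%lu (dl/rt bit=%lu, stamp=%lu)\n",
               waiter_blocked, waiter_blocked & 1,
               waiter_blocked & ~1UL);
        return 0;
}

With the values above, waiter_blocked ends up as 1003: the timestamp bits
read back as 1002 while bit 0 still records that a dl/rt reader was seen.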
> > +
> > +        /*
> > +         * Allow reader bias unless a writer has been blocked for more
> > +         * than RW_CONTENTION_THRESHOLD jiffies.
> > +         */
> > +        return jiffies - rwb->waiter_blocked < RW_CONTENTION_THRESHOLD;
>
> Why pure 4ms deadline fails to work without taking care of dlrt tasks?
>
I don't understand the question. For DL/RT tasks taking the lock for read,
indefinite writer starvation is still possible. It is assumed that DL/RT
tasks take care to avoid that scenario. For all other tasks, 4ms is an
arbitrary cutoff so that forward progress is guaranteed.
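
For reference, the cutoff is not exactly 4ms on every configuration. A
quick userspace check of the macro's arithmetic (my own sketch, reusing
the HZ/250+1 definition of RW_CONTENTION_THRESHOLD from the patch) for
common CONFIG_HZ values:

#include <stdio.h>

/* Evaluate HZ/250+1 (RW_CONTENTION_THRESHOLD) for common HZ values. */
int main(void)
{
        const unsigned long hz_values[] = { 100, 250, 300, 1000 };

        for (int i = 0; i < 4; i++) {
                unsigned long hz = hz_values[i];
                unsigned long threshold = hz / 250 + 1;

                printf("HZ=%-4lu threshold=%lu jiffies (~%lu ms)\n",
                       hz, threshold, threshold * 1000 / hz);
        }
        return 0;
}

This prints roughly 10ms at HZ=100, 8ms at HZ=250 and 5ms at HZ=1000,
i.e. always at least one tick and in the milliseconds range, which is
all an arbitrary cutoff needs to be.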
> Given it would take 88 seconds to complete the test, is it going to
> take 100 seconds or more for the 4ms deadline?
It took 88 seconds to complete the test. Without the patch, the test
never finishes; it is eventually killed after a timeout and marked as a
failure. Actual time to completion will vary depending on the machine,
the number of CPUs and the speed of storage. The point isn't how fast
the test completes; the point is that it completes successfully.
--
Mel Gorman
SUSE Labs