Message-Id: <20190410003305.24646-6-vinicius.gomes@intel.com>
Date: Tue, 9 Apr 2019 17:33:04 -0700
From: Vinicius Costa Gomes <vinicius.gomes@...el.com>
To: netdev@...r.kernel.org
Cc: Vinicius Costa Gomes <vinicius.gomes@...el.com>, jhs@...atatu.com,
xiyou.wangcong@...il.com, jiri@...nulli.us, olteanv@...il.com,
timo.koskiahde@...ech.com, m-karicheri2@...com
Subject: [RFC net-next v1 5/6] taprio: Add support for frame-preemption
Frame preemption can be used to further reduce the latency of network
communications by allowing some kinds of traffic to be preempted by
higher priority traffic. This is a hardware-only feature.
Frame preemption is expressed in terms of transmission queues: if the
nth bit of the frame-preemption mask is set, traffic going through the
nth TX queue can be preempted by higher priority queues. This only has
an effect when offloading is enabled.
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@...el.com>
---
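For illustration only (not part of this patch), a minimal userspace
sketch of how such a mask could be built, here marking TX queues 0 and
1 as preemptible; the variable name and queue choices are examples:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t frame_preemption = 0;

            /* bit n set => traffic on TX queue n may be preempted */
            frame_preemption |= 1u << 0;    /* TX queue 0 preemptible */
            frame_preemption |= 1u << 1;    /* TX queue 1 preemptible */

            /* 0x3: queues 0 and 1 preemptible, the rest express-only */
            printf("frame-preemption mask = %#x\n", frame_preemption);

            return 0;
    }

The resulting u32 is what would be carried in the
TCA_TAPRIO_ATTR_FRAME_PREEMPTION netlink attribute introduced below.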
include/uapi/linux/pkt_sched.h | 1 +
net/sched/sch_taprio.c | 9 +++++++++
2 files changed, 10 insertions(+)
diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 8b2f993cbb77..a04df9a76864 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -1169,6 +1169,7 @@ enum {
TCA_TAPRIO_ATTR_ADMIN_SCHED, /* The admin sched, only used in dump */
TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME, /* s64 */
TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION, /* s64 */
+ TCA_TAPRIO_ATTR_FRAME_PREEMPTION, /* u32 */
__TCA_TAPRIO_ATTR_MAX,
};
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index 3807aacde26b..0a815700c9cc 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -46,6 +46,7 @@ struct sched_gate_list {
s64 cycle_time;
s64 cycle_time_extension;
s64 base_time;
+ u32 frame_preemption;
};
struct taprio_sched {
@@ -387,6 +388,7 @@ static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = {
[TCA_TAPRIO_ATTR_SCHED_CLOCKID] = { .type = NLA_S32 },
[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME] = { .type = NLA_S64 },
[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 },
+ [TCA_TAPRIO_ATTR_FRAME_PREEMPTION] = { .type = NLA_U32 },
};
static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
@@ -494,6 +496,9 @@ static int parse_taprio_schedule(struct nlattr **tb,
if (tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME])
new->cycle_time = nla_get_s64(tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]);
+ if (tb[TCA_TAPRIO_ATTR_FRAME_PREEMPTION])
+ new->frame_preemption = nla_get_u32(tb[TCA_TAPRIO_ATTR_FRAME_PREEMPTION]);
+
if (tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST])
err = parse_sched_list(
tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], new, extack);
@@ -971,6 +976,10 @@ static int dump_schedule(struct sk_buff *msg,
root->cycle_time_extension, TCA_TAPRIO_PAD))
return -1;
+ if (nla_put_u32(msg, TCA_TAPRIO_ATTR_FRAME_PREEMPTION,
+ root->frame_preemption))
+ return -1;
+
entry_list = nla_nest_start(msg, TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST);
if (!entry_list)
goto error_nest;
--
2.21.0