lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <20221012095336.6km44bk3e2wi5cy6@skbuf>
Date: Wed, 12 Oct 2022 09:53:37 +0000
From: Vladimir Oltean <vladimir.oltean@....com>
To: David Ahern <dsahern@...nel.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Stephen Hemminger <stephen@...workplumber.org>,
	Vinicius Costa Gomes <vinicius.gomes@...el.com>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	Cong Wang <xiyou.wangcong@...il.com>,
	Jiri Pirko <jiri@...nulli.us>
Subject: Re: [PATCH v2 iproute2-next 2/2] taprio: support dumping and setting per-tc max SDU

On Sun, Oct 09, 2022 at 02:07:37PM -0600, David Ahern wrote:
> On 10/4/22 6:00 AM, Vladimir Oltean wrote:
> > diff --git a/tc/q_taprio.c b/tc/q_taprio.c
> > index e3af3f3fa047..45f82be1f50a 100644
> > --- a/tc/q_taprio.c
> > +++ b/tc/q_taprio.c
> > @@ -151,13 +151,32 @@ static struct sched_entry *create_entry(uint32_t gatemask, uint32_t interval, ui
> >  	return e;
> >  }
> >  
> > +static void add_tc_entries(struct nlmsghdr *n,
> > +			   __u32 max_sdu[TC_QOPT_MAX_QUEUE])
> > +{
> > +	struct rtattr *l;
> > +	__u32 tc;
> > +
> > +	for (tc = 0; tc < TC_QOPT_MAX_QUEUE; tc++) {
> > +		l = addattr_nest(n, 1024, TCA_TAPRIO_ATTR_TC_ENTRY | NLA_F_NESTED);
> > +
> > +		addattr_l(n, 1024, TCA_TAPRIO_TC_ENTRY_INDEX, &tc, sizeof(tc));
> > +		addattr_l(n, 1024, TCA_TAPRIO_TC_ENTRY_MAX_SDU,
> > +			  &max_sdu[tc], sizeof(max_sdu[tc]));
> > +
> > +		addattr_nest_end(n, l);
>
> Why the full TC_QOPT_MAX_QUEUE? the parse_opt knows the index of the
> last entry used.

It doesn't make a difference whether I send netlink attributes with
zeroes or not. I was thinking the code would be more future-proof for
other per-tc attributes. If I then add support for other per-tc stuff,
I'll have num_max_sdu_entries, num_other_stuff_entries, etc, and I'll
have to iterate up to the max() of those (or just up to
TC_QOPT_MAX_QUEUE), and add the nla only if i <= the num_*_entries of
the corresponding attribute. Anyway, I don't have plans for other
per-tc entries, so I'll make this change.
> > +	}
> > +}
> > +
> >  static int taprio_parse_opt(struct qdisc_util *qu, int argc,
> >  			    char **argv, struct nlmsghdr *n, const char *dev)
> >  {
> > +	__u32 max_sdu[TC_QOPT_MAX_QUEUE] = { };
> >  	__s32 clockid = CLOCKID_INVALID;
> >  	struct tc_mqprio_qopt opt = { };
> >  	__s64 cycle_time_extension = 0;
> >  	struct list_head sched_entries;
> > +	bool have_tc_entries = false;
> >  	struct rtattr *tail, *l;
> >  	__u32 taprio_flags = 0;
> >  	__u32 txtime_delay = 0;
> > @@ -211,6 +230,18 @@ static int taprio_parse_opt(struct qdisc_util *qu, int argc,
> >  				free(tmp);
> >  				idx++;
> >  			}
> > +		} else if (strcmp(*argv, "max-sdu") == 0) {
> > +			while (idx < TC_QOPT_MAX_QUEUE && NEXT_ARG_OK()) {
> > +				NEXT_ARG();
> > +				if (get_u32(&max_sdu[idx], *argv, 10)) {
> > +					PREV_ARG();
> > +					break;
> > +				}
> > +				idx++;
> > +			}
> > +			for ( ; idx < TC_QOPT_MAX_QUEUE; idx++)
> > +				max_sdu[idx] = 0;
>
> max_sdu is initialized to 0 and you have "have_tc_entries" to detect
> multiple options on the command line.

Can remove.
> > +			have_tc_entries = true;
> >  		} else if (strcmp(*argv, "sched-entry") == 0) {
> >  			uint32_t mask, interval;
> >  			struct sched_entry *e;
> > @@ -341,6 +372,9 @@ static int taprio_parse_opt(struct qdisc_util *qu, int argc,
> >  	addattr_l(n, 1024, TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION,
> >  		  &cycle_time_extension, sizeof(cycle_time_extension));
> >  
> > +	if (have_tc_entries)
> > +		add_tc_entries(n, max_sdu);
> > +
> >  	l = addattr_nest(n, 1024, TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST | NLA_F_NESTED);
> >  
> >  	err = add_sched_list(&sched_entries, n);
> > @@ -430,6 +464,59 @@ static int print_schedule(FILE *f, struct rtattr **tb)
> >  	return 0;
> >  }
> >  
> > +static void dump_tc_entry(__u32 max_sdu[TC_QOPT_MAX_QUEUE],
> > +			  struct rtattr *item, bool *have_tc_entries)
> > +{
> > +	struct rtattr *tb[TCA_TAPRIO_TC_ENTRY_MAX + 1];
> > +	__u32 tc, val = 0;
> > +
> > +	parse_rtattr_nested(tb, TCA_TAPRIO_TC_ENTRY_MAX, item);
> > +
> > +	if (!tb[TCA_TAPRIO_TC_ENTRY_INDEX]) {
> > +		fprintf(stderr, "Missing tc entry index\n");
> > +		return;
> > +	}
> > +
> > +	tc = rta_getattr_u32(tb[TCA_TAPRIO_TC_ENTRY_INDEX]);
> > +
> > +	if (tb[TCA_TAPRIO_TC_ENTRY_MAX_SDU])
> > +		val = rta_getattr_u32(tb[TCA_TAPRIO_TC_ENTRY_MAX_SDU]);
> > +
> > +	max_sdu[tc] = val;

The array dereference here can potentially go out of bounds with future
kernels, which might provide a TCA_TAPRIO_TC_ENTRY_INDEX past
TC_QOPT_MAX_QUEUE. I will add a check for this, and ignore it (or error
out?).

> > +
> > +	*have_tc_entries = true;
> > +}
> > +
> > +static void dump_tc_entries(FILE *f, struct rtattr *opt)
> > +{
> > +	__u32 max_sdu[TC_QOPT_MAX_QUEUE] = {};
> > +	bool have_tc_entries = false;
> > +	struct rtattr *i;
> > +	int tc, rem;
> > +
> > +	for (tc = 0; tc < TC_QOPT_MAX_QUEUE; tc++)
> > +		max_sdu[tc] = 0;
>
> max_sdu is initialized to 0 above when it is declared.

True.
> > +
> > +	rem = RTA_PAYLOAD(opt);
> > +
> > +	for (i = RTA_DATA(opt); RTA_OK(i, rem); i = RTA_NEXT(i, rem)) {
> > +		if (i->rta_type != (TCA_TAPRIO_ATTR_TC_ENTRY | NLA_F_NESTED))
> > +			continue;
> > +
> > +		dump_tc_entry(max_sdu, i, &have_tc_entries);
> > +	}
> > +
> > +	if (!have_tc_entries)
> > +		return;
> > +
> > +	open_json_array(PRINT_ANY, "max-sdu");
> > +	for (tc = 0; tc < TC_QOPT_MAX_QUEUE; tc++)
>
> you can know the max index so why not use it here?

Can do. The kernel will always report TC_QOPT_MAX_QUEUE entries anyway,
so it doesn't really have a practical effect right now. I'd like to
avoid using an array as temporary storage in the first place (see the
comment about the kernel potentially causing out-of-bounds access), but
I only want to call open_json_array() if there is at least one actual
per-tc entry present. I'll think about this some more.