Subject: Re: [LibreQoS] In BPF pping - so far
From: Robert Chacón
Date: Sat, 15 Oct 2022 20:26:32 -0600
To: Herbert Wolverson
Cc: libreqos@lists.bufferbloat.net

Hey Herbert,

Wow. Awesome work! How exciting. We may finally get highly scalable TCP latency tracking in LibreQoS and BracketQoS.
Regarding how we receive the data, I suppose whatever is most efficient and scalable for networks with high subscriber counts.
In v1.1 we were just parsing some data from the console output:
- rtt1
- IP address 1
- IP address 2
I am a big fan of having some sort of JSON structure to pull info from.
What do you recommend here for optimal efficiency?
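For example, newline-delimited JSON records would be trivial for us to consume on the LibreQoS side. Just a sketch of what I mean - the field names, units, and line-per-record framing here are only my guess, not a settled schema:

    import json
    import sys

    # Hypothetical ND-JSON stream: one RTT sample per line, e.g.
    #   {"local_ip": "100.64.0.21", "remote_ip": "8.8.8.8", "rtt_ms": 23.4}
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        sample = json.loads(line)
        # LibreQoS would aggregate per subscriber here; this just prints.
        print(sample["local_ip"], sample["remote_ip"], sample["rtt_ms"])

Something line-oriented like that stays easy to parse incrementally, but if another format is meaningfully cheaper for you to emit, we can work with that too.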

Thanks,
Robert<= /div>

On Sat, Oct 15, 2022 at 7:59 PM Herbert Wolverson via LibreQoS <libreqos@lists.bufferbloat.n= et> wrote:
> Hey,
>
> I've had some pretty good success with merging xdp-pping ( https://github.com/xdp-project/bpf-examples/blob/master/pping/pping.h ) into xdp-cpumap-tc ( https://github.com/xdp-project/xdp-cpumap-tc ).
>
> I ported over most of the xdp-pping code, and then changed the entry point and packet parsing code to make use of the work already done in xdp-cpumap-tc (it's already parsed a big chunk of the packet, no need to do it twice). Then I switched the maps to per-cpu maps, and had to pin them - otherwise the two tc instances don't properly share data. Right now, output is just stubbed - I've still got to port the perfmap output code. Instead, I'm dumping a bunch of extra data to the kernel debug pipe, so I can see roughly what the output would look like.
>
> With debug enabled and just logging I'm now getting about 4.9 Gbits/sec on single-stream iperf between two VMs (with a shaper VM in the middle). :-)
>
> So my question: how would you prefer to receive this data? I'll have to write a daemon that provides userspace control (periodic cleanup as well as reading the performance stream), so the world's kinda our oyster. I can stick to Kathie's original format (and dump it to a named pipe, perhaps?), a condensed format that only shows what you want to use, an efficient binary format if you feel like parsing that...
>
> (I'll post some code soon, getting sleepy)
>
> Thanks,
> Herbert
> _______________________________________________
> LibreQoS mailing list
> LibreQoS@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/libreqos
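
On the efficient binary format you mention above: parsing that from Python wouldn't be a problem for us either. A rough sketch of the consumer side, assuming (purely for illustration, not anything we've agreed on) a fixed-size record of two IPv4 addresses plus an RTT in microseconds:

    import socket
    import struct

    # Hypothetical fixed-size record: src IPv4, dst IPv4, RTT in microseconds.
    RECORD = struct.Struct("!IIQ")  # 4 + 4 + 8 = 16 bytes, network byte order

    def parse_records(buf: bytes):
        """Yield (src_ip, dst_ip, rtt_us) tuples from a buffer of packed records."""
        for offset in range(0, len(buf) - RECORD.size + 1, RECORD.size):
            src, dst, rtt_us = RECORD.unpack_from(buf, offset)
            yield (socket.inet_ntoa(struct.pack("!I", src)),
                   socket.inet_ntoa(struct.pack("!I", dst)),
                   rtt_us)

    # Example with one made-up record:
    example = RECORD.pack(0x0A000001, 0x08080808, 23400)
    for rec in parse_records(example):
        print(rec)  # ('10.0.0.1', '8.8.8.8', 23400)

So don't worry about keeping it human-readable on our account; whatever is cheapest for your daemon to emit works for us.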

--
Robert Chac=C3=B3n
CEO | JackRabbit Wireless LLC