
Re: [gn4-3-wp6-t1-wb-RARE] new features --- experimenting


  • From: mc36 <>
  • To: <>
  • Subject: Re: [gn4-3-wp6-t1-wb-RARE] new features --- experimenting
  • Date: Wed, 3 Feb 2021 13:53:12 +0100

hi,

first of all, i upgraded my dataplane tester vm to sde931 to see if the tests
still pass and whether new warnings appear, and i can tell you: no surprise
here... then i upgraded my stordis to the new sde, bsp, sal and bios, and
pulled the latest rare/freertr to have the label switched multicast
feature... then i started experimenting, first with the fun part: playing
music in 2 of my rooms... for it to happen seamlessly, i reverted my edges
from bier to mldp (1,2)... then i started a vlc receiver on one of them, and
also started streaming with vlc from my notebook, and the music started
playing in the remote room... then i started the receiver in the local room
and the music started playing locally too... i recorded this experiment, you
can find it in our slack channel (0)... :)
then i started inspecting the tables... in the mroutes i saw that the vlcs
joined with igmp (3,4), which then got translated to mldp (5,6) on the
routers that received the igmp. these joins arrived at my stordis (7), which
did not have any mroutes (8), as it's label switched multicast and it had no
local receivers... then the stordis asked for the stream from my notebook (9)
in mldp, and that one found out that the source is local, so it reconstructed
(10) the mroute. when i tore down one listener, that part of the replication
stopped as expected, as vlc signaled a prune in igmp, which ended up as an
mldp label withdrawal message. when i restarted the vlc, the stream
reappeared.
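
(as a side note, the opaque value visible in (5,6,7,9) below is the rfc6826
transit ipv4 source tlv: type 3, length 8, then the source and the group
from the igmp join. a minimal python sketch, with a made-up function name,
that decodes it:)

def decode_opaque(hexbytes):
    # parse "03 00 08 0a 08 ff 01 e8 02 03 02" style opaque values
    b = bytes(int(x, 16) for x in hexbytes.split())
    typ = b[0]                               # tlv type, 3 = transit ipv4 source
    length = int.from_bytes(b[1:3], "big")   # tlv length, 8 = two ipv4 addresses
    src = ".".join(str(x) for x in b[3:7])   # multicast source address
    grp = ".".join(str(x) for x in b[7:11])  # multicast group address
    return typ, length, src, grp

print(decode_opaque("03 00 08 0a 08 ff 01 e8 02 03 02"))
# prints (3, 8, '10.8.255.1', '232.2.3.2'), matching the mroutes in (3,4)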

after this fun part, i started "iperf -u -T 128 -c 232.2.3.2 -i1 -t999 -b
400m" to have something in the counters... on my notebook (11) you can see
the huge amount of bytes received on sdn999, which got mpls encapped and then
unicasted to sdn1.158 toward the core (12), which replicated it to the
receivers (13,14), which in turn mpls decapped and mrouted it to the local
vlcs on their sdn999s. the label entries show that all this happened in the
p4demu (15,16) forwarders in the case of both the sender and the receivers...
in the case of tofino, it happened in the asic, driven by our code...
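
(to reproduce the sender without iperf, a minimal python sketch like the one
below would do; port 5001 is just iperf's usual default, and the 400m pacing
is omitted:)

import socket, time

# plain udp datagrams to the ssm group, with ttl 128 like iperf's -T flag
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 128)
payload = bytes(1400)  # dummy payload
while True:
    sock.sendto(payload, ("232.2.3.2", 5001))
    time.sleep(0.001)  # crude pacing, nowhere near 400mbps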

so the final conclusion is that igmp to mldp, and back, works fine on
rare... but this achievement is basically just exporting the so called
duplicated label, which is also used in mp2mp mldp or p2mp te, and for
flooding the bum traffic of a big vpls... and all of these are covered by
interop tests, so all of these are believed to interwork with the big
vendor implementations...

now i'll let it run for a while (and enjoy some music too :) then i'll
proceed to bier... that will be much more fun :)

regards,
cs





0:
https://app.slack.com/client/TFF0TCCBE/CFF0TDB1S



1:
player#show config-differences
interface template1
no ipv4 pim join-source loopback0
no ipv4 pim bier-tunnel 199
no ipv4 pim enable
ipv4 multicast mldp-enable
exit

player#

2:
mediapc#show config-differences
interface template1
no ipv4 pim join-source loopback0
no ipv4 pim bier-tunnel 80
no ipv4 pim enable
ipv4 multicast mldp-enable
exit

mediapc#



3:
player#show ipv4 mroute inet
source group interface upstream targets
10.8.255.1 232.2.3.2 sdn1.162 10.1.1.218 sdn999

player#



4:
mediapc#show ipv4 mroute inet
source group interface upstream targets
10.8.255.1 232.2.3.2 sdn1.159 10.1.1.230 sdn999

mediapc#
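
(what the vlcs do to produce these mroutes is essentially an igmpv3 source
specific join; a rough python equivalent on linux, where 39 is the
IP_ADD_SOURCE_MEMBERSHIP option that not every python build exports:)

import socket

# join (10.8.255.1, 232.2.3.2) like vlc does; on linux the
# struct ip_mreq_source layout is multiaddr, interface, sourceaddr
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5001))
mreq = (socket.inet_aton("232.2.3.2") +
        socket.inet_aton("0.0.0.0") +
        socket.inet_aton("10.8.255.1"))
sock.setsockopt(socket.IPPROTO_IP, 39, mreq)  # IP_ADD_SOURCE_MEMBERSHIP
data, peer = sock.recvfrom(2048)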


5:
player#show ipv4 ldp inet mpdatabase
type root opaque uplink peers
p2mp 10.10.10.11 03 00 08 0a 08 ff 01 e8 02 03 02 10.1.1.218 local 745011/10.1.1.218/-1

player#


6:
mediapc#show ipv4 ldp inet mpdatabase
type root opaque uplink peers
p2mp 10.10.10.11 03 00 08 0a 08 ff 01 e8 02 03 02 10.1.1.230 local 181881/10.1.1.230/-1

mediapc#


7:
core#show ipv4 ldp inet mpdatabase
type root opaque uplink peers
p2mp 10.10.10.11 03 00 08 0a 08 ff 01 e8 02 03 02 10.1.1.233 null/10.1.1.217/745011 null/10.1.1.229/181881 897453/10.1.1.233/-1

core#



8:
core#show ipv4 mroute inet
source group interface upstream targets

core#




9:
noti#show ipv4 ldp inet mpdatabase
type root opaque uplink peers
p2mp 10.10.10.11 03 00 08 0a 08 ff 01 e8 02 03 02 null null/10.1.1.234/897453

noti#



10:
noti#show ipv4 mroute inet
source group interface upstream targets
10.8.255.1 232.2.3.2 sdn999 10.8.255.1 label

noti#




11:
noti#show interfaces hwsummary
interface state tx rx drop
hairpin11 up 0 0 0
hairpin12 up 0 0 0
hairpin12.14 up 0 0 0
hairpin21 up 0 0 0
hairpin22 up 0 0 0
hairpin22.21 up 0 0 0
sdn1 up 39993723845 16005295117 25033270
sdn1.158 up 36782321097 18415343 0
sdn1.170 up 2221897949 17142216 0
sdn1.171 up 3365235 2957294 0
sdn1.174 up 3590599 2347375 0
sdn1.175 up 520945677 1098701478 0
sdn1.176 up 6522646 14835514267 0
sdn2 up 769072 0 0
sdn901 up 28055076 20153137 1102
sdn999 up 15805971261 39650673696 0

noti#



12:
core#show interfaces hwsummary
interface state tx rx drop
bundle1 up 0 0 0
bundle1.158 up 16499360 40081264784 0
bundle1.159 up 38547036873 20052199 0
bundle1.160 up 6475002 2563272786 0
bundle1.161 up 33591461 122437688 0
bundle1.162 up 38568836594 72343661 0
bundle1.163 up 88521117 1926939 0
bundle1.164 up 4341509841 15405318 0
sdn12 up 0 3864819 0
sdn2 up 0 4555145 0
sdn4 up 0 3061811 0
sdn6 up 0 6337853 0

core#




13:
mediapc#show interfaces hwsummary
interface state tx rx drop
sdn1 up 53621262 40856761704 24575252
sdn1.159 up 20388636 40792226990 0
sdn1.174 up 2439102 3289845 0
sdn1.178 up 2530457 3597948 0
sdn1.180 up 2441566 3261414 0
sdn1.183 up 468802 0 0
sdn1.184 up 658514 2172692 0
sdn1.196 up 25086749 22317023 0
sdn901 up 1680825 1700639 0
sdn902 up 2205144 2269010 0
sdn903 up 1738535 1758902 0
sdn999 up 40571419649 627037 0

mediapc#





14:
player#show interfaces hwsummary
interface state tx rx drop
dialer902 up 0 19160107 0
hairpin91 up 83488870 0 0
hairpin92 up 2223152 0 0
hairpin92.22 up 2223152 77568452 0
sdn1 up 133700253 41217064316 24141347
sdn1.162 up 70305991 41081305653 0
sdn1.171 up 3222893 3450460 0
sdn1.172 up 3220830 3540967 0
sdn1.173 up 3728546 2395474 0
sdn1.177 up 9074131 60527805 0
sdn1.180 up 3709836 2394909 0
sdn1.182 up 39065345 34069433 0
sdn2 up 10233201 101669811 1532
sdn901 up 11816899 13112712 0
sdn902 up 28978119 19160203 3476
sdn903 up 1695070 1705459 0
sdn999 up 40908003982 1396850 0

player#




15:
player#show mpls forwarding 745011
category value
label 745011
key 5-vrf mp2mp
working true
forwarder inet:4
interface null
nexthop null
remote label unlabelled
need local true
duplicated 0
pwe iface null
pwe del 0
pwe add n/a
counter tx=0(0) rx=0(0) drp=0(0)
hardware counter tx=0(0) rx=44423220854(29229562) drp=0(0)

player#



16:
mediapc#show ipv4 ldp inet mpdatabase
type root opaque uplink peers
p2mp 10.10.10.11 03 00 08 0a 08 ff 01 e8 02 03 02 10.1.1.230 local 181881/10.1.1.230/-1

mediapc#show mpls forwarding 181881
category value
label 181881
key 5-vrf mp2mp
working true
forwarder inet:4
interface null
nexthop null
remote label unlabelled
need local true
duplicated 0
pwe iface null
pwe del 0
pwe add n/a
counter tx=0(0) rx=0(0) drp=0(0)
hardware counter tx=0(0) rx=44736174336(29433189) drp=0(0)

mediapc#

On 2/2/21 7:19 PM, mc36 wrote:
hi,
thanks for the wise words... hopefully you'll soon have the opportunity to
tell them some good news... :)
for now, we have mldp arriving on tofino too, so it's on all the platforms,
as indicated in the fresh test runs!
first of all, i'll move to the new sde931, then i'll play a bit with the
mldp... but after that, i'll start adding bier step by step... first the
dpdk, then the bmv2, finally the tofino...
regards,
cs

floui: please update the feature matrix on the wiki!


On 2/2/21 5:13 PM, Tim Chown wrote:
Implementing bier is great.  There are a few ietf-ers I know who would
be interested to hear :)

On 2 Feb 2021, at 07:48, mc36 <> wrote:

hi,
please find attached the new test runs. the news is that bmv2
got the mldp core and edge features.
now tofino is to follow...
regards,
cs


On 2/1/21 10:57 AM, mc36 wrote:
hi,
i just finished (1) the huge refactoring described in my previous mail!
please find attached the fresh test runs with all the dataplanes; the
p4 based ones now use both ingress and egress pipes, which was a prerequisite
to continue adding nexthop based multicast features to these...
so for now, i'll continue by clearing the 'not applicable' marks from the p4
results of the last few test runs...
regards,
cs
https://bitbucket.software.geant.org/projects/RARE/repos/rare/commits/05e54cb91a2f5b3ede7f0e0459109b81c689fe47
On 1/30/21 8:07 AM, mc36 wrote:
hi,

please find attached the latest test runs with bmv2...
no new features but the bmv2 p4 code got that huge [1]
refactoring that i described in the previous mail.
that is, the decapsulation & routing decision happen
in ingress and the encapsulation happens in egress.

now i'll proceed with the tofino code too; it'll be a
bit more tricky, as that guy has two completely separate
stages, each with its own parser, match-actions and deparser,
whereas bmv2 shares all the metadata and headers
between the stages...

and after that, when everything passes again on tofino too,
i'll start adding the newest multicast feature, the one that
triggered this whole rework, to both p4 codes: the lsm edge & core...
(the dpdk code already has it all, so at least i see it working :)

and when i'm done with the lsm, i'll proceed to bier (not
the beer but [2] :), which could be a game-changer for
those who use multicast, because it fully eliminates tree
building in the core elements, and as far as
i know, we'll be the first ones who'll have it in hw...
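
(to see why no tree state is needed: a toy python sketch of the rfc8279
forwarding loop, where the bift maps each destination bit to a neighbor and
its forwarding bitmask; the table contents here are made up:)

def bier_forward(bitstring, bift, send):
    # replicate purely from the packet's bitstring: for each set bit,
    # send a copy carrying only the bits that neighbor serves, then
    # clear those bits so no neighbor gets the packet twice
    remaining = bitstring
    bit = 1
    while remaining:
        if remaining & bit:
            nbr, fbm = bift[bit]
            send(nbr, remaining & fbm)
            remaining &= ~fbm
        bit <<= 1

# two receivers, bits 0b01 and 0b10, reachable via neighbors a and b
bift = {0b01: ("a", 0b01), 0b10: ("b", 0b10)}
bier_forward(0b11, bift, lambda nbr, bs: print(nbr, bin(bs)))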

regards,
cs

1:
https://github.com/frederic-loui/RARE/commit/dfb3ff2f3d52dc58f6c38d2b2ae12ed74cc10302


2: https://tools.ietf.org/html/rfc8279

On 1/28/21 9:02 PM, mc36 wrote:
hi,
please find attached the fresh runs with dpdk.
the news is that the label switched multicast features arrived.
this basically means mldp p2mp, mldp mp2mp, rsvp-te p2mp,
pim/igmp-mldp interworking, mldp based mvpn and friends...

regarding the bmv2 and tofino dataplane, there was a long
conversation about it at the intel community (1), and finally
we concluded that we should move away from the current
ingress-pipe-only model and do the encapsulation
exclusively in the egress pipeline...
it'll free up some space in the ingress for more
simultaneous features (or bigger lookup tables)
and will provide us the flexibility needed for
the lsm. the tricky part here is that for pure
multicast, there is no nexthop involved in the
flooding, and as a quick hack, i replicated the
vlan-out table to the egress... but in the case of lsm,
we'll also need the nexthop rewrite info, because lsm
is basically unicasted on the link; and since i
want to get this done cleanly too, nothing is left
but to move everything nexthop related to the egress...
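
(roughly, the split being described looks like this; a toy python sketch
with made-up table and field names, not the real p4 code:)

def ingress(pkt, mroute):
    # routing decision only: look up (source, group) and return the list
    # of copies for the replication engine to make
    return mroute[(pkt["src"], pkt["grp"])]

def egress(pkt, copy):
    # per-copy encapsulation; an lsm copy also gets the nexthop rewrite,
    # since on the wire it is a unicast mpls frame toward that neighbor
    out = dict(pkt)
    if copy["kind"] == "lsm":
        out["label"] = copy["label"]
        out["dmac"] = copy["nexthop_mac"]
    out["port"] = copy["port"]
    return out

pkt = {"src": "10.8.255.1", "grp": "232.2.3.2"}
mroute = {("10.8.255.1", "232.2.3.2"): [
    {"kind": "lsm", "label": 745011, "nexthop_mac": "00:11:22:33:44:55", "port": 1},
    {"kind": "ip", "port": 2},
]}
for copy in ingress(pkt, mroute):
    print(egress(pkt, copy))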
regards,
cs

1: https://community.intel.com/t5/Intel-Connectivity-Research/ingress-vs-egress-processing/m-p/1249943#M2025




