
rare-dev - Re: [rare-dev] mpolka in rare



Re: [rare-dev] mpolka in rare


  • From: mc36 <>
  • To: Cristina Klippel Dominicini <>, Rafael Silva Guimarães <>
  • Cc: "Moises R. N. Ribeiro" <>, Magnos Martinello <>,
  • Subject: Re: [rare-dev] mpolka in rare
  • Date: Sat, 19 Feb 2022 12:38:26 +0100

hi,
i just enabled mpolka on my homenet and i can tell you that it works as
expected... :)
i've taken the srindex from two nodes to show you that they're the same,
network wide...
(the last show is a graphviz export of the network, just in case you need it...)
((but you can log in to one of the nodes through the geant p4lab at 10.5.5.5 in the CORE vrf,
or at my dn42 looking glass by telnetting/sshing to dl.nop.hu with any user/pass...))

the next step will be to add the multi-loopback logic to the clntMpolka routeid
generator, but imho i'll proceed with the dataplanes first, i badly want to see
it at line rate... :)
the dataplane exporter part was already added yesterday:
https://github.com/mc36/freeRouter/commit/ef3fe4c7bca3ef4536c7ae8493ad45a3dcfa374c
regards,
cs


sid#ping 1.1.1.1 /vrf v1 /interface lo0 /multi
pinging 1.1.1.1, src=10.10.10.227, vrf=v1, cnt=5, len=64, tim=1000, gap=0, ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
!!!!!!!!!!
result=200%, recv/sent/lost/err=10/5/0/0, rtt min/avg/max/sum=4/10/16/5002, ttl min/avg/max=55/55/55
sid#
sid#show config-differences
interface tunnel1
no description
tunnel vrf v1
tunnel source loopback0
tunnel destination 1.1.1.111
tunnel domain-name 10.10.10.5 10.10.10.20 10.10.10.199 , 10.10.10.199 10.10.10.1 , 10.10.10.20 10.10.10.1 , 10.10.10.1 10.10.10.1 ,
tunnel mode mpolka
vrf forwarding v1
ipv4 address 1.1.1.2 255.255.255.252
no shutdown
no log-link-change
exit

sid#show ipv4 srindex v1
index  conn   prefix           peers                                                  bytes
10     false  10.10.10.1/32    20 25 26 27 40 50 80 110 181 199 200                   0
20     false  10.10.10.2/32    10 199 200                                             0
24     false  10.10.10.10/32   200                                                    0
25     false  10.1.11.0/32     10                                                     0
26     false  10.10.11.11/32   10                                                     0
27     false  10.10.10.26/32   10 29 31 32 33 40 180 181 190 191 197 210 220 230 240  0
29     false  10.10.10.29/32   27 33                                                  0
31     false  10.10.10.31/32   27 33                                                  0
32     false  10.10.10.24/32   27 33                                                  0
33     false  10.10.10.25/32   27 29 31 32 181 190 191 210 220 240                    0
39     false  ::/0                                                                    0
40     false  10.10.10.4/32    10 27 181                                              0
50     true   10.10.10.5/32    10 39 54 199 200                                       0
54     false  10.5.1.9/32      50                                                     0
80     false  10.10.10.8/32    10 199 200                                             0
110    false  10.10.10.11/32   10 199 200                                             0
180    false  10.10.10.18/32   27 181                                                 0
181    false  10.10.10.180/32  10 27 33 40 180                                        0
190    false  10.10.10.19/32   27 33 210                                              0
191    false  10.10.10.190/32  27 33                                                  0
193    false  10.1.11.198/32   199                                                    0
197    false  10.1.11.197/32   27 199                                                 0
199    false  10.10.10.199/32  10 20 50 80 110 193 197 200                            0
200    false  10.10.10.20/32   10 20 24 50 80 110 199                                 0
210    false  10.10.10.21/32   27 33 190                                              0
220    false  10.10.10.27/32   27 33                                                  0
230    false  10.26.26.2/32    27                                                     0
240    false  10.10.10.240/32  27 33                                                  0

sid#


noti#show ipv4 srindex inet
info userReader.cmdEnter:userReader.java:1032 command noti#show ipv4 srindex inet from local:telnet <loop> 23 -> 127.0.0.1 41836
2022-02-19 12:24:13
index | conn  | prefix          | peers                                                 | bytes
10    | true  | 10.10.10.1/32   | 20 25 26 27 40 50 80 110 181 199 200                  | 0+0
20    | false | 10.10.10.2/32   | 10 199 200                                            | 0+0
24    | false | 10.10.10.10/32  | 200                                                   | 0+0
25    | false | 10.1.11.0/32    | 10                                                    | 0+0
26    | false | 10.10.11.11/32  | 10                                                    | 0+0
27    | false | 10.10.10.26/32  | 10 29 31 32 33 40 180 181 190 191 197 210 220 230 240 | 0+0
29    | false | 10.10.10.29/32  | 27 33                                                 | 0+0
31    | false | 10.10.10.31/32  | 27 33                                                 | 0+0
32    | false | 10.10.10.24/32  | 27 33                                                 | 0+0
33    | false | 10.10.10.25/32  | 27 29 31 32 181 190 191 210 220 240                   | 0+0
39    | false | 10.10.10.227/32 | 50                                                    | 0+0
40    | false | 10.10.10.4/32   | 10 27 181                                             | 0+0
50    | false | 10.10.10.5/32   | 10 39 54 199 200                                      | 0+0
54    | false | 10.5.1.9/32     | 50                                                    | 0+0
80    | false | 10.10.10.8/32   | 10 199 200                                            | 0+0
110   | false | ::/0            |                                                       | 0+0
180   | false | 10.10.10.18/32  | 27 181                                                | 0+0
181   | false | 10.10.10.180/32 | 10 27 33 40 180                                       | 0+0
190   | false | 10.10.10.19/32  | 27 33 210                                             | 0+0
191   | false | 10.10.10.190/32 | 27 33                                                 | 0+0
193   | false | 10.1.11.198/32  | 199                                                   | 0+0
197   | false | 10.1.11.197/32  | 27 199                                                | 0+0
199   | true  | 10.10.10.199/32 | 10 20 50 80 110 193 197 200                           | 0+0
200   | true  | 10.10.10.20/32  | 10 20 24 50 80 110 199                                | 0+0
210   | false | 10.10.10.21/32  | 27 33 190                                             | 0+0
220   | false | 10.10.10.27/32  | 27 33                                                 | 0+0
230   | false | 10.26.26.2/32   | 27                                                    | 0+0
240   | false | 10.10.10.240/32 | 27 33                                                 | 0+0

noti#
noti#show ipv4 lsrp 1 graph
info userReader.cmdEnter:userReader.java:1032 command noti#show ipv4 lsrp 1 graph from local:telnet <loop> 23 -> 127.0.0.1 41836
2022-02-19 12:24:30
sfdp -Tpng > net.png << EOF
graph net {
//wifi
"wifi" -- "mchome" [weight=10] [taillabel="sdn1"]
"wifi" -- "10.1.11.0/32" [weight=0]
//mchome-demo
"mchome-demo" -- "mchome" [weight=10] [taillabel="ethernet11"]
"mchome-demo" -- "10.10.11.11/32" [weight=0]
//rr
"rr" -- "safe" [weight=10] [taillabel="ethernet93"]
"rr" -- "10.5.1.9/32" [weight=0]
"rr" -- "10.5.1.10/32" [weight=0]
//player-dn42
"player-dn42" -- "player" [weight=10] [taillabel="ethernet11"]
"player-dn42" -- "10.1.11.198/32" [weight=0]
//player
"player" -- "p4deb" [weight=33333] [taillabel="hairpin92.22"]
"player" -- "player" [weight=33333] [taillabel="hairpin82"]
"player" -- "10.1.11.197/32" [weight=0]
//mchome
"mchome" -- "wifi" [weight=9] [taillabel="sdn905"]
"mchome" -- "mchome-demo" [weight=9] [taillabel="sdn901"]
"mchome" -- "working" [weight=9] [taillabel="sdn2.189"]
"mchome" -- "parents" [weight=6] [taillabel="hairpin92.33"]
"mchome" -- "safe" [weight=9] [taillabel="sdn2.199"]
"mchome" -- "mediapc" [weight=9] [taillabel="sdn2.196"]
"mchome" -- "noti" [weight=9] [taillabel="sdn2.176"]
"mchome" -- "nas" [weight=9] [taillabel="sdn2.186"]
"mchome" -- "nas" [weight=9] [taillabel="sdn2.170"]
"mchome" -- "p4deb" [weight=5] [taillabel="hairpin82.23"]
"mchome" -- "vpn" [weight=38] [taillabel="hairpin72.15"]
"mchome" -- "player" [weight=9] [taillabel="sdn2.182"]
"mchome" -- "player" [weight=9] [taillabel="sdn2.157"]
"mchome" -- "0.0.0.0/0" [weight=1234]
"mchome" -- "10.10.10.1/32" [weight=0]
//working
"working" -- "mchome" [weight=9] [taillabel="sdn1.189"]
"working" -- "nas" [weight=11] [taillabel="sdn1.179"]
"working" -- "player" [weight=10] [taillabel="sdn1.173"]
"working" -- "10.10.10.2/32" [weight=0]
//parents
"parents" -- "mchome" [weight=5] [taillabel="hairpin92.33"]
"parents" -- "p4deb" [weight=5] [taillabel="hairpin82.24"]
"parents" -- "vpn" [weight=21] [taillabel="hairpin72.16"]
"parents" -- "0.0.0.0/0" [weight=1234]
"parents" -- "10.10.10.4/32" [weight=0]
//safe
"safe" -- "rr" [weight=10] [taillabel="sdn902"]
"safe" -- "mchome" [weight=9] [taillabel="sdn1.199"]
"safe" -- "nas" [weight=11] [taillabel="sdn1.185"]
"safe" -- "player" [weight=10] [taillabel="sdn1.172"]
"safe" -- "sid" [weight=1] [taillabel="sdn903"]
"safe" -- "10.10.10.5/32" [weight=0]
//mediapc
"mediapc" -- "mchome" [weight=9] [taillabel="sdn1.196"]
"mediapc" -- "nas" [weight=11] [taillabel="sdn1.178"]
"mediapc" -- "player" [weight=10] [taillabel="sdn1.180"]
"mediapc" -- "10.10.10.8/32" [weight=0]
//services
"services" -- "nas" [weight=10] [taillabel="ethernet91"]
"services" -- "10.10.10.10/32" [weight=0]
//noti
"noti" -- "mchome" [weight=9] [taillabel="sdn1.176"]
"noti" -- "nas" [weight=11] [taillabel="sdn1.175"]
"noti" -- "player" [weight=10] [taillabel="sdn1.171"]
"noti" -- "10.10.10.11/32" [weight=0]
//www
"www" -- "p4deb" [weight=15] [taillabel="tunnel2"]
"www" -- "p4deb" [weight=14] [taillabel="tunnel4"]
"www" -- "vpn" [weight=24] [taillabel="tunnel1"]
"www" -- "vpn" [weight=19] [taillabel="tunnel3"]
"www" -- "0.0.0.0/0" [weight=999999]
"www" -- "10.10.10.18/32" [weight=0]
//rtr1.c4e
"rtr1.c4e" -- "rtr2.c4e" [weight=10] [taillabel="tunnel1"]
"rtr1.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"rtr1.c4e" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"rtr1.c4e" -- "10.10.10.19/32" [weight=0]
//nas
"nas" -- "mchome" [weight=11] [taillabel="sdn2.186"]
"nas" -- "mchome" [weight=11] [taillabel="sdn2.170"]
"nas" -- "working" [weight=11] [taillabel="sdn2.179"]
"nas" -- "safe" [weight=11] [taillabel="sdn2.185"]
"nas" -- "mediapc" [weight=11] [taillabel="sdn2.178"]
"nas" -- "services" [weight=11] [taillabel="sdn901"]
"nas" -- "noti" [weight=11] [taillabel="sdn2.175"]
"nas" -- "player" [weight=11] [taillabel="sdn2.177"]
"nas" -- "player" [weight=11] [taillabel="sdn2.156"]
"nas" -- "10.10.10.20/32" [weight=0]
//rtr2.c4e
"rtr2.c4e" -- "rtr1.c4e" [weight=10] [taillabel="tunnel1"]
"rtr2.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"rtr2.c4e" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"rtr2.c4e" -- "10.10.10.21/32" [weight=0]
//snoopy.vhpc
"snoopy.vhpc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"snoopy.vhpc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"snoopy.vhpc" -- "10.10.10.24/32" [weight=0]
//nrpe.wdcvhpc
"nrpe.wdcvhpc" -- "rtr1.c4e" [weight=444444] [taillabel="tunnel19"]
"nrpe.wdcvhpc" -- "rtr2.c4e" [weight=444444] [taillabel="tunnel11"]
"nrpe.wdcvhpc" -- "snoopy.vhpc" [weight=444444] [taillabel="tunnel20"]
"nrpe.wdcvhpc" -- "p4deb" [weight=444444] [taillabel="tunnel22"]
"nrpe.wdcvhpc" -- "p4deb" [weight=444444] [taillabel="tunnel23"]
"nrpe.wdcvhpc" -- "snoopy.wdc" [weight=444444] [taillabel="tunnel18"]
"nrpe.wdcvhpc" -- "sniffer.vh" [weight=444444] [taillabel="tunnel21"]
"nrpe.wdcvhpc" -- "vpn" [weight=444444] [taillabel="bvi88.18"]
"nrpe.wdcvhpc" -- "sulinet-cpe.c4e" [weight=444444] [taillabel="tunnel13"]
"nrpe.wdcvhpc" -- "bmp.wdcvhpc" [weight=444444] [taillabel="tunnel14"]
"nrpe.wdcvhpc" -- "rare-cpe" [weight=444444] [taillabel="tunnel12"]
"nrpe.wdcvhpc" -- "0.0.0.0/0" [weight=1234]
"nrpe.wdcvhpc" -- "10.10.10.25/32" [weight=0]
//p4deb
"p4deb" -- "player" [weight=333333] [taillabel="hairpin12.22"]
"p4deb" -- "mchome" [weight=5] [taillabel="hairpin12.23"]
"p4deb" -- "parents" [weight=6] [taillabel="hairpin12.24"]
"p4deb" -- "www" [weight=15] [taillabel="tunnel17"]
"p4deb" -- "www" [weight=14] [taillabel="tunnel28"]
"p4deb" -- "rtr1.c4e" [weight=3] [taillabel="tunnel25"]
"p4deb" -- "rtr2.c4e" [weight=4] [taillabel="tunnel26"]
"p4deb" -- "snoopy.vhpc" [weight=4] [taillabel="tunnel27"]
"p4deb" -- "nrpe.wdcvhpc" [weight=3] [taillabel="tunnel11"]
"p4deb" -- "nrpe.wdcvhpc" [weight=4] [taillabel="tunnel12"]
"p4deb" -- "snoopy.wdc" [weight=3] [taillabel="tunnel16"]
"p4deb" -- "sniffer.vh" [weight=3] [taillabel="tunnel14"]
"p4deb" -- "vpn" [weight=20] [taillabel="tunnel20"]
"p4deb" -- "vpn" [weight=20] [taillabel="tunnel19"]
"p4deb" -- "sulinet-cpe.c4e" [weight=3] [taillabel="tunnel18"]
"p4deb" -- "bmp.wdcvhpc" [weight=3] [taillabel="tunnel15"]
"p4deb" -- "rare-cpe" [weight=3] [taillabel="tunnel29"]
"p4deb" -- "p4deb-rr" [weight=2] [taillabel="sdn4"]
"p4deb" -- "10.10.10.26/32" [weight=0]
//core
"core" -- "mchome" [weight=1] [taillabel="sdn47.164"]
"core" -- "working" [weight=1] [taillabel="sdn47.160"]
"core" -- "safe" [weight=1] [taillabel="sdn47.161"]
"core" -- "mediapc" [weight=1] [taillabel="sdn47.159"]
"core" -- "noti" [weight=1] [taillabel="sdn47.158"]
"core" -- "nas" [weight=1] [taillabel="sdn47.163"]
"core" -- "player" [weight=1] [taillabel="sdn47.162"]
"core" -- "10.10.10.28/32" [weight=0]
//snoopy.wdc
"snoopy.wdc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"snoopy.wdc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"snoopy.wdc" -- "10.10.10.29/32" [weight=0]
//sniffer.vh
"sniffer.vh" -- "nrpe.wdcvhpc" [weight=888888] [taillabel="tunnel1"]
"sniffer.vh" -- "p4deb" [weight=888888] [taillabel="tunnel2"]
"sniffer.vh" -- "10.10.10.31/32" [weight=0]
//vpn
"vpn" -- "mchome" [weight=38] [taillabel="bvi99.15"]
"vpn" -- "parents" [weight=23] [taillabel="bvi99.16"]
"vpn" -- "www" [weight=23] [taillabel="tunnel11"]
"vpn" -- "www" [weight=19] [taillabel="tunnel10"]
"vpn" -- "nrpe.wdcvhpc" [weight=23] [taillabel="bvi99.18"]
"vpn" -- "p4deb" [weight=20] [taillabel="tunnel12"]
"vpn" -- "p4deb" [weight=19] [taillabel="tunnel13"]
"vpn" -- "0.0.0.0/0" [weight=999999]
"vpn" -- "10.10.10.180/32" [weight=0]
//sulinet-cpe.c4e
"sulinet-cpe.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel2"]
"sulinet-cpe.c4e" -- "p4deb" [weight=10] [taillabel="tunnel1"]
"sulinet-cpe.c4e" -- "10.10.10.190/32" [weight=0]
//player
"player" -- "player-dn42" [weight=10] [taillabel="sdn901"]
"player" -- "player" [weight=33333] [taillabel="hairpin81"]
"player" -- "mchome" [weight=9] [taillabel="sdn1.182"]
"player" -- "mchome" [weight=9] [taillabel="sdn1.157"]
"player" -- "working" [weight=10] [taillabel="sdn1.173"]
"player" -- "safe" [weight=10] [taillabel="sdn1.172"]
"player" -- "mediapc" [weight=10] [taillabel="sdn1.180"]
"player" -- "noti" [weight=10] [taillabel="sdn1.171"]
"player" -- "nas" [weight=11] [taillabel="sdn1.177"]
"player" -- "nas" [weight=11] [taillabel="sdn1.156"]
"player" -- "10.10.10.199/32" [weight=0]
//sid
"sid" -- "safe" [weight=1] [taillabel="ethernet1"]
"sid" -- "10.10.10.227/32" [weight=0]
//bmp.wdcvhpc
"bmp.wdcvhpc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"bmp.wdcvhpc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"bmp.wdcvhpc" -- "10.10.10.240/32" [weight=0]
//rare-cpe
"rare-cpe" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
"rare-cpe" -- "p4deb" [weight=10] [taillabel="tunnel9"]
"rare-cpe" -- "10.10.10.27/32" [weight=0]
//p4deb-rr
"p4deb-rr" -- "p4deb" [weight=10] [taillabel="ethernet11"]
"p4deb-rr" -- "10.26.26.2/32" [weight=0]
}
EOF

noti#








On 2/18/22 18:35, mc36 wrote:
hi,
i go inline...
regards,
cs

On 2/18/22 17:53, Cristina Klippel Dominicini wrote:
Hi Csaba,

This is really great news \o/

I talked with the group and the design choices seem very good for an initial
prototype. Thank you very much! We are going to execute the test cases and
provide feedback :-D

Some initial questions:

tunnel domain-name 1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
This represents the links of the multicast tree and the syntax "1.1.1.4 1.1.1.4" indicates a leaf, right?
exactly... the general format is the following:
<encode for this address> <encode for this neighbor from the index table>+ ,
with the addition that if the neighbor address is the node itself, which is
obviously not in the index table, then it'll set bit0 to indicate
'also process locally'....

This representation is very good, because M-PolKA can represent structures
that are not exactly trees. For example, two branches that end in the same
leaf.
Example: If you have an extra link between v2 and v4, the representation of
the multipath would be:
tunnel domain-name 1.1.1.2 1.1.1.3 1.1.1.4, 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
Is that right?
yesss, exactly... you got it right, you can describe arbitrary trees, and
your above encoding should result in what you wanted to achieve....
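as an illustration of the above, a minimal, self-contained java sketch of the encoding (not the freerouter code: the addr2index/peersOf maps, the class name and the encodeGroup helper are made-up stand-ins for the srindex table, and the peer lists assume the extra v2-v4 link from the example):

import java.util.*;

public class MpolkaBitmaps {

    // toy stand-ins for the srindex table
    static Map<String, Integer> addr2index = new HashMap<>();      // loopback address -> sr index
    static Map<Integer, List<Integer>> peersOf = new HashMap<>();  // sr index -> ordered peer indexes

    // one comma separated group "<node> <neighbor>..." becomes one bitmap for that node:
    // bit0 means "also process locally", bit i+1 means "forward to the i-th ordered peer"
    static int encodeGroup(String group) {
        String[] part = group.trim().split(" +");
        List<Integer> peers = peersOf.get(addr2index.get(part[0]));
        int bitmap = 0;
        for (int i = 1; i < part.length; i++) {
            if (part[i].equals(part[0])) { bitmap |= 1; continue; }        // neighbor == self -> decap locally
            bitmap |= 1 << (peers.indexOf(addr2index.get(part[i])) + 1);   // +1 because bit0 is reserved
        }
        return bitmap;
    }

    public static void main(String[] args) {
        addr2index.put("1.1.1.2", 2); addr2index.put("1.1.1.3", 3);
        addr2index.put("1.1.1.4", 4); addr2index.put("1.1.1.5", 5);
        peersOf.put(2, List.of(3, 4));      // v2-v3 plus the extra v2-v4 link
        peersOf.put(3, List.of(2, 4, 5));
        peersOf.put(4, List.of(2, 3));
        peersOf.put(5, List.of(3));
        String domainName = "1.1.1.2 1.1.1.3 1.1.1.4 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5";
        for (String group : domainName.split(",")) {
            System.out.println(group.trim() + "  ->  " + Integer.toBinaryString(encodeGroup(group)));
        }
        // prints 110 for v2, 1100 for v3 (its 2nd and 3rd peers) and 1 for v4 and v5 (decap only)
    }
}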



  sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
I didn't know you had this multicast pin! Super cool! Does it send the ICMP
packet to all the leaves?

this command is basically the same pin(g) command, but we instruct it to wait
for multi(ple) responses within the timeout range...


for the encoding, i reserved bit0 to indicate that local processing is needed
(end of tunneling, decapping, etc); the rest of the bits indicate the need to
forward to the given peer in the srindex table, which, as an ordered list of
peers, must be identical on all the nodes that executed the shortest path first...
  the routeid seems to be correctly encoded as we find 6 (1st and 2nd
neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
I don't know if I understood correctly... These are bits from the output bitstream of the mod operation, right?
yesss...

So, if bitstream is 110 at v3, it will forward to 1st and 2nd neighbors (v4 and v5, in this example).
exactly....

But how does it correlate the neighbors with the srindex table (which
includes non-neighbors)?

just drop those examples, the output came from an intermediate state... :)
the correct encoding should have been 12 (1100), then we're addressing the
2nd and 3rd neighbors of v3...

Regarding the routing, what are the structures we are reusing? How does FreeRouter keep the list of neighbors and compute the routeid that will produce the correct output bitstream in each node? I will explore the commits.
so finally i kept the srindex fields, and, while doing the shortest path,
each index in that table got an ordered list of neighbor indexes... assuming
the link state database flooding finished completely, these must be identical
on each router participating in an igp... and, when freerouter constructs the
bitmap, it just converts the ips to indexes over this ordered neighbor index
table...
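to illustrate, a toy java sketch of that ordered neighbor index table (the class and variable names are made up, not the freerouter ones): it rebuilds the "peers" column of "show ipv4 srindex" from link-state adjacencies, and since every router runs this over the same flooded lsdb and keeps the list sorted, a given neighbor lands on the same bit position network wide:

import java.util.*;

public class SrIndexPeers {
    public static void main(String[] args) {
        // adjacency pairs (srindex, srindex) as they would come out of the lsdb
        int[][] lsdb = { {2, 3}, {3, 4}, {3, 5} };
        Map<Integer, SortedSet<Integer>> peers = new TreeMap<>();
        for (int[] adj : lsdb) {
            peers.computeIfAbsent(adj[0], k -> new TreeSet<>()).add(adj[1]);
            peers.computeIfAbsent(adj[1], k -> new TreeSet<>()).add(adj[0]);
        }
        peers.forEach((idx, lst) -> System.out.println(idx + " -> " + lst));
        // node 3 ends up with the ordered peer list [2, 4, 5], so v4 is its 2nd
        // peer and v5 its 3rd, which is what the 1100 bitmap above addresses
    }
}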


If we use that approach of configuring two (or more) nodeids when you have more than 16 igp peers, one needs to configure two (or more) loopbacks. Then, the pipeline would have to combine the bitstreams considering some ordering (the lower IP addr?). Also, it would have to check the number of available loopbacks that have mpolka enabled. Do you already have any plans for this?

agreed... so these sr indexes are bound to addresses, and addresses are bound
to nodes...
so one clearly sees that it has to emit multiple bitmaps encoded for a
single node...
right now, this check is not yet done.... the only thing i do right now is
that if i have to send to two interfaces (let's say v3 has a tunnel to v2 and
v4) then i can use two completely different routeids on the two different
interfaces...

regards,
cs


Best regards,
Cristina
________________________________________
From: mc36 <>
Sent: Thursday, 17 February 2022 19:06
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: Re: [rare-dev] mpolka in rare

hi,
i've just covered mpolka with some test cases:
https://github.com/mc36/freeRouter/commit/4caf6dc0657aade06d9cd38654b581e77465a971
now i'll wait for your feedback before continuing with the dataplanes...
regards,
cs


On 2/17/22 22:14, mc36 wrote:
hi,
sorry for the spam, but it forwards:

sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
pinging 3.3.3.2, src=1.1.1.2, vrf=v2, cnt=1111, len=1111, tim=11, gap=0, ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

result=200%, recv/sent/lost/err=2222/1111/0/0, rtt min/avg/max/sum=0/0/2/12402, ttl min/avg/max=255/255/255
sid#

the 200% success rate indicates that both v4 and v5 got the packets and they
responded...

this is the commit that made this happen:
https://github.com/mc36/freeRouter/commit/4a1d188521fa5a0fe8f8619c92f60dd44afa929e

regards,
cs




On 2/17/22 21:11, mc36 wrote:
hi,
after applying the attached config, i see the following:

sid(cfg-if)#show ipv4 srindex v2
index  prefix      peers  bytes
2      ::/0               0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v3
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      ::/0               0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v4
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      ::/0               0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v5
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      ::/0               0

sid(cfg-if)#
sid(cfg-if)#show mpolka routeid tunnel2
iface      hop      routeid
hairpin11  2.2.2.2  00 00 00 00 00 00 00 00 00 00 74 90 0f 96 e9 fd

index  coeff     poly   crc    equal
0      0001046a  13101  13101  true
1      0001046b  1732   1732   true
2      0001046d  2031   2031   true
3      00010473  6      6      true
4      00010475  1      1      true
5      0001047f  1      1      true
6      00010483  13881  13881  true
7      00010489  55145  55145  true
8      00010491  38366  38366  true
9      0001049d  11451  11451  true

sid(cfg-if)#

the topology is the following:

        v4
v2-v3<
        v5

the tunnel is configured to point to v3 and v4

for the encoding, i reserved bit0 to indicate that local processing is needed
(end of tunneling, decapping, etc); the rest of the bits indicate the need to
forward to the given peer in the srindex table, which, as an ordered list of
peers, must be identical on all the nodes that executed the shortest path first...

the routeid seems to be correctly encoded as we find 6 (1st and 2nd
neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
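to make the mod step concrete, a small self-contained java illustration (toy 4-bit polynomials and a brute-force search, not the freerouter routeid generator nor the real 17-bit coefficients from the listing): it does carry-less gf2 division and searches for a routeid whose per-node remainders are the corrected bitmaps from the newer mail in this thread, 1100 at v3 and 1 at v4 and v5:

public class MpolkaMod {

    // remainder of the carry-less (gf2) polynomial division a mod b
    static int gf2mod(int a, int b) {
        int degB = 31 - Integer.numberOfLeadingZeros(b);
        while (a != 0 && 31 - Integer.numberOfLeadingZeros(a) >= degB) {
            a ^= b << (31 - Integer.numberOfLeadingZeros(a) - degB);
        }
        return a;
    }

    public static void main(String[] args) {
        int p3 = 0b10011, p4 = 0b11001, p5 = 0b11111;        // distinct irreducible degree-4 polys standing in for v3, v4, v5
        int want3 = 0b1100, want4 = 0b0001, want5 = 0b0001;  // v3: forward to 2nd+3rd peers, v4/v5: decap locally
        for (int routeid = 0; routeid < (1 << 12); routeid++) {  // crt over coprime polys guarantees a solution of degree < 12
            if (gf2mod(routeid, p3) == want3 && gf2mod(routeid, p4) == want4 && gf2mod(routeid, p5) == want5) {
                System.out.printf("routeid=0x%x  mod p3=%s  mod p4=%s  mod p5=%s%n", routeid,
                        Integer.toBinaryString(gf2mod(routeid, p3)),
                        Integer.toBinaryString(gf2mod(routeid, p4)),
                        Integer.toBinaryString(gf2mod(routeid, p5)));
                return;
            }
        }
    }
}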

next steps will be to make the forwarding happen, then some test cases, and
finally the dataplanes...

any opinion?

thanks,
cs




On 2/16/22 15:44, mc36 wrote:
hi,

On 2/16/22 15:22, Cristina Klippel Dominicini wrote:

Hi Csaba,

Thanks for the feedback! We believe M-PolKA can tackle interesting use cases,
and it would be great to have it running on FreeRouter.


yeahh, saw some in the paper and i'm also interested.... :)

my first impression was that after the mod operation, it basically does what
bier does, that is, we take the result and interpret it as an outport
bitmap, don't we?
Yes, exactly! We just changed the meaning of the portid polynomial and the
pipeline for cloning the packets according to the bitmap. It will be really
great if we can reuse
part of the bier implementation for that end. Do you think we can do that for
both freeRouter and Tofino? Then, we could run some experiments comparing
BIER with m-PolKA :-)


hopefully you'll be able to do that...


and this is where we hit a limitation: depending on the size of the crc in
use, we can only describe 16 output ports, which is clearly not enough...
Is it possible to use CRC32 for M-PolKA's implementation in FreeRouter?


surely yess, we can use crc32, that one was also made parameterizable back
when polka was introduced... :)
but that would reduce the core nodes expressible in the routeid by half...


my idea to overcome the above: what if we interpret the mpolka mod result as a
bitmap over this index table? it then raises the limitation to 16 igp neighbors
per core node, which is more friendly...

As we are already bound to the SR indexes, I think it is a reasonable
reinterpretation.
Another simple way would be to have two nodeids per switch (or more), for
example. Then, with the same routeid we could address half of the ports with
nodeid1 and the other half with nodeid2. This would incur two CRC operations
to generate the bitmap.
We could also explore some other encoding techniques for the bitmap. Today is
our weekly meeting at Ufes, so we will discuss the possibilities with the
group, and we will give you feedback on this subject.


imho absolutely this is the way to follow instead of doing crc32,
and one can easily have two loopbacks if one has more than 16 igp peers...
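a hedged sketch of the multi-loopback idea in java (the 15-peers-per-nodeid grouping rule, the names and the constants are assumptions, not implemented freerouter behavior): with bit0 reserved for local processing, a 16-bit bitmap addresses at most 15 peers, so a node with more igp peers would advertise one extra loopback (one extra srindex/nodeid) per group of 15, and the routeid would then carry one bitmap per nodeid:

import java.util.*;

public class MultiLoopback {
    static final int PEERS_PER_NODEID = 15;   // 16-bit bitmap minus the reserved bit0

    // split the ordered peer list into per-nodeid groups
    static List<List<Integer>> split(List<Integer> orderedPeers) {
        List<List<Integer>> groups = new ArrayList<>();
        for (int i = 0; i < orderedPeers.size(); i += PEERS_PER_NODEID) {
            groups.add(orderedPeers.subList(i, Math.min(i + PEERS_PER_NODEID, orderedPeers.size())));
        }
        return groups;
    }

    public static void main(String[] args) {
        // index 27 in the listings above already has 15 peers; one more would spill into a 2nd loopback
        List<Integer> peers = List.of(10, 29, 31, 32, 33, 40, 180, 181, 190, 191, 197, 210, 220, 230, 240, 250);
        List<List<Integer>> groups = split(peers);
        for (int g = 0; g < groups.size(); g++) {
            System.out.println("nodeid/loopback #" + (g + 1) + " covers peers " + groups.get(g));
        }
    }
}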


another implementation idea is to use a different ethertype for mpolka to
not confuse the unicast or multicast packets on the wire as they'll differ in
handling...
Agreed! We also have some other ideas for failure protection and chaining
that would change the header, and consequently, would need a different
version code.


fine... then i'll wait until you discuss with your colleagues and then i'll
proceed with adding mpolka...
since then i've done some digging into the code and imho mpolka will use
bier-id instead of srid because that beast already populates a thing called
bfrlist, which is the per node peer index table we need for mpolka to
interpret the bitmap over... it's just a config thing like the regular
polka and sr case...
after this, the initial version will be able to address the multihoming use
case you discuss in your paper, moreover it'll be able to do the (iptv)
headend use case from the bier tests...

regards,
cs


Best regards,
Cristina

________________________________________
From: mc36 <>
Sent: Wednesday, 16 February 2022 03:21
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: mpolka in rare

hi,
i went through your mpolka paper, first of all, congrats, nice work!
my first impression was that after the mod operation, it basically does what
bier does, that is, we take the result and interpret it as an outport bitmap,
don't we?
and this is where we hit a limitation: depending on the size of the crc in use,
we can only describe 16 output ports, which is clearly not enough...
in freerouter, polka is bound to segment routing ids, and the result of the mod
is not a port but an sr index... my idea to overcome the above: what if we
interpret the mpolka mod result as a bitmap over this index table? it then
raises the limitation to 16 igp neighbors per core node, which is more
friendly...
another implementation idea is to use a different ethertype for mpolka to not
confuse the unicast or multicast packets on the wire as they'll differ in
handling...
after all, i found it feasible both for software and dataplane implementations,
so it could become a drop-in replacement for the current ip multicast over
bier...
any opinion?
thanks,
cs







