- From: Everson Borges <>
- To:
- Subject: Re: [rare-dev] mpolka in rare
- Date: Sat, 19 Feb 2022 12:24:22 -0300
Hi all,
I've already started playing with mpolka, and for now I have a question about tunnel domain-name. In the file rio0001-sw2.txt it is configured like this:
tunnel domain-name 20.20.20.1 20.20.20.2 , 20.20.20.1 20.20.20.9 20.20.20.2 , 20.20.20.1 20.20.20.4 20.20.20.3 20.20.20.2
but when I start the topology, on router rio it ends up like this:
tunnel domain-name 20.20.20.1 20.20.20.2 , 20.20.20.1 20.20.20.2 20.20.20.9 , 20.20.20.1 20.20.20.2 20.20.20.3 20.20.20.4
I couldn't figure out what I'm doing wrong.
thanks in advance
Everson
On Sat, Feb 19, 2022 at 08:53, mc36 <> wrote:
my bad, i missed the most important output, and the explanation of the tunnel stuff:
10.10.10.5 10.10.10.20 10.10.10.199 , ! .5 will replicate the packets to .20 and .199
10.10.10.199 10.10.10.1 , ! .199 will simply forward to .1
10.10.10.20 10.10.10.1 , ! .20 will simply forward to .1
10.10.10.1 10.10.10.1 , ! .1 will decap and route the packets
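to illustrate the format, here is a minimal sketch (python, not the freerouter code; the function name parse_domain_name is made up) of reading such a comma-separated domain-name as per-node actions:

def parse_domain_name(text):
    # "<node> <target>+ ," groups: the first address is the node doing the
    # work, the rest are where it sends copies; a node listing itself
    # means "decap / process locally"
    entries = []
    for group in text.split(','):
        addrs = group.split()
        if addrs:
            entries.append((addrs[0], addrs[1:]))
    return entries

cfg = ("10.10.10.5 10.10.10.20 10.10.10.199 , 10.10.10.199 10.10.10.1 , "
       "10.10.10.20 10.10.10.1 , 10.10.10.1 10.10.10.1 ,")
for node, targets in parse_domain_name(cfg):
    action = "decap" if targets == [node] else "forward to " + " ".join(targets)
    print(node, "->", action)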
sid#show mpolka routeid tunnel1
iface hop routeid
ethernet1 10.1.123.254 00 00 00 00 00 00 00 00 cc 13 f9 8e b6 61 64 5a
index coeff poly crc equal
0 00011170 26058 26058 true
1 00011171 29111 29111 true
2 0001117b 60027 60027 true
3 0001117d 7519 7519 true
4 00011181 17366 17366 true
5 00011187 23700 23700 true
6 0001118d 40222 40222 true
7 00011193 47500 47500 true
8 00011195 39708 39708 true
9 0001119f 8510 8510 true
10 000111a3 1 1 true
11 000111a9 62590 62590 true
12 000111b1 57926 57926 true
13 000111b7 2684 2684 true
14 000111bb 28061 28061 true
15 000111c9 62604 62604 true
16 000111db 56600 56600 true
17 000111dd 40571 40571 true
18 000111eb 63485 63485 true
19 000111ed 17372 17372 true
20 000111f3 7392 7392 true
21 000111f9 50796 50796 true
22 00011227 33503 33503 true
23 00011235 12709 12709 true
24 0001123f 30393 30393 true
25 00011241 35167 35167 true
26 00011247 38082 38082 true
27 00011253 46660 46660 true
28 00011255 38491 38491 true
29 00011259 37188 37188 true
30 00011263 6180 6180 true
31 0001126f 42920 42920 true
32 00011277 54459 54459 true
33 0001127d 50718 50718 true
34 00011281 13787 13787 true
35 0001128b 32188 32188 true
36 00011293 42127 42127 true
37 00011299 38862 38862 true
38 000112a5 29547 29547 true
39 000112af 3761 3761 true
40 000112bd 49504 49504 true
41 000112c5 30891 30891 true
42 000112d7 32239 32239 true
43 000112db 17665 17665 true
44 000112e1 57667 57667 true
45 000112ed 24930 24930 true
46 000112f3 49596 49596 true
47 000112ff 1849 1849 true
48 0001130b 54071 54071 true
49 0001130d 34331 34331 true
50 00011319 48 48 true
51 00011323 929 929 true
52 00011331 46053 46053 true
53 0001133b 6916 6916 true
54 0001133d 47318 47318 true
55 0001134f 13447 13447 true
56 00011351 45664 45664 true
57 00011357 48101 48101 true
58 00011361 61554 61554 true
59 0001136b 30266 30266 true
60 00011375 11179 11179 true
61 00011379 5226 5226 true
62 0001139b 35058 35058 true
63 0001139d 40477 40477 true
64 000113a1 42541 42541 true
65 000113ab 13390 13390 true
66 000113ad 54305 54305 true
67 000113df 62241 62241 true
68 000113f1 15777 15777 true
69 000113f7 33121 33121 true
70 00011409 5626 5626 true
71 0001141b 6374 6374 true
72 0001142d 23296 23296 true
73 00011433 38941 38941 true
74 00011435 41967 41967 true
75 00011441 33740 33740 true
76 0001144b 18552 18552 true
77 0001144d 59299 59299 true
78 00011459 61362 61362 true
79 0001145f 32101 32101 true
80 00011465 60851 60851 true
81 00011487 5536 5536 true
82 00011493 54908 54908 true
83 000114a5 52344 52344 true
84 000114af 10028 10028 true
85 000114b1 56989 56989 true
86 000114c9 23348 23348 true
87 000114d1 2075 2075 true
88 000114e7 44309 44309 true
89 000114ed 4714 4714 true
90 000114f9 45070 45070 true
91 00011507 8129 8129 true
92 00011513 49085 49085 true
93 00011545 38871 38871 true
94 0001156d 34837 34837 true
95 00011579 18841 18841 true
96 0001158f 833 833 true
97 00011591 60623 60623 true
98 000115a1 30460 30460 true
99 000115a7 9251 9251 true
100 000115ab 30589 30589 true
101 000115d5 5610 5610 true
102 000115d9 51573 51573 true
103 000115df 3913 3913 true
104 000115e3 38440 38440 true
105 000115fb 19344 19344 true
106 00011601 29722 29722 true
107 0001161f 14352 14352 true
108 0001162f 32880 32880 true
109 00011637 27071 27071 true
110 0001163d 51009 51009 true
111 00011643 16626 16626 true
112 00011645 53806 53806 true
113 00011649 62780 62780 true
114 00011651 62380 62380 true
115 00011675 58217 58217 true
116 00011683 31913 31913 true
117 0001169b 47695 47695 true
118 000116ab 40461 40461 true
119 000116b5 30593 30593 true
120 000116d5 21041 21041 true
121 000116d9 3268 3268 true
122 000116e3 14298 14298 true
123 000116f7 32805 32805 true
124 000116fd 15136 15136 true
125 0001170f 30982 30982 true
126 00011717 26000 26000 true
127 0001172d 21558 21558 true
128 00011733 18937 18937 true
129 0001173f 8241 8241 true
130 00011741 32233 32233 true
131 00011747 50195 50195 true
132 0001174d 39596 39596 true
133 00011753 64893 64893 true
134 00011765 43379 43379 true
135 00011795 27270 27270 true
136 00011799 49270 49270 true
137 000117a3 25509 25509 true
138 000117a9 7674 7674 true
139 000117d7 2971 2971 true
140 000117eb 38600 38600 true
141 000117f5 5089 5089 true
142 00011821 51300 51300 true
143 0001182b 62178 62178 true
144 00011833 24837 24837 true
145 00011855 10074 10074 true
146 00011863 50619 50619 true
147 0001186f 63719 63719 true
148 00011887 14109 14109 true
149 0001188d 25716 25716 true
150 00011893 16007 16007 true
151 000118a9 65220 65220 true
152 000118b1 1836 1836 true
153 000118b7 31547 31547 true
154 000118bb 6616 6616 true
155 000118cf 60684 60684 true
156 000118db 24553 24553 true
157 000118f9 12376 12376 true
158 000118ff 11354 11354 true
159 0001190b 63525 63525 true
160 00011923 16155 16155 true
161 00011925 2477 2477 true
162 00011931 30734 30734 true
163 00011937 29348 29348 true
164 0001195b 31705 31705 true
165 0001196b 62253 62253 true
166 0001196d 42444 42444 true
167 00011975 34615 34615 true
168 00011979 28658 28658 true
169 00011991 13725 13725 true
170 000119b3 25281 25281 true
171 000119cb 20434 20434 true
172 000119cd 41564 41564 true
173 000119e5 54095 54095 true
174 000119fb 65033 65033 true
175 000119fd 58249 58249 true
176 00011a07 24543 24543 true
177 00011a0d 56107 56107 true
178 00011a23 5891 5891 true
179 00011a31 38761 38761 true
180 00011a43 31998 31998 true
181 00011a51 17595 17595 true
182 00011a57 30965 30965 true
183 00011a5d 54492 54492 true
184 00011a6b 54990 54990 true
185 00011a73 5516 5516 true
186 00011a79 35876 35876 true
187 00011a89 6151 6151 true
188 00011a9b 52401 52401 true
189 00011ab3 19308 19308 true
190 00011abf 14423 14423 true
191 00011ae3 2830 2830 true
192 00011ae5 45469 45469 true
193 00011b09 33891 33891 true
194 00011b1d 54503 54503 true
195 00011b27 31337 31337 true
196 00011b2b 63038 63038 true
197 00011b47 60367 60367 true
198 00011b4b 44913 44913 true
199 00011b55 2 2 true
200 00011b59 2 2 true
201 00011b65 676 676 true
202 00011b7d 64267 64267 true
203 00011b8b 24460 24460 true
204 00011b8d 18726 18726 true
205 00011b93 21881 21881 true
206 00011baf 58244 58244 true
207 00011bbb 6505 6505 true
208 00011bbd 65041 65041 true
209 00011bc3 47896 47896 true
210 00011bd7 4247 4247 true
211 00011be1 35558 35558 true
212 00011bff 41581 41581 true
213 00011c07 61986 61986 true
214 00011c13 45776 45776 true
215 00011c23 65417 65417 true
216 00011c29 14701 14701 true
217 00011c45 4852 4852 true
218 00011c4f 45276 45276 true
219 00011c57 46213 46213 true
220 00011c5d 28423 28423 true
221 00011c61 7623 7623 true
222 00011c7f 39424 39424 true
223 00011c85 38907 38907 true
224 00011c9d 29622 29622 true
225 00011ca1 65192 65192 true
226 00011cb5 5345 5345 true
227 00011ccb 15308 15308 true
228 00011ccd 22341 22341 true
229 00011cd9 60718 60718 true
230 00011ce9 2487 2487 true
231 00011cef 43768 43768 true
232 00011cf1 30587 30587 true
233 00011d17 52213 52213 true
234 00011d1b 36950 36950 true
235 00011d2d 45746 45746 true
236 00011d3f 14076 14076 true
237 00011d4b 58986 58986 true
238 00011d53 44028 44028 true
239 00011d59 58667 58667 true
240 00011d5f 49197 49197 true
241 00011d65 26351 26351 true
242 00011d69 45679 45679 true
243 00011d77 15089 15089 true
244 00011d81 28311 28311 true
245 00011d87 24719 24719 true
246 00011dc9 24130 24130 true
247 00011ded 47133 47133 true
248 00011dff 43118 43118 true
249 00011e21 49388 49388 true
250 00011e2d 45821 45821 true
251 00011e3f 50464 50464 true
252 00011e5f 58357 58357 true
253 00011e71 30107 30107 true
254 00011e7b 2605 2605 true
255 00011e99 48820 48820 true
sid#
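(side note: the poly and crc columns above appear to be the same per-coefficient remainder of the routeid computed two different ways, with the equal column cross-checking them; here is a minimal sketch of the gf(2) remainder operation itself, in python with toy numbers rather than the real coefficients:)

def gf2_mod(dividend, divisor):
    # remainder of carry-less (gf(2)) polynomial division; the bits of the
    # integers are the polynomial coefficients
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

# toy check: (x^5 + x + 1) mod (x^3 + x + 1) = x^2
print(gf2_mod(0b100011, 0b1011))   # prints 4, i.e. x^2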
On 2/19/22 12:38, mc36 wrote:
> hi,
> i just enabled mpolka on my homenet and i can tell you that it works as expected... :)
> i've taken the srindex from two nodes to show you that they're the same, network wide...
> (the last show is a graphviz export of the network, just in case if you need it...)
> ((but you can log in to one of the nodes through the geant p4lab at 10.5.5.5 in the CORE vrf,
> or at my dn42 looking glass by telnetting/sshing to dl.nop.hu with any user/pass...))
>
> next steps will be to add the multi-loopback logic to clntMpolka routeid generator, but
> imho i'll proceed with the dataplanes first, i badly want to see it at line-rate... :)
> the dataplane exporter part was added already yesterday:
> https://github.com/mc36/freeRouter/commit/ef3fe4c7bca3ef4536c7ae8493ad45a3dcfa374c
> regards,
> cs
>
>
> sid#ping 1.1.1.1 /vrf v1 /interface lo0 /multi
> pinging 1.1.1.1, src=10.10.10.227, vrf=v1, cnt=5, len=64, tim=1000, gap=0, ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
> !!!!!!!!!!
> result=200%, recv/sent/lost/err=10/5/0/0, rtt min/avg/max/sum=4/10/16/5002, ttl min/avg/max=55/55/55
> sid#
> sid#show config-differences
> interface tunnel1
> no description
> tunnel vrf v1
> tunnel source loopback0
> tunnel destination 1.1.1.111
> tunnel domain-name 10.10.10.5 10.10.10.20 10.10.10.199 , 10.10.10.199 10.10.10.1 , 10.10.10.20 10.10.10.1 , 10.10.10.1 10.10.10.1 ,
> tunnel mode mpolka
> vrf forwarding v1
> ipv4 address 1.1.1.2 255.255.255.252
> no shutdown
> no log-link-change
> exit
>
> sid#show ipv4 srindex v1
> index conn prefix peers bytes
> 10 false 10.10.10.1/32 20 25 26 27 40 50 80 110 181 199 200 0
> 20 false 10.10.10.2/32 10 199 200 0
> 24 false 10.10.10.10/32 200 0
> 25 false 10.1.11.0/32 10 0
> 26 false 10.10.11.11/32 10 0
> 27 false 10.10.10.26/32 10 29 31 32 33 40 180 181 190 191 197 210 220 230 240 0
> 29 false 10.10.10.29/32 27 33 0
> 31 false 10.10.10.31/32 27 33 0
> 32 false 10.10.10.24/32 27 33 0
> 33 false 10.10.10.25/32 27 29 31 32 181 190 191 210 220 240 0
> 39 false ::/0 0
> 40 false 10.10.10.4/32 10 27 181 0
> 50 true 10.10.10.5/32 10 39 54 199 200 0
> 54 false 10.5.1.9/32 50 0
> 80 false 10.10.10.8/32 10 199 200 0
> 110 false 10.10.10.11/32 10 199 200 0
> 180 false 10.10.10.18/32 27 181 0
> 181 false 10.10.10.180/32 10 27 33 40 180 0
> 190 false 10.10.10.19/32 27 33 210 0
> 191 false 10.10.10.190/32 27 33 0
> 193 false 10.1.11.198/32 199 0
> 197 false 10.1.11.197/32 27 199 0
> 199 false 10.10.10.199/32 10 20 50 80 110 193 197 200 0
> 200 false 10.10.10.20/32 10 20 24 50 80 110 199 0
> 210 false 10.10.10.21/32 27 33 190 0
> 220 false 10.10.10.27/32 27 33 0
> 230 false 10.26.26.2/32 27 0
> 240 false 10.10.10.240/32 27 33 0
>
> sid#
>
>
> noti#show ipv4 srindex inet
> info userReader.cmdEnter:userReader.java:1032 command noti#show ipv4 srindex inet from local:telnet <loop> 23 -> 127.0.0.1 41836
> 2022-02-19 12:24:13
> index | conn | prefix | peers | bytes
> 10 | true | 10.10.10.1/32 | 20 25 26 27 40 50 80 110 181 199 200 | 0+0
> 20 | false | 10.10.10.2/32 | 10 199 200 | 0+0
> 24 | false | 10.10.10.10/32 | 200 | 0+0
> 25 | false | 10.1.11.0/32 | 10 | 0+0
> 26 | false | 10.10.11.11/32 | 10 | 0+0
> 27 | false | 10.10.10.26/32 | 10 29 31 32 33 40 180 181 190 191 197 210 220 230 240 | 0+0
> 29 | false | 10.10.10.29/32 | 27 33 | 0+0
> 31 | false | 10.10.10.31/32 | 27 33 | 0+0
> 32 | false | 10.10.10.24/32 | 27 33 | 0+0
> 33 | false | 10.10.10.25/32 | 27 29 31 32 181 190 191 210 220 240 | 0+0
> 39 | false | 10.10.10.227/32 | 50 | 0+0
> 40 | false | 10.10.10.4/32 | 10 27 181 | 0+0
> 50 | false | 10.10.10.5/32 | 10 39 54 199 200 | 0+0
> 54 | false | 10.5.1.9/32 | 50 | 0+0
> 80 | false | 10.10.10.8/32 | 10 199 200 | 0+0
> 110 | false | ::/0 | | 0+0
> 180 | false | 10.10.10.18/32 | 27 181 | 0+0
> 181 | false | 10.10.10.180/32 | 10 27 33 40 180 | 0+0
> 190 | false | 10.10.10.19/32 | 27 33 210 | 0+0
> 191 | false | 10.10.10.190/32 | 27 33 | 0+0
> 193 | false | 10.1.11.198/32 | 199 | 0+0
> 197 | false | 10.1.11.197/32 | 27 199 | 0+0
> 199 | true | 10.10.10.199/32 | 10 20 50 80 110 193 197 200 | 0+0
> 200 | true | 10.10.10.20/32 | 10 20 24 50 80 110 199 | 0+0
> 210 | false | 10.10.10.21/32 | 27 33 190 | 0+0
> 220 | false | 10.10.10.27/32 | 27 33 | 0+0
> 230 | false | 10.26.26.2/32 | 27 | 0+0
> 240 | false | 10.10.10.240/32 | 27 33 | 0+0
>
> noti#
> noti#show ipv4 lsrp 1 graph
> info userReader.cmdEnter:userReader.java:1032 command noti#show ipv4 lsrp 1 graph from local:telnet <loop> 23 -> 127.0.0.1 41836
> 2022-02-19 12:24:30
> sfdp -Tpng > net.png << EOF
> graph net {
> //wifi
> "wifi" -- "mchome" [weight=10] [taillabel="sdn1"]
> "wifi" -- "10.1.11.0/32" [weight=0]
> //mchome-demo
> "mchome-demo" -- "mchome" [weight=10] [taillabel="ethernet11"]
> "mchome-demo" -- "10.10.11.11/32" [weight=0]
> //rr
> "rr" -- "safe" [weight=10] [taillabel="ethernet93"]
> "rr" -- "10.5.1.9/32" [weight=0]
> "rr" -- "10.5.1.10/32" [weight=0]
> //player-dn42
> "player-dn42" -- "player" [weight=10] [taillabel="ethernet11"]
> "player-dn42" -- "10.1.11.198/32" [weight=0]
> //player
> "player" -- "p4deb" [weight=33333] [taillabel="hairpin92.22"]
> "player" -- "player" [weight=33333] [taillabel="hairpin82"]
> "player" -- "10.1.11.197/32" [weight=0]
> //mchome
> "mchome" -- "wifi" [weight=9] [taillabel="sdn905"]
> "mchome" -- "mchome-demo" [weight=9] [taillabel="sdn901"]
> "mchome" -- "working" [weight=9] [taillabel="sdn2.189"]
> "mchome" -- "parents" [weight=6] [taillabel="hairpin92.33"]
> "mchome" -- "safe" [weight=9] [taillabel="sdn2.199"]
> "mchome" -- "mediapc" [weight=9] [taillabel="sdn2.196"]
> "mchome" -- "noti" [weight=9] [taillabel="sdn2.176"]
> "mchome" -- "nas" [weight=9] [taillabel="sdn2.186"]
> "mchome" -- "nas" [weight=9] [taillabel="sdn2.170"]
> "mchome" -- "p4deb" [weight=5] [taillabel="hairpin82.23"]
> "mchome" -- "vpn" [weight=38] [taillabel="hairpin72.15"]
> "mchome" -- "player" [weight=9] [taillabel="sdn2.182"]
> "mchome" -- "player" [weight=9] [taillabel="sdn2.157"]
> "mchome" -- "0.0.0.0/0" [weight=1234]
> "mchome" -- "10.10.10.1/32" [weight=0]
> //working
> "working" -- "mchome" [weight=9] [taillabel="sdn1.189"]
> "working" -- "nas" [weight=11] [taillabel="sdn1.179"]
> "working" -- "player" [weight=10] [taillabel="sdn1.173"]
> "working" -- "10.10.10.2/32" [weight=0]
> //parents
> "parents" -- "mchome" [weight=5] [taillabel="hairpin92.33"]
> "parents" -- "p4deb" [weight=5] [taillabel="hairpin82.24"]
> "parents" -- "vpn" [weight=21] [taillabel="hairpin72.16"]
> "parents" -- "0.0.0.0/0" [weight=1234]
> "parents" -- "10.10.10.4/32" [weight=0]
> //safe
> "safe" -- "rr" [weight=10] [taillabel="sdn902"]
> "safe" -- "mchome" [weight=9] [taillabel="sdn1.199"]
> "safe" -- "nas" [weight=11] [taillabel="sdn1.185"]
> "safe" -- "player" [weight=10] [taillabel="sdn1.172"]
> "safe" -- "sid" [weight=1] [taillabel="sdn903"]
> "safe" -- "10.10.10.5/32" [weight=0]
> //mediapc
> "mediapc" -- "mchome" [weight=9] [taillabel="sdn1.196"]
> "mediapc" -- "nas" [weight=11] [taillabel="sdn1.178"]
> "mediapc" -- "player" [weight=10] [taillabel="sdn1.180"]
> "mediapc" -- "10.10.10.8/32" [weight=0]
> //services
> "services" -- "nas" [weight=10] [taillabel="ethernet91"]
> "services" -- "10.10.10.10/32" [weight=0]
> //noti
> "noti" -- "mchome" [weight=9] [taillabel="sdn1.176"]
> "noti" -- "nas" [weight=11] [taillabel="sdn1.175"]
> "noti" -- "player" [weight=10] [taillabel="sdn1.171"]
> "noti" -- "10.10.10.11/32" [weight=0]
> //www
> "www" -- "p4deb" [weight=15] [taillabel="tunnel2"]
> "www" -- "p4deb" [weight=14] [taillabel="tunnel4"]
> "www" -- "vpn" [weight=24] [taillabel="tunnel1"]
> "www" -- "vpn" [weight=19] [taillabel="tunnel3"]
> "www" -- "0.0.0.0/0" [weight=999999]
> "www" -- "10.10.10.18/32" [weight=0]
> //rtr1.c4e
> "rtr1.c4e" -- "rtr2.c4e" [weight=10] [taillabel="tunnel1"]
> "rtr1.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "rtr1.c4e" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "rtr1.c4e" -- "10.10.10.19/32" [weight=0]
> //nas
> "nas" -- "mchome" [weight=11] [taillabel="sdn2.186"]
> "nas" -- "mchome" [weight=11] [taillabel="sdn2.170"]
> "nas" -- "working" [weight=11] [taillabel="sdn2.179"]
> "nas" -- "safe" [weight=11] [taillabel="sdn2.185"]
> "nas" -- "mediapc" [weight=11] [taillabel="sdn2.178"]
> "nas" -- "services" [weight=11] [taillabel="sdn901"]
> "nas" -- "noti" [weight=11] [taillabel="sdn2.175"]
> "nas" -- "player" [weight=11] [taillabel="sdn2.177"]
> "nas" -- "player" [weight=11] [taillabel="sdn2.156"]
> "nas" -- "10.10.10.20/32" [weight=0]
> //rtr2.c4e
> "rtr2.c4e" -- "rtr1.c4e" [weight=10] [taillabel="tunnel1"]
> "rtr2.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "rtr2.c4e" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "rtr2.c4e" -- "10.10.10.21/32" [weight=0]
> //snoopy.vhpc
> "snoopy.vhpc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "snoopy.vhpc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "snoopy.vhpc" -- "10.10.10.24/32" [weight=0]
> //nrpe.wdcvhpc
> "nrpe.wdcvhpc" -- "rtr1.c4e" [weight=444444] [taillabel="tunnel19"]
> "nrpe.wdcvhpc" -- "rtr2.c4e" [weight=444444] [taillabel="tunnel11"]
> "nrpe.wdcvhpc" -- "snoopy.vhpc" [weight=444444] [taillabel="tunnel20"]
> "nrpe.wdcvhpc" -- "p4deb" [weight=444444] [taillabel="tunnel22"]
> "nrpe.wdcvhpc" -- "p4deb" [weight=444444] [taillabel="tunnel23"]
> "nrpe.wdcvhpc" -- "snoopy.wdc" [weight=444444] [taillabel="tunnel18"]
> "nrpe.wdcvhpc" -- "sniffer.vh" [weight=444444] [taillabel="tunnel21"]
> "nrpe.wdcvhpc" -- "vpn" [weight=444444] [taillabel="bvi88.18"]
> "nrpe.wdcvhpc" -- "sulinet-cpe.c4e" [weight=444444] [taillabel="tunnel13"]
> "nrpe.wdcvhpc" -- "bmp.wdcvhpc" [weight=444444] [taillabel="tunnel14"]
> "nrpe.wdcvhpc" -- "rare-cpe" [weight=444444] [taillabel="tunnel12"]
> "nrpe.wdcvhpc" -- "0.0.0.0/0" [weight=1234]
> "nrpe.wdcvhpc" -- "10.10.10.25/32" [weight=0]
> //p4deb
> "p4deb" -- "player" [weight=333333] [taillabel="hairpin12.22"]
> "p4deb" -- "mchome" [weight=5] [taillabel="hairpin12.23"]
> "p4deb" -- "parents" [weight=6] [taillabel="hairpin12.24"]
> "p4deb" -- "www" [weight=15] [taillabel="tunnel17"]
> "p4deb" -- "www" [weight=14] [taillabel="tunnel28"]
> "p4deb" -- "rtr1.c4e" [weight=3] [taillabel="tunnel25"]
> "p4deb" -- "rtr2.c4e" [weight=4] [taillabel="tunnel26"]
> "p4deb" -- "snoopy.vhpc" [weight=4] [taillabel="tunnel27"]
> "p4deb" -- "nrpe.wdcvhpc" [weight=3] [taillabel="tunnel11"]
> "p4deb" -- "nrpe.wdcvhpc" [weight=4] [taillabel="tunnel12"]
> "p4deb" -- "snoopy.wdc" [weight=3] [taillabel="tunnel16"]
> "p4deb" -- "sniffer.vh" [weight=3] [taillabel="tunnel14"]
> "p4deb" -- "vpn" [weight=20] [taillabel="tunnel20"]
> "p4deb" -- "vpn" [weight=20] [taillabel="tunnel19"]
> "p4deb" -- "sulinet-cpe.c4e" [weight=3] [taillabel="tunnel18"]
> "p4deb" -- "bmp.wdcvhpc" [weight=3] [taillabel="tunnel15"]
> "p4deb" -- "rare-cpe" [weight=3] [taillabel="tunnel29"]
> "p4deb" -- "p4deb-rr" [weight=2] [taillabel="sdn4"]
> "p4deb" -- "10.10.10.26/32" [weight=0]
> //core
> "core" -- "mchome" [weight=1] [taillabel="sdn47.164"]
> "core" -- "working" [weight=1] [taillabel="sdn47.160"]
> "core" -- "safe" [weight=1] [taillabel="sdn47.161"]
> "core" -- "mediapc" [weight=1] [taillabel="sdn47.159"]
> "core" -- "noti" [weight=1] [taillabel="sdn47.158"]
> "core" -- "nas" [weight=1] [taillabel="sdn47.163"]
> "core" -- "player" [weight=1] [taillabel="sdn47.162"]
> "core" -- "10.10.10.28/32" [weight=0]
> //snoopy.wdc
> "snoopy.wdc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "snoopy.wdc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "snoopy.wdc" -- "10.10.10.29/32" [weight=0]
> //sniffer.vh
> "sniffer.vh" -- "nrpe.wdcvhpc" [weight=888888] [taillabel="tunnel1"]
> "sniffer.vh" -- "p4deb" [weight=888888] [taillabel="tunnel2"]
> "sniffer.vh" -- "10.10.10.31/32" [weight=0]
> //vpn
> "vpn" -- "mchome" [weight=38] [taillabel="bvi99.15"]
> "vpn" -- "parents" [weight=23] [taillabel="bvi99.16"]
> "vpn" -- "www" [weight=23] [taillabel="tunnel11"]
> "vpn" -- "www" [weight=19] [taillabel="tunnel10"]
> "vpn" -- "nrpe.wdcvhpc" [weight=23] [taillabel="bvi99.18"]
> "vpn" -- "p4deb" [weight=20] [taillabel="tunnel12"]
> "vpn" -- "p4deb" [weight=19] [taillabel="tunnel13"]
> "vpn" -- "0.0.0.0/0" [weight=999999]
> "vpn" -- "10.10.10.180/32" [weight=0]
> //sulinet-cpe.c4e
> "sulinet-cpe.c4e" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel2"]
> "sulinet-cpe.c4e" -- "p4deb" [weight=10] [taillabel="tunnel1"]
> "sulinet-cpe.c4e" -- "10.10.10.190/32" [weight=0]
> //player
> "player" -- "player-dn42" [weight=10] [taillabel="sdn901"]
> "player" -- "player" [weight=33333] [taillabel="hairpin81"]
> "player" -- "mchome" [weight=9] [taillabel="sdn1.182"]
> "player" -- "mchome" [weight=9] [taillabel="sdn1.157"]
> "player" -- "working" [weight=10] [taillabel="sdn1.173"]
> "player" -- "safe" [weight=10] [taillabel="sdn1.172"]
> "player" -- "mediapc" [weight=10] [taillabel="sdn1.180"]
> "player" -- "noti" [weight=10] [taillabel="sdn1.171"]
> "player" -- "nas" [weight=11] [taillabel="sdn1.177"]
> "player" -- "nas" [weight=11] [taillabel="sdn1.156"]
> "player" -- "10.10.10.199/32" [weight=0]
> //sid
> "sid" -- "safe" [weight=1] [taillabel="ethernet1"]
> "sid" -- "10.10.10.227/32" [weight=0]
> //bmp.wdcvhpc
> "bmp.wdcvhpc" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "bmp.wdcvhpc" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "bmp.wdcvhpc" -- "10.10.10.240/32" [weight=0]
> //rare-cpe
> "rare-cpe" -- "nrpe.wdcvhpc" [weight=10] [taillabel="tunnel8"]
> "rare-cpe" -- "p4deb" [weight=10] [taillabel="tunnel9"]
> "rare-cpe" -- "10.10.10.27/32" [weight=0]
> //p4deb-rr
> "p4deb-rr" -- "p4deb" [weight=10] [taillabel="ethernet11"]
> "p4deb-rr" -- "10.26.26.2/32" [weight=0]
> }
> EOF
>
> noti#
>
>
>
>
>
>
>
>
> On 2/18/22 18:35, mc36 wrote:
>> hi,
>> i go inline...
>> regards,
>> cs
>>
>> On 2/18/22 17:53, Cristina Klippel Dominicini wrote:
>>> Hi Csaba,
>>>
>>> This is really great news \o/
>>>
>>> I talked with the group and the design choices seem very good for an initial prototype. Thank you very much! We are going to execute the testcases and provide feedback :-D
>>>
>>> Some initial doubts:
>>>
>>>>> tunnel domain-name 1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
>>> This represents the links of the multicast tree and the syntax "1.1.1.4 1.1.1.4" indicates a leaf, right?
>> exactly.. the general format is the following:
>> <encode for this address> <encode for this neighbor from the index table>+ ,
>> with the addition that if the neighbor address is itself, which is obviously not in the index table,
>> then bit0 gets set to indicate 'and also process locally'....
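for illustration, a small sketch of that encoding rule (python, made-up function name, not the freerouter code): bit0 means 'process locally', bit i means 'send to the i-th peer' of the node's ordered srindex peer list:

def node_bitmap(node_idx, target_idxs, peer_idxs):
    # peer_idxs: this node's peers as the ordered srindex list, which is
    # identical on every router once spf has run; bit0 = process locally,
    # bit i = replicate to the i-th peer (1-based)
    bits = 0
    for t in target_idxs:
        if t == node_idx:
            bits |= 1                        # decap / local processing
        else:
            bits |= 1 << (peer_idxs.index(t) + 1)
    return bits

# v3 has the ordered peers [2, 4, 5]; addressing v4 and v5 sets bits 2 and 3,
# giving 1100b = 12 (the corrected value mentioned a few answers below)
print(node_bitmap(3, [4, 5], [2, 4, 5]))     # 12
# a leaf such as v4 (peer list [3]) that only decaps
print(node_bitmap(4, [4], [3]))              # 1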
>>
>>> This representation is very good, because M-PolKA can represent structures that are not exactly trees. For example, two branches that end in the same leaf.
>>> Example: If you have an extra link between v2 and v4, the representation of the multipath would be:
>>> tunnel domain-name 1.1.1.2 1.1.1.3 1.1.1.4, 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
>>> Is that right?
>> yesss, exactly... you got it right, you can describe arbitrary trees, and your above encoding should result in what you wanted to achieve....
>>
>>
>>>
>>>>> sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
>>> I didn't know you had this multicast pin! Super cool! Does it send the ICMP packet to all the leaves?
>>>
>> this command is basically the same pin(g) command, but we instruct it to wait for multi(ple) responses within the timeout range...
>>
>>
>>>>> for the encoding, i reserved bit0 indicating that local processing is needed (end of tunneling, decapping, etc)
>>>>> the rest of the bits indicate the need to forward to the peer in the srindex table, which, as an ordered list of
>>>>> peers, must be identical on all the nodes that executed the shortest path first...
>>>>> the routeid seems to be correctly encoded as we find 6 (1st and 2nd neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
>>> I don't know if I understood correctly... These are bits from the output bitstream of the mod operation, right?
>> yesss...
>>
>>> So, if bitstream is 110 at v3, it will forward to 1st and 2nd neighbors (v4 and v5, in this example).
>> exactly....
>>
>>> But how does it correlate the neighbors with the srindex table (that includes non-neighbors)?
>>>
>> just drop those examples, the output came from an intermediate state... :)
>> the correct encoding should have been 12 (1100), where we're addressing the 2nd and 3rd neighbors of v3...
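for illustration, the receive side of the same idea as a rough sketch (python, made-up names, not the freerouter code): a node takes the bitmap it got out of the mod operation and walks its ordered srindex peer list:

def process_bitmap(bitmap, peer_idxs):
    # bitmap: this node's share of the routeid (routeid mod its own nodeid poly)
    # peer_idxs: this node's peers as the ordered srindex list
    actions = []
    if bitmap & 1:
        actions.append("decap and route locally")
    for i, peer in enumerate(peer_idxs, start=1):
        if (bitmap >> i) & 1:
            actions.append("replicate towards srindex %d" % peer)
    return actions

print(process_bitmap(0b1100, [2, 4, 5]))   # v3: copies towards srindex 4 and 5
print(process_bitmap(0b0001, [3]))         # v4 or v5: just decap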
>>
>>> Regarding the routing, what are the structures we are reusing? How does FreeRouter keep the list of neighbors and compute the routeid that will produce the correct output
>>> bitstream in each node? I will explore the commits.
>> so finally i kept the srindex fields, and, while doing the shortest path, each index in that table got an ordered
>> list of neighbor indexes... assuming the link state database flooding finished completely, this list must be identical on
>> each router participating in an igp... and, when freerouter constructs the bitmap, it just converts the ips to
>> indexes over this ordered neighbor index table...
>>
>>>
>>> If we use that approach of configuring two (or more) nodeids when you have more than 16 igp peers, one needs to configure two (or more) loopbacks. Then, the pipeline would have
>>> to combine the bitstreams considering some ordering (the lower IP addr?). Also, it would have to check the number of available loopbacks that have mpolka enabled. Do you already
>>> have any plans for this?
>>>
>> agreed... so these sr indexes are bound to addresses, and addresses are bound to nodes...
>> so one clearly sees that it has to emit multiple bitmaps encoded for a single node...
>> right now, this check is not yet done.... the only thing i do right now is that if i have to send
>> to two interfaces (let's say v3 has a tunnel to v2 and v4) then i can use two completely
>> different routeids on the two different interfaces...
>>
>> regards,
>> cs
>>
>>
>>> Best regards,
>>> Cristina
>>> ________________________________________
>>> From: mc36 <>
>>> Sent: Thursday, February 17, 2022 19:06
>>> To: Cristina Klippel Dominicini; Rafael Silva Guimarães
>>> Cc:
>>> Subject: Re: [rare-dev] mpolka in rare
>>>
>>> hi,
>>> i've just covered mpolka with some test cases:
>>> https://github.com/mc36/freeRouter/commit/4caf6dc0657aade06d9cd38654b581e77465a971
>>> now i'll wait for your feedback before continuing with the dataplanes...
>>> regards,
>>> cs
>>>
>>>
>>> On 2/17/22 22:14, mc36 wrote:
>>>> hi,
>>>> sorry for the spam, but it forwards:
>>>>
>>>> sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
>>>> pinging 3.3.3.2, src=1.1.1.2, vrf=v2, cnt=1111, len=1111, tim=11, gap=0, ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
>>>> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>>>
>>>>
>>>> result=200%, recv/sent/lost/err=2222/1111/0/0, rtt min/avg/max/sum=0/0/2/12402, ttl min/avg/max=255/255/255
>>>> sid#
>>>>
>>>> the 200% success rate indicates that both v4 and v5 got the packets and they responded...
>>>>
>>>> this is the commit for this to happen:
>>>> https://github.com/mc36/freeRouter/commit/4a1d188521fa5a0fe8f8619c92f60dd44afa929e
>>>>
>>>> regards,
>>>> cs
>>>>
>>>>
>>>>
>>>>
>>>> On 2/17/22 21:11, mc36 wrote:
>>>>> hi,
>>>>> after applying the attached config, i see the following:
>>>>>
>>>>> sid(cfg-if)#show ipv4 srindex v2
>>>>> index prefix peers bytes
>>>>> 2 ::/0 0
>>>>> 3 1.1.1.3/32 2 4 5 0
>>>>> 4 1.1.1.4/32 3 0
>>>>> 5 1.1.1.5/32 3 0
>>>>>
>>>>> sid(cfg-if)#show ipv4 srindex v3
>>>>> index prefix peers bytes
>>>>> 2 1.1.1.2/32 3 0
>>>>> 3 ::/0 0
>>>>> 4 1.1.1.4/32 3 0
>>>>> 5 1.1.1.5/32 3 0
>>>>>
>>>>> sid(cfg-if)#show ipv4 srindex v4
>>>>> index prefix peers bytes
>>>>> 2 1.1.1.2/32 3 0
>>>>> 3 1.1.1.3/32 2 4 5 0
>>>>> 4 ::/0 0
>>>>> 5 1.1.1.5/32 3 0
>>>>>
>>>>> sid(cfg-if)#show ipv4 srindex v5
>>>>> index prefix peers bytes
>>>>> 2 1.1.1.2/32 3 0
>>>>> 3 1.1.1.3/32 2 4 5 0
>>>>> 4 1.1.1.4/32 3 0
>>>>> 5 ::/0 0
>>>>>
>>>>> sid(cfg-if)#
>>>>> sid(cfg-if)#show mpolka routeid tunnel2
>>>>> iface hop routeid
>>>>> hairpin11 2.2.2.2 00 00 00 00 00 00 00 00 00 00 74 90 0f 96 e9 fd
>>>>>
>>>>> index coeff poly crc equal
>>>>> 0 0001046a 13101 13101 true
>>>>> 1 0001046b 1732 1732 true
>>>>> 2 0001046d 2031 2031 true
>>>>> 3 00010473 6 6 true
>>>>> 4 00010475 1 1 true
>>>>> 5 0001047f 1 1 true
>>>>> 6 00010483 13881 13881 true
>>>>> 7 00010489 55145 55145 true
>>>>> 8 00010491 38366 38366 true
>>>>> 9 0001049d 11451 11451 true
>>>>>
>>>>> sid(cfg-if)#
>>>>>
>>>>> the topology is the following:
>>>>>
>>>>>       v4
>>>>> v2-v3<
>>>>>       v5
>>>>>
>>>>> the tunnel is configured to point to v4 and v5
>>>>>
>>>>> for the encoding, i reserved bit0 indicating that local processing is needed (end of tunneling, decapping, etc)
>>>>> the rest of the bits indicate the need to forward to the peer in the srindex table, which, as an ordered list of
>>>>> peers, must be identical on all the nodes that executed the shortest path first...
>>>>>
>>>>> the routeid seems to be correctly encoded as we find 6 (1st and 2nd neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
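for completeness, a from-scratch sketch of how such a routeid can be put together as i understand the polka/mpolka scheme (python, toy degree-4 polynomials picked just for the example, not the real freerouter coefficients): the routeid is the gf(2) chinese-remainder combination of (nodeid polynomial, wanted bitmap) pairs, so that routeid mod nodeid hands each node its own bitmap (here using 1100b for nodeid 3, the corrected value mentioned earlier on this page):

def gf2_mul(a, b):
    # carry-less multiply of two gf(2) polynomials held in integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    # quotient and remainder of gf(2) polynomial division
    q, dlen = 0, b.bit_length()
    while a.bit_length() >= dlen:
        shift = a.bit_length() - dlen
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_inv(a, m):
    # inverse of a modulo m in gf(2)[x] via extended euclid (gcd assumed 1)
    r0, r1, t0, t1 = m, a, 0, 1
    while r1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ gf2_mul(q, t1)
    return t0

def routeid_crt(pairs):
    # pairs: (nodeid polynomial, wanted bitmap); polynomials pairwise coprime
    prod = 1
    for s, _ in pairs:
        prod = gf2_mul(prod, s)
    r = 0
    for s, o in pairs:
        others, _ = gf2_divmod(prod, s)           # product of the other moduli
        inv = gf2_inv(gf2_divmod(others, s)[1], s)
        r ^= gf2_mul(gf2_mul(o, others), inv)
    return gf2_divmod(r, prod)[1]

# toy irreducible degree-4 polys standing in for v3, v4, v5, and the wanted
# bitmaps: 1100b (2nd and 3rd peer) at v3, 1 (decap) at v4 and at v5
wanted = [(0b10011, 0b1100), (0b11001, 0b0001), (0b11111, 0b0001)]
rid = routeid_crt(wanted)
for s, o in wanted:
    assert gf2_divmod(rid, s)[1] == o             # every node gets its bitmap back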
>>>>>
>>>>> next steps will be to make the forwarding happen, then some test cases, and finally the dataplanes...
>>>>>
>>>>> any opinion?
>>>>>
>>>>> thanks,
>>>>> cs
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 2/16/22 15:44, mc36 wrote:
>>>>>> hi,
>>>>>>
>>>>>> On 2/16/22 15:22, Cristina Klippel Dominicini wrote:
>>>>>>>
>>>>>>> Hi Csaba,
>>>>>>>
>>>>>>> Thanks for the feedback! We believe M-PolKA can tackle interesting use cases, and would be great to have it running on FreeRouter.
>>>>>>>
>>>>>>
>>>>>> yeahh, saw some in the paper and i'm also interested.... :)
>>>>>>
>>>>>>>>> my first impression was that after the mod operation, it basically does what bier does, that is, we take the result, and interpret it as an outport bitmap, don't we?
>>>>>>> Yes, exactly! We just changed the meaning of the portid polynomial and the pipeline for cloning the packets according to the bitmap. It will be really great if we can reuse
>>>>>>> part of the bier implementation for that end. Do you think we can do that for both freeRouter and Tofino? Then, we could run some experiments comparing BIER with m-PolKA :-)
>>>>>>>
>>>>>>
>>>>>> hopefully you'll be able to do that...
>>>>>>
>>>>>>
>>>>>>>>> and this is where we hit a limitation: depending on the size of the crc in use, we can only describe 16 output ports, which is clearly not enough...
>>>>>>> Is it possible to use CRC32 for M-PolKA's implementation in FreeRouter?
>>>>>>>
>>>>>>
>>>>>> surely yess, we can use crc32, that one was also made parameterizable back when polka was introduced... :)
>>>>>> but that would reduce the core nodes expressible in the routeid by half...
>>>>>>
>>>>>>
>>>>>>>>> my idea to overcome the above, what if we interpret mpolka mod result as a bitmap to this index table? it then raises the limitation to 16 igp neighbors per core node, which
>>>>>>>>> is more friendly...
>>>>>>>
>>>>>>> As we are already bound to the SR indexes, I think it is a reasonable reinterpretation.
>>>>>>> Another simple way would be to have two nodeids per switch (or more), for example. Then, with the same routeid we could address half of the ports with nodeid1 and the other
>>>>>>> half with nodeid2. This would incur two CRC operations to generate the bitmap.
>>>>>>> We could also explore some other encoding techniques for the bitmap. Today is our weekly meeting at Ufes, so we will discuss with the group about the possibilities, and we give
>>>>>>> you a feedback on this subject.
>>>>>>>
>>>>>>
>>>>>> imho absolutely this is the way to follow instead of doing crc32,
>>>>>> and one can easily have two loopbacks if one has more than 16 igp peers...
>>>>>>
>>>>>>
>>>>>>>>> another implementation idea is to use a different ethertype for mpolka to not confuse the unicast or multicast packets on the wire as they'll differ in handling...
>>>>>>> Agreed! We also have some other ideas for failure protection and chaining that would change the header, and consequently, would need a different version code.
>>>>>>>
>>>>>>
>>>>>> fine... then i'll wait until you discuss with your colleagues and then i'll proceed with adding mpolka...
>>>>>> since then i've done some digging into the code and imho mpolka will use bier-id instead of srid because
>>>>>> that beast already populates a thing called bfrlist, which is the per node peer index table we need for mpolka
>>>>>> to interpret the bitmap over... it's just a config thing like the regular polka and sr case...
>>>>>> after this, the initial version will be able to address the multihoming use case you discuss
>>>>>> in your paper, moreover it'll be able to do the (iptv) headend usecase from the bier tests...
>>>>>>
>>>>>> regards,
>>>>>> cs
>>>>>>
>>>>>>
>>>>>>> Best regards,
>>>>>>> Cristina
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: mc36 <>
>>>>>>> Sent: Wednesday, February 16, 2022 03:21
>>>>>>> To: Cristina Klippel Dominicini; Rafael Silva Guimarães
>>>>>>> Cc:
>>>>>>> Subject: mpolka in rare
>>>>>>>
>>>>>>> hi,
>>>>>>> i went through your mpolka paper, first of all, congrats, nice work!
>>>>>>> my first impression was that after the mod operation, it basically does what bier
>>>>>>> does, that is, we take the result, and interpret it as an outport bitmap, don't we?
>>>>>>> and this is where we hit a limitation: depending on the size of the crc in use,
>>>>>>> we can only describe 16 output ports, which is clearly not enough...
>>>>>>> in freerouter, polka is bound to segment routing ids, and the result of the mod
>>>>>>> is not a port but an sr index... my idea to overcome the above, what if we interpret
>>>>>>> mpolka mod result as a bitmap to this index table? it then raises the limitation to
>>>>>>> 16 igp neighbors per core node, which is more friendly...
>>>>>>> another implementation idea is to use a different ethertype for mpolka to not
>>>>>>> confuse the unicast or multicast packets on the wire as they'll differ in handling...
>>>>>>> after all, i found it feasible both for software and dataplane implementations,
>>>>>>> so it could become a drop-in replacement for the current ip multicast over bier...
>>>>>>> any opinion?
>>>>>>> thanks,
>>>>>>> cs
>>>>>>>
>>>>>>>
>>>
Best regards,
Everson Scherrer Borges