
Re: [RARE-users] freerouter elsewhere


  • From: mc36 <>
  • To: "Moises R. N. Ribeiro" <>
  • Cc: "" <>
  • Subject: Re: [RARE-users] freerouter elsewhere
  • Date: Sat, 19 Feb 2022 17:06:23 +0100

hi,
it was easy-peasy, but there was a lot to follow on the road here:
https://github.com/mc36/freeRouter/commit/34f41f5bf6ae6c15b4fae33341cc1c9600b47ac8
what you get here is the basic encapsulation for the thing cisco calls
trustsec...
the tests cover the basic packet io and the interop with an ios-xe vm...
the next steps will be to introduce some matching logic to acls and
policy-maps
for it, then finally the dataplane support for the encapsulation as well...
it'll nicely complete the stateful firewall recently introduced to the
dataplanes... :)
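
as a rough picture of what such an encapsulation looks like on the wire, here
is a tiny python sketch; the field layout follows public descriptions of cisco
meta data inline tagging (ethertype 0x8909 carrying a 16-bit security group
tag) and is my assumption, not a copy of the freerouter code; the
authoritative bits are in the commit above:

import struct

CMD_ETHERTYPE = 0x8909  # ethertype publicly documented for cisco meta data

def encap_sgt(inner_ethertype: int, payload: bytes, sgt: int) -> bytes:
    # assumed layout: version (1), length (1), sgt option type (2),
    # sgt value (2); the original ethertype follows so parsing can resume
    cmd = struct.pack('!HBBHH', CMD_ETHERTYPE, 0x01, 0x01, 0x0001, sgt & 0xffff)
    return cmd + struct.pack('!H', inner_ethertype) + payload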
finally i just added you to the hall of fame
(http://www.freertr.net/greet.html)
for pushing this idea to the team.... :)
regards,
cs


On 2/19/22 14:48, mc36 wrote:
hi,
sounds interesting... yeahhh you can count on us if you have any missing
features or questions during the planning/implementation...
right now, we do support what cisco calls performance-measures, that is, a
configurable regular latency check fed back to igps
to be considered during the spf... in practice, it halves the latency
compared to clearnet on my vpn overlay across europe...
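
to illustrate the concept (a minimal sketch, not freerouter's code; the
callback name is made up): probe the neighbor's round-trip time on a timer,
damp the samples, and hand the result to the igp as the link metric spf will
consider:

import statistics, time

def probe_rtt_ms(send_echo_and_wait) -> float:
    # send_echo_and_wait stands in for an echo probe to the igp neighbor
    t0 = time.monotonic()
    send_echo_and_wait()
    return (time.monotonic() - t0) * 1000.0

def igp_metric(samples: list[float], base_cost: int = 10) -> int:
    # the median damps outliers so a single jitter spike does not flip
    # the metric (and hence trigger a network-wide spf re-run)
    return base_cost + round(statistics.median(samples))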
the only missing piece for your sase plans is the role based access model...
but at least i got something to play with for the upcoming weeks...:)
regards,
cs



On 2/19/22 14:01, Moises R. N. Ribeiro wrote:
Csaba,

I am teasing RARE's team (in particular Eoin and Frederic) to help Cape Verde (the island country off the west coast of Africa where the new BELLA cable bridging Europe and South America hops) to create their own NREN. Everything is at an early stage... we have a team in Cape Verde; GEANT is involved/interested, and so are WACREN (the West African NREN federation) and the Mozambique NREN.

I am proposing a clean-slate approach: assuming they already have reasonable connectivity, start with simple services (like EDUROAM) for community building, but done in a cloud-native way from day one. Frederic suggested RARE solutions (including Freerouter and appliances) would fit my plans well to design the whole thing as a SASE:
https://telcocloudbridge.com/blog/what-is-sase-a-beginners-tutorial/

I think there is a window of opportunity here for a pilot:
https://community.geant.org/community-programme-portfolio/innovation-programme/

As a seed for that pilot we can use a past development, the following project
under an "Advanced Cyberinfrastructure" grant from the Brazilian NREN (RNP).
(Sorry, it is in Portuguese, but the technical content is readable.)
https://wrnp.rnp.br/sites/wrnp2019/files/NosFVerato.pdf
Short Talk (skip to 19:40):
https://eduplay.rnp.br/portal/video/51981

In the roadshow we also did a demo using a 2000 km VLAN connecting to our virtual CPE, which led to the internal RADIUS equipment providing EDUROAM. Here, however, VPN and SD-WAN would come into play, using multi-homing to virtual PoPs over the public Internet.

Here is how I see what we could do:
-------------------------------
Objective:

Multi-NREN collaboration (Brazil and Mozambique + WACREN helping Cape Verde) with GEANT in order to replicate/expand/evolve the EDUROAM-on-(private)-cloud deployment (with a virtualized PoP) we did in Brazil.

Novelty:

Within 5/6 months, demonstrate functional aspects:

1) "Connectivity" will be provided this time by overlaying (a VPN) using the public internet at Cape Verde University, with a wireguard-native appliance from RARE (freerouter in control);
2) dual-homing: a private cloud in Brazil (RNP's PoP-ES) doing the virtual PoP tasks (AAA and firewall) and "another" virtual PoP "elsewhere" (that could be private or public and sit in Africa, Europe, the US or even in Brazil);
3) intelligent traffic balancing (application-, location- and "link"-state-aware
routing).

Extension (reaching up to 9 months):
Sure, multiple sources of extra latency will be there... and performance evaluation is important. We could evaluate performance in detail and test how to provide a set of "services" such as IaaS (VMs to users), PaaS (such as IoT platforms) and SaaS (VC), for instance, besides connectivity.

------------------------------------------------

In case you have any reservation/comment/suggestion, please let us know.
Evidently, you're more than welcome to join us.

Regards,
Moises

----- Original message -----
From: "cs" <>
To: "Moises Renato Nunes Ribeiro" <>
Sent: Saturday, 19 February 2022 8:59:29
Subject: Re: [rare-dev] mpolka in rare

:) you're welcome!

On 2/18/22 19:45, Moises R. N. Ribeiro wrote:

Polstina & CsaKA.... two as one... thanks once again for making things happen!

----- Original message -----
From: "cs" <>
To: "rare-dev" <>, "Cristina Klippel Dominicini"
<>, "rafaelg" <>
Cc: "Moises Renato Nunes Ribeiro" <>, "Magnos Martinello"
<>
Sent: Friday, 18 February 2022 14:35:12
Subject: Re: [rare-dev] mpolka in rare

hi,
i go inline...
regards,
cs

On 2/18/22 17:53, Cristina Klippel Dominicini wrote:
Hi Csaba,

This is really great news \o/

I talked with the group and the design choices seem very good for an initial
prototype. Thank you very much! We are going to execute the testcases and
provide feedback :-D

Some initial questions:

tunnel domain-name 1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
This represents the links of the multicast tree, and the syntax "1.1.1.4 1.1.1.4" indicates a leaf, right?
exactly.. the general format is the following:
<encode for this address> <encode for this neighbor from the index table>+ ,
with the addition that if the neighbor address is itself, which is obviously
not in the index table,
then it'll set bit0 to indicate 'and also process locally'....
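
read as a toy parser (my illustration of the syntax just described, not
freerouter code), the grammar is simply comma-separated adjacency lists:

def parse_tree(spec: str) -> dict[str, list[str]]:
    # '1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , ...' -> {node: [next-hops]}
    # a node listing itself means 'also process locally', i.e. a decap leaf
    tree = {}
    for group in spec.split(','):
        node, *hops = group.split()
        tree[node] = hops
    return tree

on the example above this yields {'1.1.1.3': ['1.1.1.4', '1.1.1.5'], ...} with
1.1.1.4 and 1.1.1.5 pointing at themselves, i.e. the two leaves.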

This representation is very good, because M-PolKA can represent structures
that are not exactly trees. For example, two branches that end in the same
leaf.
Example: If you have an extra link between v2 and v4, the representation of the multipath would be:
tunnel domain-name 1.1.1.2 1.1.1.3 1.1.1.4 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
Is that right?
yesss, exactly... you got it right, you can describe arbitrary trees, and
your above encoding should result in what you wanted to achieve....



    sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
I didn't know you had this multicast pin! Super cool! Does it send the ICMP
packet to all the leaves?

this command is basically the same pin(g) command, but we instruct it to wait
for multi(ple) responses within the timeout range...
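
in other words (a toy sketch of the described semantics, with made-up callback
names): instead of returning at the first reply, the prober keeps collecting
replies until the timeout expires, so one request can be answered by several
leaves:

import time

def collect_replies(recv_with_timeout, timeout_s: float) -> list:
    # gather every reply that arrives before the deadline
    deadline = time.monotonic() + timeout_s
    replies = []
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        reply = recv_with_timeout(remaining)  # returns None on timeout
        if reply is not None:
            replies.append(reply)
    return replies

this is also why the summary line can report more than 100%: 1111 requests
answered by two leaves yields 2222 replies, i.e. 200%.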


for the encoding, i reserved bit0 indicating that local processing is needed
(end of tunneling, decapping, etc);
the rest of the bits indicate the need to forward to the peer in the srindex
table, which, as an ordered list of
peers, must be identical on all the nodes that executed the shortest path first...
    the routeid seems to be correctly encoded as we find 6 (1st and 2nd
neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
I don't know if I understood correctly... These are bits from the output
bitstream of the mod operation, right?
yesss...

So, if bitstream is 110 at v3, it will forward to 1st and 2nd neighbors (v4
and v5, in this example).
exactly....

But how does it correlate the neighbors with the srindex table (that
includes non-neighbors)?

just drop those examples, the output came from an intermediate state... :)
the correct encoding should have been 12 (1100), then we're addressing the 2nd
and 3rd neighbors of v3...
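
as a reading aid, a minimal sketch of the per-node step (not freerouter code;
polynomials are plain ints with bit k holding the coefficient of x^k):

def gf2_mod(dividend: int, divisor: int) -> int:
    # remainder of carry-less (gf(2)) polynomial division
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def forward_bits(routeid: int, nodeid: int) -> tuple[bool, list[int]]:
    # returns (decap locally?, 1-based positions of srindex peers to copy to)
    bitmap = gf2_mod(routeid, nodeid)
    local = bool(bitmap & 1)                    # bit0: also process locally
    peers = [i for i in range(1, bitmap.bit_length())
             if bitmap >> i & 1]                # bit i: i-th ordered peer
    return local, peers

a remainder of 12 (binary 1100) thus comes back as (False, [2, 3]): no local
decap, clone to the 2nd and 3rd srindex peers, matching the corrected example
above.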

Regarding the routing, what are the structures we are reusing? How does FreeRouter keep the list of neighbors and compute the routeid that will produce the correct output bitstream in each node? I will explore the commits.
so finally i kept the srindex fields, and, while doing the shortest path,
each index in that table got an ordered
list of neighbor indexes... assuming the link state database flooding
finished completely, it must be identical on
each router participating in an igp... and, when freerouter constructs the
bitmap, it just converts the ips to
indexes over this ordered neighbor index table...
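
the controller-side construction can be sketched with the chinese remainder
theorem over gf(2)[x] (my illustration of the polka-style encoding, not the
actual freerouter routine; gf2_mod is from the sketch above, the nodeid
polynomials must be pairwise coprime, and each bitmap shorter than its nodeid):

def gf2_mul(a: int, b: int) -> int:
    # carry-less polynomial multiplication over gf(2)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a: int, b: int) -> tuple[int, int]:
    q, blen = 0, b.bit_length()
    while a.bit_length() >= blen:
        shift = a.bit_length() - blen
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_inv(a: int, m: int) -> int:
    # inverse of a modulo m in gf(2)[x], assuming gcd(a, m) == 1
    r0, r1, s0, s1 = m, a, 0, 1
    while r1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, s0 ^ gf2_mul(q, s1)
    return s0

def crt_routeid(pairs: list[tuple[int, int]]) -> int:
    # pairs = [(nodeid_poly, bitmap)]; afterwards
    # gf2_mod(routeid, nodeid_poly) == bitmap holds for every pair
    total = 1
    for m, _ in pairs:
        total = gf2_mul(total, m)
    rid = 0
    for m, b in pairs:
        others, _ = gf2_divmod(total, m)       # product of the other nodeids
        inv = gf2_inv(gf2_divmod(others, m)[1], m)
        rid ^= gf2_mul(b, gf2_mul(others, inv))
    return gf2_divmod(rid, total)[1]           # reduce below the product

checking each pair with gf2_mod(routeid, nodeid) recovers its bitmap, which is
exactly what the coeff/crc/equal columns of 'show mpolka routeid' verify
further down the thread.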


If we use that approach of configuring two (or more) nodeids when you have more than 16 igp peers, one needs to configure two (or more) loopbacks. Then, the pipeline would have to combine the bitstreams considering some ordering (the lower IP addr?). Also, it would have to check the number of available loopbacks that have mpolka enabled. Do you already have any plans for this?

agreed... so these sr indexes are bound to addresses, and addresses are bound
to nodes...
so one clearly sees that it has to emit multiple bitmaps encoded for a
single node...
right now, this check is not yet done.... the only check i do right now is that
if i have to send to two interfaces (let's say v3 has a tunnel to v2 and v4)
then i can use two completely different routeids on the two different
interfaces...
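
for completeness, a speculative sketch of the multi-nodeid idea discussed
above (pure illustration; neither the ordering rule nor the names exist in
freerouter, and bit0 of each slice would need special handling): give a node
several nodeids, fix an order (say, ascending loopback address), and let each
mod result cover the next 16-peer slice of the srindex neighbor list:

def combined_bitmap(routeid: int, nodeids_in_fixed_order: list[int]) -> int:
    # slice k covers peers 16*k .. 16*k+15; gf2_mod as in the sketch above
    full = 0
    for slot, nid in enumerate(nodeids_in_fixed_order):
        full |= gf2_mod(routeid, nid) << (16 * slot)
    return full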

regards,
cs


Best regards,
Cristina
________________________________________
From: mc36 <>
Sent: Thursday, 17 February 2022 19:06
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: Re: [rare-dev] mpolka in rare

hi,
i've just covered mpolka with some test cases:
https://github.com/mc36/freeRouter/commit/4caf6dc0657aade06d9cd38654b581e77465a971
now i'll wait for your feedback before continuing with the dataplanes...
regards,
cs


On 2/17/22 22:14, mc36 wrote:
hi,
sorry for the spam, but it forwards:

sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
pinging 3.3.3.2, src=1.1.1.2, vrf=v2, cnt=1111, len=1111, tim=11, gap=0,
ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

result=200%, recv/sent/lost/err=2222/1111/0/0, rtt
min/avg/max/sum=0/0/2/12402, ttl min/avg/max=255/255/255
sid#

the 200% success rate indicates that both v4 and v5 got the packets and they
responded...

this is the commit that made this happen:
https://github.com/mc36/freeRouter/commit/4a1d188521fa5a0fe8f8619c92f60dd44afa929e

regards,
cs




On 2/17/22 21:11, mc36 wrote:
hi,
after applying the attached config, i see the following:

sid(cfg-if)#show ipv4 srindex v2
index  prefix      peers  bytes
2      ::/0               0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v3
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      ::/0               0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v4
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      ::/0               0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v5
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      ::/0               0

sid(cfg-if)#
sid(cfg-if)#show mpolka routeid tunnel2
iface      hop      routeid
hairpin11  2.2.2.2  00 00 00 00 00 00 00 00 00 00 74 90 0f 96 e9 fd

index  coeff     poly   crc    equal
0      0001046a  13101  13101  true
1      0001046b  1732   1732   true
2      0001046d  2031   2031   true
3      00010473  6      6      true
4      00010475  1      1      true
5      0001047f  1      1      true
6      00010483  13881  13881  true
7      00010489  55145  55145  true
8      00010491  38366  38366  true
9      0001049d  11451  11451  true

sid(cfg-if)#

the topology is the following:

         v4
v2-v3<
         v5

the tunnel is configured to point to v4 and v5

for the encoding, i reserved bit0 indicating that local processing is needed
(end of tunneling, decapping, etc);
the rest of the bits indicate the need to forward to the peer in the srindex
table, which, as an ordered list of
peers, must be identical on all the nodes that executed the shortest path first...

the routeid seems to be correctly encoded as we find 6 (1st and 2nd
neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...

next steps will be to make the forwarding happen, then some test cases, and
finally the dataplanes...

any opinion?

thanks,
cs




On 2/16/22 15:44, mc36 wrote:
hi,

On 2/16/22 15:22, Cristina Klippel Dominicini wrote:

Hi Csaba,

Thanks for the feedback! We believe M-PolKA can tackle interesting use cases,
and it would be great to have it running on FreeRouter.


yeahh, saw some in the paper and i'm also interested.... :)

my first impression was that after the mod operation, it basically does what
bier does, that is, we take the result and interpret it as an outport
bitmap, don't we?
Yes, exactly! We just changed the meaning of the portid polynomial and the
pipeline for cloning the packets according to the bitmap. It will be really
great if we can reuse part of the bier implementation to that end. Do you
think we can do that for both freeRouter and Tofino? Then, we could run some
experiments comparing BIER with m-PolKA :-)


hopefully you'll be able to do that...


and this is where we hit a limitation, depending on the size of the crc in
use, we can only describe 16 output ports, which is clearly not enough...
Is it possible to use CRC32 for M-PolKA's implementation in FreeRouter?


surely yess, we can use crc32, that one was also made parameterizable back
when polka was introduced... :)
but that would reduce the core nodes expressible in the routeid by half...


my idea to overcome the above: what if we interpret the mpolka mod result as a
bitmap over this index table? it then raises the limitation to 16 igp neighbors
per core node, which
is more friendly...

As we are already bound to the SR indexes, I think it is a reasonable
reinterpretation.
Another simple way would be to have two nodeids per switch (or more), for
example. Then, with the same routeid we could address half of the ports with
nodeid1 and the other half with nodeid2. This would incur two CRC operations
to generate the bitmap.
We could also explore some other encoding techniques for the bitmap. Today is
our weekly meeting at Ufes, so we will discuss the possibilities with the
group and give you feedback on this subject.


imho this is absolutely the way to follow instead of doing crc32,
and one can easily have two loopbacks if one has more than 16 igp peers...


another implementation idea is to use a different ethertype for mpolka so as
not to confuse the unicast and multicast packets on the wire, as they'll
differ in handling...
Agreed! We also have some other ideas for failure protection and chaining
that would change the header, and consequently, would need a different
version code.


fine... then i'll wait until you discuss with your colleagues and then i'll
proceed with adding mpolka...
since then i've done some digging into the code and imho mpolka will use
bier-id instead of srid because
that beast already populates a thing called bfrlist, which is the per node
peer index table we need for mpolka
to interpret the bitmap over... it's just a config thing like the regular
polka and sr case...
after this, the initial version will be able to address the multihoming use
case you discuss
in your paper; moreover it'll be able to do the (iptv) headend usecase from
the bier tests...

regards,
cs


Best regards,
Cristina

________________________________________
From: mc36 <>
Sent: Wednesday, 16 February 2022 03:21
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: mpolka in rare

hi,
i went through your mpolka paper, first of all, congrats, nice work!
my first impression was that after the mod operation, it basically does what
bier
does, that is, we take the result and interpret it as an outport bitmap,
don't we?
and this is where we hit a limitation, depending on the size of the crc in use,
we can only describe 16 output ports, which is clearly not enough...
in freerouter, polka is bound to segment routing ids, and the result of the
mod
is not a port but an sr index... my idea to overcome the above: what if we
interpret
the mpolka mod result as a bitmap over this index table? it then raises the
limitation to
16 igp neighbors per core node, which is more friendly...
another implementation idea is to use a different ethertype for mpolka so as
not to confuse the unicast and multicast packets on the wire, as they'll
differ in handling...
after all, i found it feasible both for software and dataplane implementations,
so it could become a drop-in replacement for the current ip multicast over
bier...
any opinion?
thanks,
cs







