
Re: [rare-dev] mpolka in rare


  • From: mc36 <>
  • To: Cristina Klippel Dominicini <>, Rafael Silva Guimarães <>
  • Cc:
  • Subject: Re: [rare-dev] mpolka in rare
  • Date: Thu, 17 Feb 2022 21:11:31 +0100

hi,
after applying the attached config, i see the following:

sid(cfg-if)#show ipv4 srindex v2
index  prefix      peers  bytes
2      ::/0               0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v3
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      ::/0               0
4      1.1.1.4/32  3      0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v4
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      ::/0               0
5      1.1.1.5/32  3      0

sid(cfg-if)#show ipv4 srindex v5
index  prefix      peers  bytes
2      1.1.1.2/32  3      0
3      1.1.1.3/32  2 4 5  0
4      1.1.1.4/32  3      0
5      ::/0               0

sid(cfg-if)#
sid(cfg-if)#show mpolka routeid tunnel2
iface      hop      routeid
hairpin11  2.2.2.2  00 00 00 00 00 00 00 00 00 00 74 90 0f 96 e9 fd

index  coeff     poly   crc    equal
0      0001046a  13101  13101  true
1      0001046b  1732   1732   true
2      0001046d  2031   2031   true
3      00010473  6      6      true
4      00010475  1      1      true
5      0001047f  1      1      true
6      00010483  13881  13881  true
7      00010489  55145  55145  true
8      00010491  38366  38366  true
9      0001049d  11451  11451  true

sid(cfg-if)#
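
for reference, the poly column above is just the routeid taken modulo each
node coefficient over GF(2); a minimal python sketch that should reproduce
it (assuming a plain carry-less remainder with no reflection or final xor;
gf2_mod is a hypothetical helper, not the freerouter code):

def gf2_mod(dividend: int, poly: int) -> int:
    # polynomial remainder over GF(2): xor the divisor in under the top
    # bit of the running remainder until it is shorter than the divisor
    plen = poly.bit_length()
    while dividend.bit_length() >= plen:
        dividend ^= poly << (dividend.bit_length() - plen)
    return dividend

routeid = 0x74900f96e9fd  # the non-zero tail of the routeid shown above

# node coefficients and the remainders expected from the table above
for coeff, expect in [(0x00010473, 6), (0x00010475, 1), (0x0001047f, 1)]:
    got = gf2_mod(routeid, coeff)
    print(f"{coeff:08x} -> {got} (expected {expect})")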

the topology is the following:

       v4
v2-v3<
       v5

the tunnel is configured to branch at v3 and terminate at v4 and v5

for the encoding, i reserved bit0 to indicate that local processing is
needed (end of tunnel, decapping, etc); the rest of the bits indicate the
need to forward to a peer in the srindex table, which, as an ordered list
of peers, must be identical on all the nodes that executed the shortest
path first...

the routeid seems to be correctly encoded, as we find 6 (1st and 2nd
neighbors) for nodeid 3, and 1 (decap) for nodeids 4 and 5...
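
to restate the scheme in code, a tiny sketch of how a node would act on
its remainder (decode_bitmap is a hypothetical helper, not the freerouter
code):

def decode_bitmap(rem: int, peers: list) -> tuple:
    # bit0 is reserved for local processing (decap, end of tunnel);
    # bit i (i >= 1) means clone the packet to the i-th peer of the
    # node's ordered srindex peer list
    decap = bool(rem & 1)
    clone_to = [p for i, p in enumerate(peers, start=1) if rem >> i & 1]
    return decap, clone_to

print(decode_bitmap(6, ["peerA", "peerB", "peerC"]))  # (False, ['peerA', 'peerB'])
print(decode_bitmap(1, ["peerA"]))                    # (True, [])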

next steps will be to get the forwarding working, then some test cases,
and finally the dataplanes...

any opinion?

thanks,
cs




On 2/16/22 15:44, mc36 wrote:
hi,

On 2/16/22 15:22, Cristina Klippel Dominicini wrote:

Hi Csaba,

Thanks for the feedback! We believe M-PolKA can tackle interesting use
cases, and it would be great to have it running on FreeRouter.


yeahh, saw some in the paper and i'm also interested.... :)

my first impression was that after the mod operation, it basically does
what bier does, that is, we take the result and interpret it as an
outport bitmap, don't we?
Yes, exactly! We just changed the meaning of the portid polynomial and the
pipeline for cloning the packets according to the bitmap. It would be
really great if we can reuse part of the bier implementation to that end.
Do you think we can do that for both freeRouter and Tofino? Then, we could
run some experiments comparing BIER with m-PolKA :-)


hopefully you'll be able to do that...


and this is where we hit a limitation: depending on the size of the crc
in use, we can only describe 16 output ports, which is clearly not
enough...
Is it possible to use CRC32 for M-PolKA's implementation in FreeRouter?


surely yes, we can use crc32; that one was also made parameterizable back
when polka was introduced... :)
but that would halve the number of core nodes expressible in the routeid,
since each coefficient gets twice as long...


my idea to overcome the above: what if we interpret the mpolka mod result
as a bitmap into this index table? it then raises the limitation to 16
igp neighbors per core node, which is more friendly...

As we are already bound to the SR indexes, I think it is a reasonable
reinterpretation.
Another simple way would be to have two nodeids per switch (or more), for
example. Then, with the same routeid, we could address half of the ports
with nodeid1 and the other half with nodeid2. This would incur two CRC
operations to generate the bitmap.
We could also explore some other encoding techniques for the bitmap.
Today is our weekly meeting at Ufes, so we will discuss the possibilities
with the group and give you feedback on this subject.


imho this is absolutely the way to follow instead of doing crc32, and one
can easily have two loopbacks if one has more than 16 igp peers...
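
to illustrate the two-nodeid idea with a toy example (hypothetical values,
just a sketch): a 32-bit port bitmap is cut in half, each half becomes the
residue assigned to one of the node's two nodeids when the routeid is
built, and the node recovers the full bitmap with two mod operations:

WANTED = 0x000a0005          # desired 32-bit output bitmap at this node
low16  = WANTED & 0xffff     # residue to encode against nodeid1
high16 = WANTED >> 16        # residue to encode against nodeid2

# at forwarding time the node computes routeid mod nodeid1 and routeid
# mod nodeid2 (the two crc operations) and glues the halves together:
def rebuild(rem1: int, rem2: int) -> int:
    return rem2 << 16 | rem1

assert rebuild(low16, high16) == WANTED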


another implementation idea is to use a different ethertype for mpolka so
as not to confuse the unicast and multicast packets on the wire, as
they'll differ in handling...
Agreed! We also have some other ideas for failure protection and chaining
that would change the header, and consequently, would need a different
version code.


fine... then i'll wait until you discuss with your colleagues, and then
i'll proceed with adding mpolka...
since then i've done some digging into the code, and imho mpolka will use
the bier-id instead of the sr-id, because that beast already populates a
bfrlist, which is the per-node peer index table we need for mpolka to
interpret the bitmap over... it's just a config thing, like in the
regular polka and sr case...
after this, the initial version will be able to address the multihoming
use case you discuss in your paper; moreover, it'll be able to do the
(iptv) headend usecase from the bier tests...

regards,
cs


Best regards,
Cristina

________________________________________
From: mc36 <>
Sent: Wednesday, February 16, 2022 03:21
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: mpolka in rare

hi,
i went through your mpolka paper; first of all, congrats, nice work!
my first impression was that after the mod operation, it basically does
what bier does, that is, we take the result and interpret it as an
outport bitmap, don't we?
and this is where we hit a limitation: depending on the size of the crc
in use, we can only describe 16 output ports, which is clearly not
enough...
in freerouter, polka is bound to segment routing ids, and the result of
the mod is not a port but an sr index... my idea to overcome the above:
what if we interpret the mpolka mod result as a bitmap into this index
table? it then raises the limitation to 16 igp neighbors per core node,
which is more friendly...
another implementation idea is to use a different ethertype for mpolka so
as not to confuse the unicast and multicast packets on the wire, as
they'll differ in handling...
after all, i found it feasible both for software and dataplane
implementations, so it could become a drop-in replacement for the current
ip multicast over bier...
any opinion?
thanks,
cs






conf t
router lsrp4 2
vrf v2
router-id 1.1.1.2
segrout 10 2
redistribute connected
exit
interface loopback2
no description
vrf forwarding v2
ipv4 address 1.1.1.2 255.255.255.255
exit
router lsrp4 3
vrf v3
router-id 1.1.1.3
segrout 10 3
redistribute connected
exit
interface loopback3
vrf forwarding v3
ipv4 address 1.1.1.3 255.255.255.255
exit
router lsrp4 4
vrf v4
router-id 1.1.1.4
segrout 10 4
redistribute connected
exit
interface loopback4
vrf forwarding v4
ipv4 address 1.1.1.4 255.255.255.255
exit
router lsrp4 5
vrf v5
router-id 1.1.1.5
segrout 10 5
redistribute connected
exit
interface loopback5
vrf forwarding v5
ipv4 address 1.1.1.5 255.255.255.255
exit
hairpin 1
exit
interface hairpin11
no description
vrf forwarding v2
ipv4 address 2.2.2.1 255.255.255.252
router lsrp4 2 enable
mpolka enable 2 66666 10
mpls enable
exit
interface hairpin12
no description
vrf forwarding v3
ipv4 address 2.2.2.2 255.255.255.252
router lsrp4 3 enable
mpolka enable 3 66666 10
mpls enable
exit
hairpin 2
exit
interface hairpin21
no description
vrf forwarding v3
ipv4 address 2.2.2.5 255.255.255.252
router lsrp4 3 enable
mpolka enable 3 66666 10
mpls enable
exit
interface hairpin22
no description
vrf forwarding v4
ipv4 address 2.2.2.6 255.255.255.252
router lsrp4 4 enable
mpolka enable 4 66666 10
mpls enable
exit
hairpin 3
exit
interface hairpin31
no description
vrf forwarding v3
ipv4 address 2.2.2.9 255.255.255.252
router lsrp4 3 enable
mpolka enable 3 66666 10
mpls enable
exit
interface hairpin32
no description
vrf forwarding v5
ipv4 address 2.2.2.10 255.255.255.252
router lsrp4 5 enable
mpolka enable 5 66666 10
mpls enable
exit
int tun2
tunnel vrf v2
tunnel source loopback2
tunnel destination 1.1.1.99
tunnel domain-name 1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5
tunnel mode mpolka
vrf for v2
ipv4 addr 3.3.3.1 /30
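
(for clarity: the tunnel domain-name line above encodes the multicast
tree as comma-separated groups, each group being a node followed by its
next hops, with a node listing itself marking a decap point; a little
parser sketch under that reading of the syntax:)

spec = ("1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , "
        "1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5")

tree = {}
for group in spec.split(","):
    node, *nexthops = group.split()
    tree[node] = nexthops  # node -> where it clones/forwards the packet

print(tree)
# {'1.1.1.2': ['1.1.1.3'], '1.1.1.3': ['1.1.1.4', '1.1.1.5'],
#  '1.1.1.4': ['1.1.1.4'], '1.1.1.5': ['1.1.1.5']}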




