Subject: Rare project developers
List archive
- From: Cristina Klippel Dominicini <>
- To: Rafael Silva Guimarães <>, "" <>
- Cc: "Moises R. N. Ribeiro" <>, Magnos Martinello <>
- Subject: Re: [rare-dev] mpolka in rare
- Date: Fri, 18 Feb 2022 16:53:12 +0000
- Accept-language: pt-BR, en-US
Hi Csaba,
This is really great news \o/
I talked with the group and the design choices seem very good for an initial
prototype. Thank you very much! We are going to run the test cases and
provide feedback :-D
Some initial questions:
>> tunnel domain-name 1.1.1.2 1.1.1.3 , 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4
>> 1.1.1.4 , 1.1.1.5 1.1.1.5
This represents the links of the multicast tree and the syntax "1.1.1.4
1.1.1.4" indicates a leaf, right?
This representation is very good, because M-PolKA can represent structures
that are not exactly trees. For example, two branches that end in the same
leaf.
Example: If you have an extra link between v2 and v4, the representation of
the multipath would be:
tunnel domain-name 1.1.1.2 1.1.1.3 1.1.1.4, 1.1.1.3 1.1.1.4 1.1.1.5 , 1.1.1.4
1.1.1.4 , 1.1.1.5 1.1.1.5
Is that right?
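To make sure we are reading the syntax the same way, here is a small sketch of our interpretation (parse_multipath is our own hypothetical helper, not FreeRouter code):

```python
# Our reading of the "tunnel domain-name" argument (an assumption, please
# correct us): comma-separated groups, each "<node> <child> <child>...",
# where a node whose only child is itself marks a leaf (local decap).
def parse_multipath(spec):
    tree = {}
    for group in spec.split(","):
        node, *children = group.split()
        tree.setdefault(node, set()).update(children)
    return tree

# the example above, with the extra v2-v4 link:
spec = ("1.1.1.2 1.1.1.3 1.1.1.4 , 1.1.1.3 1.1.1.4 1.1.1.5 , "
        "1.1.1.4 1.1.1.4 , 1.1.1.5 1.1.1.5")
tree = parse_multipath(spec)
leaves = sorted(n for n, c in tree.items() if c == {n})
print(leaves)  # → ['1.1.1.4', '1.1.1.5']
```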
>> sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
I didn't know you had this multicast pin! Super cool! Does it send the ICMP
packet to all the leaves?
>> for the encoding, i reserved bit0 indicating that local processing is
>> needed (end of tunneling, decapping, etc)
>> the rest of the bits indicate the need to forward to the peer in the
>> srindex table, which, as an ordered list of
>> peers, must be identical on all the nodes that executed the shortest
>> path first...
>> the routeid seems to be correctly encoded as we find 6 (1st and 2nd
>> neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
I am not sure I understood correctly... These are bits from the output
bitstream of the mod operation, right? So, if the bitstream is 110 at v3, it
will forward to the 1st and 2nd neighbors (v4 and v5, in this example). But
how does it correlate the neighbors with the srindex table (which also
includes non-neighbors)?
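To make our understanding concrete, this is how we picture the bitmap interpretation (just our assumed semantics of the encoding, not code from the commit):

```python
# Our assumed semantics (please correct): bit0 of the mod result requests
# local processing (decap); bit k (k >= 1) requests a copy to the k-th
# entry of the node's ordered peer list taken from the srindex table.
def interpret_bitstream(bits, ordered_peers):
    actions = []
    if bits & 1:
        actions.append("decap")
    for k, peer in enumerate(ordered_peers, start=1):
        if bits & (1 << k):
            actions.append("forward to " + peer)
    return actions

# under this reading, with ordered peers [v4, v5] at v3, mod result 6:
print(interpret_bitstream(0b110, ["v4", "v5"]))
# → ['forward to v4', 'forward to v5']
```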
Regarding the routing, what are the structures we are reusing? How does
FreeRouter keep the list of neighbors and compute the routeid that will
produce the correct output bitstream at each node? I will explore the commits.
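For anyone reading along: per the PolKA design, the routeid should be the Chinese Remainder Theorem solution over GF(2) polynomials, i.e. the unique R with R mod s_i = b_i for each core node's nodeid polynomial s_i and desired output bitstream b_i. A toy sketch of that computation, using small made-up irreducible polynomials rather than FreeRouter's actual 17-bit coefficients:

```python
def gf2_mul(a, b):
    """Carry-less multiplication of GF(2)[x] polynomials stored as ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    """Polynomial long division: returns (quotient, remainder)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_inv(a, m):
    """Inverse of a modulo m via the extended Euclidean algorithm."""
    r0, r1, t0, t1 = m, a, 0, 1
    while r1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ gf2_mul(q, t1)
    return t0

def crt_routeid(bitstreams):
    """bitstreams: {nodeid_poly: wanted mod result} -> routeid R."""
    M = 1
    for s in bitstreams:
        M = gf2_mul(M, s)
    R = 0
    for s, b in bitstreams.items():
        Mi = gf2_divmod(M, s)[0]              # product of the other moduli
        y = gf2_inv(gf2_divmod(Mi, s)[1], s)  # Mi^-1 mod s
        R ^= gf2_mul(b, gf2_mul(Mi, y))
    return gf2_divmod(R, M)[1]

# toy nodeids x^3+x+1, x^3+x^2+1, x^4+x+1 with bitstreams 6, 1, 1
streams = {0b1011: 0b110, 0b1101: 0b001, 0b10011: 0b001}
rid = crt_routeid(streams)
for s, b in streams.items():
    assert gf2_divmod(rid, s)[1] == b  # each node's mod yields its bitmap
```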
If we use that approach of configuring two (or more) nodeids when there are
more than 16 IGP peers, one needs to configure two (or more) loopbacks. The
pipeline would then have to combine the bitstreams in some defined ordering
(by the lower IP address?). It would also have to check the number of
available loopbacks that have mpolka enabled. Do you already have any plans
for this?
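To illustrate what we mean by combining, a purely hypothetical sketch (none of this exists in FreeRouter; the ordering rule is our proposal): each loopback/nodeid contributes a 16-bit bitstream, and sorting the loopbacks by lower IP address fixes which slice of the peer space each one covers:

```python
import ipaddress

# Hypothetical combination rule: sort the mpolka-enabled loopbacks by IP
# address, then let loopback i cover bits 16*i .. 16*i+15 of a wide bitmap.
def combine_bitstreams(stream_by_loopback):
    ordered = sorted(stream_by_loopback,
                     key=lambda a: int(ipaddress.ip_address(a)))
    wide = 0
    for i, addr in enumerate(ordered):
        wide |= stream_by_loopback[addr] << (16 * i)
    return wide

# nodeid1 (1.1.1.3) sets bits 1-2; nodeid2 (2.1.1.3) sets its bit 3,
# which lands at bit 19 of the combined bitmap
wide = combine_bitstreams({"2.1.1.3": 0b1000, "1.1.1.3": 0b0110})
print(bin(wide))  # → 0b10000000000000000110
```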
Best regards,
Cristina
________________________________________
From: mc36 <>
Sent: Thursday, 17 February 2022 19:06
To: Cristina Klippel Dominicini; Rafael Silva Guimarães
Cc:
Subject: Re: [rare-dev] mpolka in rare
hi,
i've just covered mpolka with some test cases:
https://github.com/mc36/freeRouter/commit/4caf6dc0657aade06d9cd38654b581e77465a971
now i'll wait for your feedback before continuing with the dataplanes...
regards,
cs
On 2/17/22 22:14, mc36 wrote:
> hi,
> sorry for the spam, but it forwards:
>
> sid#pin 3.3.3.2 /si 1111 /re 1111 /tim 11 /vr v2 /int lo2 /mul
> pinging 3.3.3.2, src=1.1.1.2, vrf=v2, cnt=1111, len=1111, tim=11, gap=0,
> ttl=255, tos=0, flow=0, fill=0, sweep=false, multi=true, detail=false
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>
> result=200%, recv/sent/lost/err=2222/1111/0/0, rtt
> min/avg/max/sum=0/0/2/12402, ttl min/avg/max=255/255/255
> sid#
>
> the 200% success rate indicates that both v4 and v5 got the packets and
> they responded...
>
> this is the commit for this to happen:
> https://github.com/mc36/freeRouter/commit/4a1d188521fa5a0fe8f8619c92f60dd44afa929e
>
> regards,
> cs
>
>
>
>
> On 2/17/22 21:11, mc36 wrote:
>> hi,
>> after applying the attached config, i see the following:
>>
>> sid(cfg-if)#show ipv4 srindex v2
>> index  prefix      peers  bytes
>> 2      ::/0               0
>> 3      1.1.1.3/32  2 4 5  0
>> 4      1.1.1.4/32  3      0
>> 5      1.1.1.5/32  3      0
>>
>> sid(cfg-if)#show ipv4 srindex v3
>> index  prefix      peers  bytes
>> 2      1.1.1.2/32  3      0
>> 3      ::/0               0
>> 4      1.1.1.4/32  3      0
>> 5      1.1.1.5/32  3      0
>>
>> sid(cfg-if)#show ipv4 srindex v4
>> index  prefix      peers  bytes
>> 2      1.1.1.2/32  3      0
>> 3      1.1.1.3/32  2 4 5  0
>> 4      ::/0               0
>> 5      1.1.1.5/32  3      0
>>
>> sid(cfg-if)#show ipv4 srindex v5
>> index  prefix      peers  bytes
>> 2      1.1.1.2/32  3      0
>> 3      1.1.1.3/32  2 4 5  0
>> 4      1.1.1.4/32  3      0
>> 5      ::/0               0
>>
>> sid(cfg-if)#
>> sid(cfg-if)#show mpolka routeid tunnel2
>> iface      hop      routeid
>> hairpin11  2.2.2.2  00 00 00 00 00 00 00 00 00 00 74 90 0f 96 e9 fd
>>
>> index  coeff     poly   crc    equal
>> 0      0001046a  13101  13101  true
>> 1      0001046b  1732   1732   true
>> 2      0001046d  2031   2031   true
>> 3      00010473  6      6      true
>> 4      00010475  1      1      true
>> 5      0001047f  1      1      true
>> 6      00010483  13881  13881  true
>> 7      00010489  55145  55145  true
>> 8      00010491  38366  38366  true
>> 9      0001049d  11451  11451  true
>>
>> sid(cfg-if)#
>>
>> the topology is the following:
>>
>>       v4
>> v2-v3<
>>       v5
>>
>> the tunnel is configured to point to v4 and v5
>>
>> for the encoding, i reserved bit0 indicating that local processing is
>> needed (end of tunneling, decapping, etc)
>> the rest of the bits indicate the need to forward to the peer in the
>> srindex table, which, as an ordered list of
>> peers, must be identical on all the nodes that executed the shortest
>> path first...
>>
>> the routeid seems to be correctly encoded as we find 6 (1st and 2nd
>> neighbors) for nodeid 3, and 1 (decap) for nodeid 4 and 5...
>>
>> next steps will be to make the forwarding happen, then some test cases,
>> and finally the dataplanes...
>>
>> any opinion?
>>
>> thanks,
>> cs
>>
>>
>>
>>
>> On 2/16/22 15:44, mc36 wrote:
>>> hi,
>>>
>>> On 2/16/22 15:22, Cristina Klippel Dominicini wrote:
>>>>
>>>> Hi Csaba,
>>>>
>>>> Thanks for the feedback! We believe M-PolKA can tackle interesting use
>>>> cases, and it would be great to have it running on FreeRouter.
>>>>
>>>
>>> yeahh, saw some in the paper and i'm also interested.... :)
>>>
>>>>>> my first impression was that after the mod operation, it basically
>>>>>> does what bier does, that is, we take the result and interpret it as
>>>>>> an outport bitmap, don't we?
>>>> Yes, exactly! We just changed the meaning of the portid polynomial and
>>>> the pipeline for cloning the packets according to the bitmap. It would
>>>> be really great if we could reuse
>>>> part of the bier implementation to that end. Do you think we can do
>>>> that for both freeRouter and Tofino? Then, we could run some experiments
>>>> comparing BIER with M-PolKA :-)
>>>>
>>>
>>> hopefully you'll be able to do that...
>>>
>>>
>>>>>> and this is where we hit a limitation: depending on the size of the
>>>>>> crc in use, we can only describe 16 output ports, which is clearly
>>>>>> not enough...
>>>> Is it possible to use CRC32 for M-PolKA's implementation in FreeRouter?
>>>>
>>>
>>> surely yes, we can use crc32, that one was also made parameterizable
>>> back when polka was introduced... :)
>>> but that would halve the number of core nodes expressible in the
>>> routeid...
>>>
>>>
>>>>>> my idea to overcome the above: what if we interpret the mpolka mod
>>>>>> result as a bitmap into this index table? it then raises the limit to
>>>>>> 16 igp neighbors per core node, which
>>>>>> is more friendly...
>>>>
>>>> As we are already bound to the SR indexes, I think it is a reasonable
>>>> reinterpretation.
>>>> Another simple way would be to have two (or more) nodeids per switch,
>>>> for example. Then, with the same routeid we could address half of the
>>>> ports with nodeid1 and the other
>>>> half with nodeid2. This would incur two CRC operations to generate
>>>> the bitmap.
>>>> We could also explore some other encoding techniques for the bitmap.
>>>> Today is our weekly meeting at Ufes, so we will discuss the
>>>> possibilities with the group and give
>>>> you feedback on this subject.
>>>>
>>>
>>> imho this is absolutely the way to go instead of doing crc32,
>>> and one can easily have two loopbacks if one has more than 16 igp
>>> peers...
>>>
>>>
>>>>>> another implementation idea is to use a different ethertype for
>>>>>> mpolka so as not to confuse the unicast and multicast packets on the
>>>>>> wire, as they'll differ in handling...
>>>> Agreed! We also have some other ideas for failure protection and
>>>> chaining that would change the header, and consequently, would need a
>>>> different version code.
>>>>
>>>
>>> fine... then i'll wait until you discuss with your colleagues and then
>>> i'll proceed with adding mpolka...
>>> since then i've done some digging into the code and imho mpolka will use
>>> bier-id instead of srid because
>>> that beast already populates a thing called bfrlist, which is the
>>> per-node peer index table we need for mpolka
>>> to interpret the bitmap over... it's just a config thing like the regular
>>> polka and sr cases...
>>> after this, the initial version will be able to address the multihoming
>>> use case you discuss
>>> in your paper; moreover it'll be able to do the (iptv) headend usecase
>>> from the bier tests...
>>>
>>> regards,
>>> cs
>>>
>>>
>>>> Best regards,
>>>> Cristina
>>>>
>>>> ________________________________________
>>>> From: mc36 <>
>>>> Sent: Wednesday, 16 February 2022 03:21
>>>> To: Cristina Klippel Dominicini; Rafael Silva Guimarães
>>>> Cc:
>>>> Subject: mpolka in rare
>>>>
>>>> hi,
>>>> i went through your mpolka paper, first of all, congrats, nice work!
>>>> my first impression was that after the mod operation, it basically does
>>>> what bier
>>>> does, that is, we take the result and interpret it as an outport
>>>> bitmap, don't we?
>>>> and this is where we hit a limitation: depending on the size of the crc
>>>> in use,
>>>> we can only describe 16 output ports, which is clearly not enough...
>>>> in freerouter, polka is bound to segment routing ids, and the result of
>>>> the mod
>>>> is not a port but an sr index... my idea to overcome the above: what if
>>>> we interpret
>>>> the mpolka mod result as a bitmap into this index table? it then raises
>>>> the limit to
>>>> 16 igp neighbors per core node, which is more friendly...
>>>> another implementation idea is to use a different ethertype for mpolka
>>>> so as not to
>>>> confuse the unicast and multicast packets on the wire, as they'll differ
>>>> in handling...
>>>> after all, i found it feasible both for software and dataplane
>>>> implementations,
>>>> so it could become a drop-in replacement for the current ip multicast
>>>> over bier...
>>>> any opinion?
>>>> thanks,
>>>> cs
>>>>
>>>>
>>>> ________________________________
>>>>
>>>> This message (including attachments) contains confidential information
>>>> intended for a specific user, and its content is protected by law. If
>>>> you are not the correct recipient, you must delete this message.
>>>>
>>>> The sender of this message is responsible for its content and
>>>> addressing. The recipient must ensure its proper handling. Disclosure,
>>>> reproduction and/or distribution without due authorization, or any
>>>> other action not in conformity with Ifes internal rules, is prohibited
>>>> and subject to disciplinary, civil and criminal sanctions.
>>>>
- [rare-dev] mpolka in rare, mc36, 02/16/2022
- Re: [rare-dev] mpolka in rare, Cristina Klippel Dominicini, 02/16/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/16/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, Cristina Klippel Dominicini, 02/18/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/18/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/19/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/19/2022
- Re: [rare-dev] mpolka in rare, Everson Borges, 02/19/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/19/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/21/2022
- Re: [rare-dev] mpolka in rare, Cristina Klippel Dominicini, 02/22/2022
- Re: [rare-dev] mpolka in rare, Cristina Klippel Dominicini, 02/18/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/17/2022
- Re: [rare-dev] mpolka in rare, mc36, 02/16/2022
- Re: [rare-dev] mpolka in rare, Cristina Klippel Dominicini, 02/16/2022