Subject: Rare project developers
List archive
- From: mc36 <>
- To: Antoni Przygienda <>, "" <>
- Subject: Re: [rare-dev] FW: rift in freerouter
- Date: Mon, 19 Dec 2022 06:11:02 +0100
hi,
thanks... well, we can show off the results at ietf surely...
but i really don't have too much, all i can offer is some screenshots... :)
which working group do you think it would fit?
thanks,
cs
On 12/18/22 09:01, Antoni Przygienda wrote:
Okey, as I suspected, just bits of docker config stuff missing. Impressive you implemented that much that fast, but it's almost expected with you, and modelling saves an enormous amount of ser/deser work & debugging it
Depends how far you want to stretch it further. Read section 6 in the draft. You basically have a leaf implementation already if you implement proper FIB installation for negatives/positives (KV is largely optional but we find it very useful in lots of use cases by now). It's easy to prevent people from running it as something else by basically hard-coding the leaf flag & level 0 in the code then. Spine gets a bit complex and ToF is of course lots of additional stuff
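For illustration, the FIB rule for negatives that section 6 describes boils down to set arithmetic: a negatively disaggregated prefix inherits the next hops of its closest covering positive route, minus the next hops toward the nodes that advertised the negative. A minimal sketch of that (hypothetical names, not freerouter's or cRPD's actual code):

import java.util.HashSet;
import java.util.Set;

class NegativeDisagg {
    // next hops of the closest covering positive route (e.g. the default toward all spines)
    static final Set<String> COVERING = Set.of("spine1", "spine2", "spine3");

    // FIB next hops for a negatively disaggregated prefix: the covering set
    // minus the nodes that advertised the negative (the "hole punching")
    static Set<String> fibNextHops(Set<String> negativeAdvertisers) {
        Set<String> hops = new HashSet<>(COVERING);
        hops.removeAll(negativeAdvertisers);
        return hops;
    }

    public static void main(String[] args) {
        // spine2 negatively advertised the prefix -> route around it via spine1/spine3
        System.out.println(fibNextHops(Set.of("spine2")));
    }
}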
If you think of it, some interop preso for IETF would be most welcome of course
--- Tony
From: mc36 <>
Date: Sunday, 18 December 2022 at 04:35
To: Antoni Przygienda <>, <>
Subject: Re: [rare-dev] FW: rift in freerouter
hi,
yeahhh, thanks for the help, that was it...
now i see both sides' databases converged and my side accepted your prefix into my rib!
for now, imho we can say that we speak the same protocol, and this is all i wanted to see for now... :)
thanks,
cs
root@crpd01> show rift database content
Dir Originator       Type   ID       SeqNr        Lifetime Origin Creation Time    Origin Lifetime Size Content Key ID
---+----------------+------+--------+------------+--------+-----------------------+---------------+----+--------------
S   0000000000bc4ff2 Node   00000001 4            604782                                           190  0
S   0000000000bc614e Node   10000000 639e86e6d13c 604783   2022/12/18 03:28:30.733 604800          None
S   0000000000bc614e Prefix 20000007 639e86e429f9 604783   2022/12/18 03:28:30.733 604800          None
N   0000000000bc614e Node   10000000 639e86e6d227 604783   2022/12/18 03:28:30.733 604800          299  0
N   0000000000bc614e Prefix 20000023 639e86e450ff 604783   2022/12/18 03:28:30.733 604800          174  0
root@crpd01> show rift topology nodes
                                     +------ Links ----------------+--- TIEs ----+- Prefixs -+
Lvl Name   Originator       Ovld Dir|3way| v4 | v6 |Mscb|Sec |BFD | Auth | Non | V4 | V6 |Newest TIE Issued
---+------+----------------+----+---+----+----+----+----+----+----+------+-----+----+----+-----------------
24  sid    0000000000bc4ff2          1    1
24  crpd01 0000000000bc614e N        1    1         4    2                                2022/12/18 03:28:30.733
root@crpd01>
sid#show ipv4 rift 2 neighbor
iface     nodeid    name         peer            uptime
pwether1  12345678  crpd01:ens4  10.123.123.123  00:00:05
sid#show ipv4 rift 2 database
dir  origin    num        typ  seq              left
s    12341234  1          2    4                6d23h
s    12341234  2          3    3                6d23h
n    12341234  1          2    4                6d23h
n    12341234  2          3    3                6d23h
n    12345678  268435456  2    109532519256615  6d23h
n    12345678  536870947  3    109532519092479  6d23h
sid#
sid#show ipv4 rift 2 route
typ  prefix           metric  iface     hop             time
F    10.123.123.0/24  100/11  pwether1  10.123.123.123  00:00:10
sid#
On 12/17/22 22:47, Antoni Przygienda wrote:
Well, your interfaces need at least ipv6 enabled.
Look at
show interfaces routing
and
show interface terse
to see whether the stuff looks proper
(BTW, rift is really ZTP, the default group is installed on package install
and I exposed tons of constants like mcast addresses and so on to allow for
easy mucking around. And we normally apply an interface group catching all
ether stuff so there is no interface config [assuming they have ip and/or
ipv6 already])
Unless an interface is under rift and at least v4 or v6 enabled, you won't
see it under rift. This is an optimization, we have stuff with literally 100s
of ports and we don't want to clutter everything with ports that aren't even
configured but are under rift config
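In cRPD terms that boils down to two steps; a minimal sketch reusing only knobs visible elsewhere in this thread (the interface name and address are taken from the working setup, so treat it as illustrative, not a verified recipe):

root@crpd01:/# ip addr add 10.123.123.123/24 dev ens4    # address the port first (linux side)
root@crpd01:/# cli
root@crpd01> configure
root@crpd01# set protocols rift interface ens4 mode advertise-subnets
root@crpd01# commit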
BTW, here you see I assigned a bunch of v4 addresses since running v6 on
docker is iffy; you may try it but it has tons of limitations in the linux
kernel we used to hit. On proper boxes stuff comes up with V6 LL only and
forwards v4 easily over those nexthops. AFAIR works on cRPD as well but there
were some kernel magic things we had to set/do, so it may work or not on
your flavor of docker over your flavor of kernel
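(For reference, forwarding v4 over a v6 link-local nexthop is the RFC 5549 style trick; on a recent linux kernel iproute2 can express it directly, the addresses here being purely illustrative:

ip route add 10.106.0.0/24 via inet6 fe80::1 dev eth0

whether a given docker kernel accepts it is exactly the "may work or not" part.)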
root@j_tof_1_2_1> show interfaces routing
Interface        State Addresses
eth4             Up    MPLS  enabled
                       ISO   enabled
                       INET  10.106.0.66
eth3             Up    MPLS  enabled
                       ISO   enabled
                       INET  10.106.0.58
eth2             Up    MPLS  enabled
                       ISO   enabled
                       INET  10.106.0.50
irb              Up    MPLS  enabled
                       ISO   enabled
tunl0            Up    MPLS  enabled
                       ISO   enabled
sit0             Up    MPLS  enabled
                       ISO   enabled
lsi              Up    MPLS  enabled
                       ISO   enabled
lo.0             Up    MPLS  enabled
                       ISO   enabled
                       ISO   49.0001.1720.0102.0001
ip6tnl0          Up    MPLS  enabled
                       ISO   enabled
gretap0          Down  MPLS  enabled
                       ISO   enabled
gre0             Up    MPLS  enabled
                       ISO   enabled
eth1             Up    MPLS  enabled
                       ISO   enabled
                       INET  10.106.0.42
eth0             Up    MPLS  enabled
                       ISO   enabled
                       INET  10.106.0.34
erspan0          Down  MPLS  enabled
                       ISO   enabled
root@j_tof_1_2_1> show rift interface status
Link ID: 257, Interface: eth0
Status Admin: True, Platform: True, State: ThreeWay, 3-Way Uptime: 6 hours, 33 minutes, 49 seconds
LIE TX V4: 224.0.0.120, LIE TX V6: ff02::a1f7, LIE TX Port: 914, TIE RX Port: 915
PoD: 0, Nonce: 27393
Neighbor: ID 0000000041075000, Link ID: 259, Name: p_0_1_1:eth2, Level: 23
TIE V4: 10.106.0.33, TIE Port: 915, BW: 1000 MBits/s
PoD: None, Nonce: 9887, Outer Key: 0, Holdtime: 3 secs, Fabric ID: None
Link ID: 258, Interface: eth1
Status Admin: True, Platform: True, State: ThreeWay, 3-Way Uptime: 6 hours, 33 minutes, 49 seconds
LIE TX V4: 224.0.0.120, LIE TX V6: ff02::a1f7, LIE TX Port: 914, TIE RX Port: 915
PoD: 0, Nonce: 13776
Neighbor: ID 00000000410b5000, Link ID: 259, Name: p_0_1_2:eth2, Level: 23
TIE V4: 10.106.0.41, TIE Port: 915, BW: 1000 MBits/s
PoD: None, Nonce: 18396, Outer Key: 0, Holdtime: 3 secs, Fabric ID: None
Link ID: 259, Interface: eth2
Status Admin: True, Platform: True, State: ThreeWay, 3-Way Uptime: 6 hours, 33 minutes, 49 seconds
LIE TX V4: 224.0.0.120, LIE TX V6: ff02::a1f7, LIE TX Port: 914, TIE RX Port: 915
PoD: 0, Nonce: 8339
Neighbor: ID 0000000042075000, Link ID: 259, Name: p_0_2_1:eth2, Level: 23
TIE V4: 10.106.0.49, TIE Port: 915, BW: 1000 MBits/s
PoD: None, Nonce: 26122, Outer Key: 0, Holdtime: 3 secs, Fabric ID: None
Link ID: 260, Interface: eth3
Status Admin: True, Platform: True, State: ThreeWay, 3-Way Uptime: 6 hours, 33 minutes, 49 seconds
LIE TX V4: 224.0.0.120, LIE TX V6: ff02::a1f7, LIE TX Port: 914, TIE RX Port: 915
PoD: 0, Nonce: 23478
Neighbor: ID 00000000420b5000, Link ID: 259, Name: p_0_2_2:eth2, Level: 23
TIE V4: 10.106.0.57, TIE Port: 915, BW: 1000 MBits/s
PoD: None, Nonce: 1672, Outer Key: 0, Holdtime: 3 secs, Fabric ID: None
Link ID: 261, Interface: eth4
Status Admin: True, Platform: True, State: OneWay
LIE TX V4: 224.0.0.120, LIE TX V6: ff02::a1f7, LIE TX Port: 914, TIE RX Port: 915
PoD: 0, Nonce: 17310
root@j_tof_1_2_1> show route
inet.0: 12 destinations, 12 routes (12 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0 *[Static/20/100] 06:36:24
Discard
10.106.0.32/30 *[Direct/0] 06:36:28
> via eth0
10.106.0.34/32 *[Local/0] 06:36:28
Local via eth0
10.106.0.40/30 *[Direct/0] 06:36:28
> via eth1
10.106.0.42/32 *[Local/0] 06:36:28
Local via eth1
10.106.0.48/30 *[Direct/0] 06:36:28
> via eth2
10.106.0.50/32 *[Local/0] 06:36:28
Local via eth2
10.106.0.56/30 *[Direct/0] 06:36:28
> via eth3
10.106.0.58/32 *[Local/0] 06:36:28
Local via eth3
10.106.0.64/30 *[Direct/0] 06:36:28
> via eth4
10.106.0.66/32 *[Local/0] 06:36:28
Local via eth4
224.0.0.120/32 *[RIFT/20/100] 06:36:24
MultiRecv
iso.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
49.0001.1720.0102.0001/72
*[Direct/0] 06:36:26
> via lo.0
inet6.0: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
::/0               *[Static/20/100] 06:36:24
                      Discard
ff02::2/128 *[INET6/0] 06:36:35
MultiRecv
ff02::a1f7/128 *[RIFT/20/100] 06:36:24
MultiRecv
inet6.3: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
fe80:100::/32 *[Static/20/100] 06:36:24
Discard
fe80:200::/32 *[Static/20/100] 06:36:24
Discard
--- tony
From: mc36 <>
Date: Saturday, 17 December 2022 at 22:27
To: Antoni Przygienda <>, <>
Subject: Re: [rare-dev] FW: rift in freerouter
nevermind, the kill helped... but for now, i don't see it accepting my config or so?
root@crpd01> show rift node statistics
Starttime: 2022/12/17 21:15:44.333
Service Requests: 16, Failed Requests: 0
root@crpd01> show rift interface statistics    <---------------- it's empty and does not send lie to me.....
root@crpd01> show configuration | display set
set version 20221123.183731_builder.r1297844
set groups rift-defaults protocols rift traceoptions file size 1000000
set groups rift-defaults protocols rift traceoptions file files 4
set groups rift-defaults protocols rift traceoptions level notice
set groups rift-defaults protocols rift node-id 1108037632
set groups rift-defaults protocols rift level auto
set groups rift-defaults protocols rift lie-receive-address family inet 224.0.0.120
set groups rift-defaults protocols rift lie-receive-address family inet6 ff02::a1f7
set groups rift-defaults protocols rift interface <*> lie-transmit-address family inet 224.0.0.120
set groups rift-defaults protocols rift interface <*> lie-transmit-address family inet6 ff02::a1f7
set policy-options policy-statement ps1 from protocol direct
set policy-options policy-statement ps1 then accept
set protocols rift apply-groups rift-defaults
set protocols rift node-id 12345678
set protocols rift level top-of-fabric
set protocols rift interface ens4 mode advertise-subnets
root@crpd01>
On 12/17/22 22:07, Antoni Przygienda wrote:
Maybe a bug outstanding on 22.1 on cRPD with rift.
Just kill rift-proxyd once from the bash. On the 2nd run it grabs the config
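Concretely, that amounts to something like this (a sketch; per the ps output further down, rift-proxyd runs under runsv, so a plain kill gets it respawned, and the restarted instance picks up the config):

root@p4emu:/home/mc36# docker exec -it crpd01 bash
root@crpd01:/# pkill rift-proxyd    # or kill the pid from ps; runsv restarts it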
--- Tony
From: mc36 <>
Date: Saturday, 17 December 2022 at 21:58
To: Antoni Przygienda <>, <>
Subject: Re: [rare-dev] FW: rift in freerouter
however i see this too:
root@p4emu:/home/mc36# docker exec -it crpd01 bash
===>
Containerized Routing Protocols Daemon (CRPD)
Copyright (C) 2020-2022, Juniper Networks, Inc. All rights reserved.
<===
===============================================
ROUTING IN FAT TREES (RIFT) Environment
Copyright (c) 2016-2023, Juniper Networks, Inc.
All rights reserved.
===============================================
root@crpd01:/# ps aux | grep rift
root       132  0.0  0.0   4412   852 ?     Ss   20:39   0:00 runsv rift-proxyd
root       143  0.0  0.0 790744 12956 ?     S    20:39   0:00 /usr/sbin/rift-proxyd -N
root       392  0.0  0.0  11472  1128 pts/1 S+   20:58   0:00 grep --color=auto rift
root@crpd01:/#
On 12/17/22 21:50, mc36 wrote:
just another thingy, i can ping between my sid and crpd but i get this:
root@crpd01> show configuration | display set
set version 20221123.183731_builder.r1297844
set groups rift-defaults protocols rift traceoptions file size 1000000
set groups rift-defaults protocols rift traceoptions file files 4
set groups rift-defaults protocols rift traceoptions level notice
set groups rift-defaults protocols rift node-id 1108037632
set groups rift-defaults protocols rift level auto
set groups rift-defaults protocols rift lie-receive-address family inet 224.0.0.120
set groups rift-defaults protocols rift lie-receive-address family inet6 ff02::a1f7
set groups rift-defaults protocols rift interface <*> lie-transmit-address family inet 224.0.0.120
set groups rift-defaults protocols rift interface <*> lie-transmit-address family inet6 ff02::a1f7
set policy-options policy-statement ps1 from protocol direct
set policy-options policy-statement ps1 then accept
set protocols rift apply-groups rift-defaults
set protocols rift node-id 12345678
set protocols rift level top-of-fabric
set protocols rift export
root@crpd01> show rift node status
CRIT: RIFT not running: not all arguments converted during string formatting
root@crpd01>
and basically every rift command fails with this message...
On 12/17/22 21:36, mc36 wrote:
thank you soo much, now i'm on my way: the documentation helped a lot to
figure out the proper docker run knobs:
root@p4emu:/home/mc36# docker load -i junos-routing-crpd-amd64-22.1I20221216_1827.docker.save.gz
0890eec52556: Loading layer [==================================================>]  489MB/489MB
Loaded image ID: sha256:b40e122aeb60af2a772f8fed3e30f54730ce2bc8f61fae35c2d3d8a166ec0728
root@p4emu:/home/mc36# docker volume create crpd01-varlog
crpd01-varlog
root@p4emu:/home/mc36# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
<none>       <none>    b40e122aeb60   26 hours ago   482MB
root@p4emu:/home/mc36# docker run --rm --detach --name crpd01 -h crpd01 --privileged --net=host -v crpd01-config:/config -v crpd01-varlog:/var/log -it b40e122aeb60
2a750945a0cb2bef2545ea92789456c7a6d4d1bdb037fddc501cb233aca02803
root@p4emu:/home/mc36# docker exec -it crpd01 cli
root@crpd01> show rift versions info
Package: 1.4.1.1298669
Built On: 2022-12-15T12:00:11.463004567+00:00
Built In: JUNOS_221_R3_BRANCH
Encoding Version: 6.1
Statistics Version: 4.0
Services Version: 12.0
Auto EVPN Version: 1.0
root@crpd01>
now i'll be able to do the interop! i'll keep you posted... :)
br,
cs
On 12/17/22 20:39, Antoni Przygienda wrote:
Nah, vmx as a product has been discontinued for quite a while now; we use it
internally heavily but most use cases moved to cRPD by now
We have tons of folks who run cRPD on all kinds of setups, extensive doc around
as well, e.g.
https://www.juniper.net/documentation/us/en/software/crpd/crpd-deployment/index.html
especially
https://www.juniper.net/documentation/us/en/software/crpd/crpd-deployment/topics/task/crpd-linux-server-install.html
If you want to plug it into your tool and that doesn't help, let me know and
I'll poke internally; though as I said, this is very fresh stuff you're
consuming, especially on the rift side.
The cRPD 22.1 per se has been around for a while and getting into the CLI
shouldn't be a problem, so I'm sure it's something fairly simple. Possibly
you didn't give it the min required interfaces or required volumes; go check
/var/log otherwise.
I see your env is similar to what we have with JSON; if you want to plug our
stuff into it you'd need to run a sideways container and put the config into
it. How you hook it into DPDK, no idea, we have cRPD over sonic and all kind
of stuff but I'm staying out of the fwd path to a large extent
--- tony
From: mc36 <>
Date: Saturday, 17 December 2022 at 19:01
To: Antoni Przygienda <>, <>
Subject: Re: [rare-dev] FW: rift in freerouter
hi,
i go inline....
br,
cs
On 12/17/22 18:19, Antoni Przygienda wrote:
Sorry, I cannot drop you vmx images easily. Product not supported, only
internal stuff, images not supposed to go out, and I'm not sure you'll be
able to run 22.x stuff anyway, tons of stuff has changed since we supported
the product.
okk, then i'll abandon this vmx idea... i just hoped if it's listed on the
public juniper.net then it's something one can use... :)
Kithara is trivial, just grab a standard ub18 container, throw your stuff
onto it, snapshot the image and give it to kithara on a lab.
there is a project a team member started back in the days at
https://github.com/rare-freertr/freeRtr-docker
hopefully it still works... :)
To test rift stuff properly you'll pretty soon be building lots of tooling
yourself to bring up CLOS networks, and then realize you cannot run it with
VMs at any reasonable size and end up with containers anyway (or namespaces
as Bruno's code can partially do, but that's also super limited). RIFT only
starts to really crank once you're at a couple hundred nodes at least
well, i'm not against the kithara nor anything but i like the tooling...
i already have templates like this:
http://src.mchome.nop.hu/cfg/temp-isis.tmpl
then i can simply use a csv with the intended connections, repetitions, etc...
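for illustration, such a csv could look like this (columns purely hypothetical, not an actual freerouter input format):

node,peer,links
leaf1,spine1,1
leaf1,spine2,1
leaf2,spine1,1
leaf2,spine2,1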
Very simplest stuff is to run within my Jason environ just like Bruno does;
there is no virtualization/data plane at all but you can build very large
topologies within seconds. We could easily extend it from "don't run the
passive nodes" to "run only nodes with this simulation-partition-id" if you
want, and then you can mix all implementations in any fashion
well, freerouter doesn't have a global, so i can run 2 rift instances and
peer them over a hairpin interface...
moreover we have a lightweight dataplane which uses simple udp sockets to pass
ethernet frames... the forwarder is the same one found in rare's dpdk
dataplane, just the packetio is swapped out here... as it's a simple process
in linux, one can have 1000s of freerouters with dataplanes in a single
computer...
for example on a single 2 cpu xeon i run the test cases with parallelism of
100, and we can safely assume that each test case spins up at least 4 routers,
so it's there already... :)
but agreed, for rift self testing, the hairpin is more than enough, then it's
a single java process, much like with bruno's stuff, but here java hotspot
quickly produces native code (or graalvm can do it at compile time if one
needs the quick warmup times too, like me :)
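the packetio idea is simple enough to sketch; here is a self-contained toy (hypothetical port numbers, nothing freerouter-specific) that ships one pretend ethernet frame over a udp socket and reads it back:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

class UdpWire {
    public static void main(String[] args) throws Exception {
        DatagramSocket sock = new DatagramSocket(9080);   // one end of the pseudowire
        byte[] frame = new byte[64];                      // stand-in for a raw ethernet frame
        frame[12] = 0x08;                                 // ethertype 0x0800 = ipv4 (bytes 12-13)
        // loop the frame back to ourselves; in the real setup the peer is another process
        sock.send(new DatagramPacket(frame, frame.length, InetAddress.getLoopbackAddress(), 9080));
        DatagramPacket rx = new DatagramPacket(new byte[1600], 1600);
        sock.receive(rx);                                 // a real dataplane would parse and forward here
        System.out.println("got a frame of " + rx.getLength() + " bytes");
        sock.close();
    }
}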
sid#show config-differences
hairpin 1
exit
router rift4 2
vrf v2
router-id 12345678
redistribute connected
exit
router rift4 3
vrf v3
router-id 87654321
redistribute connected
exit
interface loopback2
vrf forwarding v2
ipv4 address 2.2.2.2 255.255.255.255
no shutdown
no log-link-change
exit
interface loopback3
vrf forwarding v3
ipv4 address 2.2.2.3 255.255.255.255
no shutdown
no log-link-change
exit
interface hairpin11
vrf forwarding v2
ipv4 address 1.1.1.1 255.255.255.252
router rift4 2 enable
no shutdown
no log-link-change
exit
interface hairpin12
vrf forwarding v3
ipv4 address 1.1.1.2 255.255.255.252
router rift4 3 enable
no shutdown
no log-link-change
exit
sid#
sid#
sid#show ipv4 rift 2 database
dir  origin    num  typ  seq  left
s    12345678  1    2    2    6d23h
s    12345678  2    3    3    6d23h
s    12345678  3    3    1    6d23h
s    87654321  1    2    4    6d23h
s    87654321  2    3    2    6d23h
s    87654321  3    3    1    6d23h
n    12345678  1    2    2    6d23h
n    12345678  2    3    3    6d23h
n    12345678  3    3    1    6d23h
n    87654321  1    2    4    6d23h
n    87654321  2    3    2    6d23h
n    87654321  3    3    1    6d23h
sid#show ipv4 rift 3 database
dir  origin    num  typ  seq  left
s    12345678  1    2    2    6d23h
s    12345678  2    3    3    6d23h
s    12345678  3    3    1    6d23h
s    87654321  1    2    4    6d23h
s    87654321  2    3    2    6d23h
s    87654321  3    3    1    6d23h
n    12345678  1    2    2    6d23h
n    12345678  2    3    3    6d23h
n    12345678  3    3    1    6d23h
n    87654321  1    2    4    6d23h
n    87654321  2    3    2    6d23h
n    87654321  3    3    1    6d23h
sid#
sid#
sid#show ipv4 route v2
typ  prefix      metric  iface      hop      time
C    1.1.1.0/30  0/0     hairpin11  null     00:01:00
LOC  1.1.1.1/32  0/1     hairpin11  null     00:01:00
C    2.2.2.2/32  0/0     loopback2  null     00:01:14
F    2.2.2.3/32  100/10  hairpin11  1.1.1.2  00:00:07
sid#
sid#
sid#show ipv4 route v3
typ  prefix      metric  iface      hop      time
C    1.1.1.0/30  0/0     hairpin12  null     00:00:29
LOC  1.1.1.2/32  0/1     hairpin12  null     00:00:29
F    2.2.2.2/32  100/10  hairpin12  1.1.1.1  00:00:21
C    2.2.2.3/32  0/0     loopback3  null     00:00:09
sid#ping 2.2.2.2 vrf v3 source loopback3
pinging 2.2.2.2, src=2.2.2.3, vrf=v3, cnt=5, len=64, df=false, tim=1000, gap=0, ttl=255, tos=0, sgt=0, flow=0, fill=0, alrt=-1, sweep=false, multi=false
!!!!!
result=100.0%, recv/sent/lost/err=5/5/0/0, took 18, min/avg/max/dev rtt=0/0.3/1/0.2, ttl 255/255/255/0.0, tos 0/0.0/0/0.0
sid#trace 2.2.2.2 vrf v3 source loopback3
tracing 2.2.2.2, src=2.2.2.3, vrf=v3, prt=0/33440, tim=1000, tos=0, flow=0, len=64
via 2.2.2.2/32 100/10 hairpin12 1.1.1.1 00:00:35
1 2.2.2.2 time=0
sid#
If you need kithara help I can hook you up directly with one of the authors,
I know them well, they'll be more than happy to bring freerouter in
ahhh, hopefully i'll figure it out alone.. i'll let you know how i progress...
thanks,
cs
--- tony
From: mc36 <>
Date: Saturday, 17 December 2022 at 16:19
To: Antoni Przygienda <>, <>
Subject: Re: [rare-dev] FW: rift in freerouter
On 12/17/22 16:18, mc36 wrote:
that way i'm pretty sure it will work and i already have a rift build for that
vmx from 2020-10-xx...
surely a typo, 2022