
[RARE-users] [freertr] freeRouter rpms for CentOS and Fedora


  • From: "mc36" <>
  • To: "" <>
  • Subject: [RARE-users] [freertr] freeRouter rpms for CentOS and Fedora
  • Date: Wed, 9 Feb 2022 07:31:30 +0100
  • List-id: <freertr.groups.io>
  • Mailing-list: list ; contact

documenting....


-------- Forwarded Message --------
Subject: Re: freeRouter rpms for CentOS and Fedora
Date: Wed, 9 Feb 2022 05:57:06 +0100
From: mc36 <>
Reply-To:
To:

hi,


On 2/9/22 00:13, wrote:
Hi Csaba,

With options -9 256 0 the result increased from 1.7 to 2.0 Gbit/s.
so it's the amount of packets that get taken from the nic, then processed, then
given back to the nic...
all the dpdk examples go with low values; my rationale here is that if you
increase it, you may run out of level1 dcache...
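(for illustration only, not the actual p4dpdk code: a minimal sketch of the
usual dpdk forwarding loop, where the burst size is the cap passed to
rte_eth_rx_burst and every mbuf taken in one iteration has to stay cache-warm
until it is handed back to rte_eth_tx_burst)

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 256  /* what the "-9 256 0" knob would correspond to */

    static void fwd_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        for (;;) {
            /* take up to BURST_SIZE packets from the nic... */
            uint16_t nb = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
            /* ...process them, then give them back to the nic; the bigger
               the burst, the more mbuf headers and payload compete for the
               level1 dcache within a single iteration */
            uint16_t sent = rte_eth_tx_burst(tx_port, 0, pkts, nb);
            while (sent < nb)
                rte_pktmbuf_free(pkts[sent++]);  /* drop what didn't fit */
        }
    }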

But if I also added 0 0 1 1 2 3 then the result gets worse, 1.8 Gbit/s
so it splits the things to do between the vcpus, but exactly the same work is
done, on exactly the same loop...
is it a multi-socket environment?

Other options had no effect (maybe a little effect from -4 512 0).

thanks for the trials... my secret bet would be -10 0 0, did that change
anything?
mbuf_cache is black magic (anyway the others are too :)) tbh dunno what it
exactly does :)
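(not claiming this is what p4dpdk does with it, but in plain dpdk the
mbuf_cache number usually ends up as the cache_size argument of
rte_pktmbuf_pool_create, i.e. a per-lcore stash of free mbufs kept in front of
the shared pool so alloc/free mostly avoid the common ring; a minimal sketch
with made-up sizes:)

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *make_pool(void)
    {
        return rte_pktmbuf_pool_create("mbuf_pool",
                                       8191,  /* total mbufs in the pool */
                                       256,   /* cache_size: per-lcore free-mbuf cache */
                                       0,     /* private area per mbuf */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    }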

anyway, here are the rest of the tunables:
https://github.com/mc36/freeRouter/blob/f2a3e32489a2245c28b33eb6bf11af32f45c65b2/misc/native/p4dpdk.h#L314
desc_tx appears twice there, in -6 and -5

thanks for spotting it, -6 was meant to be desc_rx.... :)

regards,
cs


Cheers
Alexey

2022-02-08 19:47 GMT+02:00, mc36 <>:
i was thinking about it further, about how p4dpdk can introduce some
delay on a single session but not on multi-session....
i have another bet: what if you tweak the "-9 <burstsize> 0" to something bigger
or smaller?
anyway, here are the rest of the tunables:
https://github.com/mc36/freeRouter/blob/f2a3e32489a2245c28b33eb6bf11af32f45c65b2/misc/native/p4dpdk.h#L314
regards,
cs


On 2/8/22 18:01, mc36 wrote:
hi,

On 2/8/22 16:54, wrote:
Hi Csaba,

If I add the option "poll-sleep-usec 10" to vpp, then vpp_main consumes
4.7% CPU without traffic and 95% when I run iperf3. Results with and
without this option are the same. Results with virtio-net-pci are better, 2.4
Gbits/sec (with virtio-net-pci, p4dpdk hangs after some time).

Without traffic, when p4dpdk consumes 13% CPU, the qemu process consumes
100% (the same with vpp).

if you do iperf on multiple sessions and you see the same results
If I run iperf3 with 2 streams the results are the same for vpp and p4dpdk.

okk, so if multi-session does almost the same, then
it clearly indicates the tp=bw*del thingy...
you can set p4dpdk to do the same with "-10 10 0",
it should be the same as vpp's usec 10 option...
if you still see differences on a single thread,
well, that's a good question then... :)
maybe some linux magic nicing up p4dpdk or something?
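(a back-of-the-envelope example with made-up numbers: a single tcp session with
a 256 KB window does at most window/rtt, so roughly 20 Gbit/s at 100 us of rtt
but only about 2 Gbit/s once the sleeps push the rtt toward 1 ms, while several
parallel sessions keep the pipe full regardless)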

regards,
cs



Cheers.
Alexey

2022-02-08 15:31 GMT+02:00, mc36 <>:
hi,
thanks for pushing further! first of all, both are laughably low numbers, but meh...
my idea is that vpp does not sleep (both the host and the vm use a constant 100%)
whereas p4dpdk does usleep if there is no packet for a while (< 10% cpu on both host and vm)...
but this obviously results in some delays, and the throughput=bw*delay answers the rest...
if you do iperf on multiple sessions and you see the same results (step 1),
then (step 2) p4dpdk has a magic knob (-10 par 0) to set usleep's parameters,
and as far as i know, invalid values will return immediately... if not, i'll quickly add the extra if...
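(again only a sketch of the idea, not the real p4dpdk loop: poll the nic, and
if nothing arrives for a while, usleep instead of spinning; the sleep interval
is what the -10 knob would tune here, traded against up to that much extra
latency per packet)

    #include <unistd.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define SLEEP_USEC 10  /* what "-10 10 0" would set in this sketch */

    static void poll_loop(uint16_t port)
    {
        struct rte_mbuf *pkts[32];
        int idle = 0;
        for (;;) {
            uint16_t nb = rte_eth_rx_burst(port, 0, pkts, 32);
            if (nb == 0) {
                /* no packets for a while: back off instead of burning
                   100% cpu, at the cost of some added delay */
                if (++idle > 16)
                    usleep(SLEEP_USEC);
                continue;
            }
            idle = 0;
            /* ... forward the nb packets ... */
        }
    }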
regards,
cs


On 2/8/22 14:21, wrote:
Hi Csaba,

I compared simple iperf3 results for vpp and freerouter (two iperf3
instances run on the host, traffic forwarded between two interfaces in a vm with
e1000e nics, mtu 1500).

For vpp (using one core) the results are better than for freerouter with
p4dpdk:
vpp - 2.2 Gbit/s, freerouter - 1.8 Gbit/s

If I add the options 0 0 1 1 2 3 -1 4 4 to p4dpdk then the result is mostly
the same (1.8 without these options and 1.85 with them).

Do you have any idea why vpp gives a better result?

I tried to add the kernel parameter isolcpus=1-4 recommended for vpp, but
p4dpdk with 0 1 2 1 3 4 does not work.

After adding the parameter -2 8192 0, ping -s 8150 works, but further
increasing this parameter does not allow increasing the packet size
beyond 8150.
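(guessing here: if the -2 knob sizes the mbuf data room, the port mtu usually
has to be raised too, and the nic driver, e1000e in this case, still has its
own jumbo-frame ceiling; in plain dpdk that second step would look roughly
like this hypothetical helper:)

    #include <rte_ethdev.h>

    /* raise the port mtu after sizing the mbufs; fails if the driver
       cannot handle frames that large */
    static int set_jumbo_mtu(uint16_t port, uint16_t mtu)
    {
        return rte_eth_dev_set_mtu(port, mtu);
    }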

Cheers.
Alexey

2022-02-03 10:14 GMT+02:00, mc36 <>:
hi,

On 2/3/22 08:46, mc36 wrote:
i can't tell you any eta on this... :)

the more i'm thinking about the stuff, the more i hate the resulting code,
so for now all i did was note the lack of this feature:
https://github.com/mc36/freeRouter/commit/86ea1d96e8d8ce2aef79278394f4e1c794ac4b57

basically i consider it under the same question as
why freerouter does not even fragment ipv4 in software...
and why the ipv6 rfc declared not to fragment in the path...

regards,
cs









