
Re: [RARE-users] [freertr] freeRouter rpms for CentOS and Fedora


  • From: "mc36" <>
  • Subject: Re: [RARE-users] [freertr] freeRouter rpms for CentOS and Fedora
  • Date: Wed, 9 Feb 2022 08:26:37 +0100
  • List-id: <freertr.groups.io>

and to introduce the other party: the guy in cc did some measurements and
compared them to vpp... with multi-session iperf, both codebases performed
equally, at around 2.2 gbps, whereas with a single session, p4dpdk was
somehow about 20% below vpp... my suggestion was to play with the tunables
(beyond the regular nic->socket and mem->socket bindings in qemu-kvm),
and seemingly raising the packet burst size bought back ~10%; the rest was
unmeasurable...
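
for reference, the knob in question is the rx/tx burst size; a minimal
sketch of such a dpdk loop (BURST_SIZE and the port/queue ids here are
illustrative, not the actual p4dpdk values, and eal/port init is assumed
to have run already):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 64  /* illustrative tunable: try 32/64/128 and measure */

  static void forward_loop(uint16_t rx_port, uint16_t tx_port) {
      struct rte_mbuf *pkts[BURST_SIZE];
      for (;;) {
          /* fetch up to BURST_SIZE packets in one call;         */
          /* larger bursts amortize the per-call driver overhead */
          uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
          if (nb_rx == 0)
              continue;
          uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);
          /* free whatever the tx queue did not accept */
          while (nb_tx < nb_rx)
              rte_pktmbuf_free(pkts[nb_tx++]);
      }
  }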
thanks,
cs


On 2/9/22 08:22, mc36 wrote:
hi,
so there is another guy (cc-ed) who recently started playing with the
p4dpdk... i bounced today's mailing to the ...
he already reached 6gbps in a vmware env and recently moved to a new
environment... please consider discussing your findings on-list, maybe
you could help each other... i'm personally not a big expert at
fine-tuning this stuff, being more interested in developing the code,
but seemingly you two are interested in the same topic right now... :)
thanks,
cs


On 2/9/22 06:24, mc36 wrote:


On 2/9/22 06:22, mc36 wrote:
hi,

On 2/9/22 06:20, mc36 wrote:
hi,

On 2/9/22 05:59, mc36 wrote:

Maybe vpp runs its workers in some specific way; it can run on the cores
listed in the isolcpus=1-4 kernel parameter.

to do that, 0 1 2 1 3 4 would do the trick...
but per your previous mail, maybe 0 1 1 1 2 2 ? :)

so i use the dpdk builtin from their examples to spawn the forwarding loop...
https://github.com/mc36/freeRouter/blob/c6c22bada52209759197736aaceed80ffe063878/misc/native/p4dpdk.h#L475
could you please verify whether it pins itself to the isolcpus?
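
(for reference, a self-contained pthreads sketch, not the freeRouter code,
of how a thread can print its own affinity and pin itself; the core number
is just an example:)

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  /* print the cpus the calling thread may run on, then pin it to one core */
  static void pin_self_to_core(int core) {
      cpu_set_t set;
      CPU_ZERO(&set);
      pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
      for (int i = 0; i < CPU_SETSIZE; i++)
          if (CPU_ISSET(i, &set))
              printf("may run on cpu %d\n", i);
      CPU_ZERO(&set);
      CPU_SET(core, &set);             /* e.g. one of the isolcpus cores */
      pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
  }

  int main(void) { pin_self_to_core(1); return 0; }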

i personally don't do such tuning because my vcpus easily forward
at 5gbps from an af_packet interface to my ixgbe
(which is expensive because of the user-kernel switching),
and do the full 10gbps between the two ports of the nic...
but that's native...
so my questions:
-you said kvm, but is it a multi-socket server?
-is the qemu process also tasksetted on the host, to the nic's socket?
-is the nic on the same numa node on the host?
-is the ram allocated on the same numa node on the host?

^^^^^ all of the above apply to the qemu process, on the host...
(a small host-side check for the nic's numa node is sketched below)
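
(a sysfs-based sketch of that check; "eth0" is a placeholder interface name:)

  #include <stdio.h>

  /* read the numa node of a nic on the host */
  static int nic_numa_node(const char *ifname) {
      char path[128];
      int node = -1;
      snprintf(path, sizeof(path),
               "/sys/class/net/%s/device/numa_node", ifname);
      FILE *f = fopen(path, "r");
      if (f == NULL)
          return -1;                   /* virtual nics have no pci device */
      if (fscanf(f, "%d", &node) != 1)
          node = -1;
      fclose(f);
      return node;                     /* -1 also means a single-node box */
  }

  int main(void) {
      printf("numa node of eth0: %d\n", nic_numa_node("eth0"));
      return 0;
  }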


just asking because all of these are taken into consideration when
allocating the mbufs in p4dpdk, and the whole <port> <rxcore> <txcore>
business is about exactly that... (mostly... :)
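
(roughly like this; the pool name and sizes are illustrative, and eal
init is assumed to have run already:)

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* allocate the mbuf pool on the same numa socket the port sits on */
  struct rte_mempool *make_pool(uint16_t port) {
      int sock = rte_eth_dev_socket_id(port);   /* numa node of the nic */
      return rte_pktmbuf_pool_create("mbufs", 8191, 256, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE, sock);
  }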

regards,
cs






