
Re: [rare-dev] Question from RARE/freeRtr team


  • From: Frédéric LOUI <>
  • To: 张云云 <>
  • Cc: 齐航 <>, Alexander Gall <>, 孙正 <>, <>, 张春风 <>
  • Subject: Re: [rare-dev] Question from RARE/freeRtr team
  • Date: Tue, 3 Oct 2023 17:40:39 +0200

[+1] Adding the rare-dev mailing list, in case some of us also try to get the X312T
working.

Hi Zhang,

I tried to use my AsterFusion credentials, but I consistently get an UNAUTHORIZED
access error.

I also tried the "forgot password" button, but never received a reset email
(and yes, I also checked my SPAM folder …).

Can you please let us know how to access this support portal?

Also, can you please send us the documentation so that we can install this
image on the DPU?

I am quite puzzled: since we received and installed this equipment in our lab,
none of the AsterFusion X312T units has worked correctly. The DPU becomes
unexpectedly unavailable and crashes, which makes the equipment simply unusable.

Thanks,
Frederic




> On 27 Sep 2023, at 05:24, 张云云 <> wrote:
>
> Dear Frederic,
>
> Good day, and I hope everything is going well there!
>
> We've uploaded Debian 12 on our help portal: https://help.cloudswit.ch/
> Please register and let us know, so that we can open the download permission for you.
>
> Best Regards,


>
> Ivy Zhang
> Overseas Marketing Manager
> Mob/ Whatsapp/ Telegram: +86 18916960201
> https://cloudswit.ch/
> Add: Floor 4, Building A2, Shahutiandi Park, No.192 Tinglan Rd, Suzhou,
> China
> From: "Frédéric LOUI"<>
> Date: Fri, Jul 7, 2023, 20:15
> Subject: Re: Question from RARE/freeRtr team
> To: "齐航"<>
> Cc: "Alexander Gall"<>, "张云云"<>,
> "孙正"<>, <>,
> "张春风"<>
> That would be great if you could provide Debian 12 support. Not sure if you have
> made progress since our last discussion with Zhang Zhang, who indicated that
> Debian 12 support (with rootfs) would be available in July 2023?
>
> > On 7 Jul 2023, at 14:05, 齐航 <> wrote:
> >
> > Hi Alex,
> > Sure, I will share a pure Debian with you.
> > And we can discuss any further issues with your implementation on the DPU next Monday.
> >
> > From: "Alexander Gall"<>
> > Date: Fri, Jul 7, 2023, 20:01
> > Subject: Re: Question from RARE/freeRtr team
> > To: "张云云"<>
> > Cc: "Frédéric LOUI"<>, "齐航"<>, "孙正"<>, <>, "张春风"<>
> >
> > On Fri, 07 Jul 2023 17:55:39 +0800, 张云云 <> said:
> >
> > > Please find below 2 solutions for your current problems:
> > >
> > > * To minimize the hugepages, please configure them with sysctl -w vm.nr_hugepages=xx (xx is the quantity, which can be 0).
> > > * To delete VPP, configure by:
> > >
> > > docker stop FusionNOS
> > > docker rm FusionNOS
> > > docker rmi fusionnos
> >
> > VPP is a simple systemd unit on our DPUs. And docker isn't running because it
> > fails to start (see messages below if you're interested).
> >
> > However, we believe that our use-case doesn't match the current DPU setup well
> > (overlay fs setup, docker/VPP that we don't need etc.), i.e. debugging these
> > problems doesn't seem useful to us. We would be more interested in installing
> > a basic Debian with DPDK support for the OCTEON. Would you be able to help us
> > with that? We could set up a VC to discuss this further.
> >
> > Regards,
> > Alex
> >
> > Feb 14 14:14:44 OCTEONTX systemd[1]: Starting Docker Application Container Engine...
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.783684760Z" level=info msg="libcontainerd: started new docker-containerd process" pid=7890
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.784741940Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.785005670Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.785318610Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}]" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.785594560Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.785907280Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x400090d9b0, CONNECTING" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.821406500Z" level=info msg="starting containerd" revision=9754871865f7fe2f4e74d43e2fc7ccd237edcbce version=18.09.1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.822605540Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.822925930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.823915610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.824226090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.830787620Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.76-17.0.1\\n\": exit status 1"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.831102070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.831401700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.831801870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.832536040Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.832807130Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.833073190Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.833322640Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.14.76-17.0.1\\n\": exit status 1"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.833581600Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.833972870Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.834229440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.834510360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.834766600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.835019510Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.835273960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.835523500Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.835776180Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.836059720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.836312780Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.836693580Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.837028640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.837955800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.838223970Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.838532260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.838781140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.839033960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.839284610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.839534640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.839783250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.840097390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.840349560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.840597030Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.840900730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.841147370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.841397130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.841644680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.842129620Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.842597060Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.842838490Z" level=info msg="containerd successfully booted in 0.022594s"
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.852699480Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x400090d9b0, READY" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.870504920Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.870803160Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.871088410Z" level=info msg="parsed scheme: \"unix\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.871328050Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.881752130Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}]" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.882093370Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.882702630Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b43b0, CONNECTING" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.884150340Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b43b0, READY" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.884456790Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}]" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.884735820Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.885324290Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b4670, CONNECTING" module=grpc
> > Feb 14 14:14:44 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:44.886657710Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b4670, READY" module=grpc
> > Feb 14 14:14:45 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:45.052097130Z" level=error msg="[graphdriver] prior storage driver devicemapper failed: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed"
> > Feb 14 14:14:45 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:45.054738690Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
> > Feb 14 14:14:45 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:45.055027440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
> > Feb 14 14:14:45 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:45.056623350Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b4670, TRANSIENT_FAILURE" module=grpc
> > Feb 14 14:14:45 OCTEONTX dockerd[7884]: time="2019-02-14T14:14:45.056899180Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x40006b4670, CONNECTING" module=grpc
> > Feb 14 14:14:46 OCTEONTX dockerd[7884]: Error starting daemon: error initializing graphdriver: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed
> > Feb 14 14:14:46 OCTEONTX systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
> > Feb 14 14:14:46 OCTEONTX systemd[1]: docker.service: Failed with result 'exit-code'.
> > Feb 14 14:14:46 OCTEONTX systemd[1]: Failed to start Docker Application Container Engine.
> >
> > > If you have any questions, please feel free to ask.
> > >
> > > Thanks & Best Regards,
> > >
> > > Ivy Zhang
> > > Overseas Marketing Manager
> > > Mob/ Whatsapp/ Telegram: +86 18916960201
> > > https://cloudswit.ch/
> > > Add: Floor 4, Building A2, Shahutiandi Park, No.192 Tinglan Rd, Suzhou, China
> > >
> > > From: "Frédéric LOUI"<>
> > > Date: Fri, Jul 7, 2023, 15:38
> > > Subject: Re: Question from RARE/freeRtr team
> > > To: "齐航"<>, "孙正"<>
> > > Cc: "Alexander Gall"<>, "Ivy"<>, <>
> > >
> > > Hi,
> > >
> > > If it is possible, I'd rather start from a fresh install / clean slate of the DPU.
> > >
> > > What I would suggest is the following approach:
> > >
> > > 1- Please indicate to us all the artefacts to download before the operation
> > > 2- Convene a Zoom session all together (or only with SunZheng)
> > > 3- Perform the re-installation of the whole DPU together
> > >
> > > Alex did a bit of analysis and noticed that you are running a setup with overlays,
> > > which is … volatile.
> > >
> > > In our case, we'd rather start with a Debian 12 OS and a DPDK obviously compatible
> > > with the Marvell OCTEON NIC.
> > >
> > > Please let me know your availability.
> > >
> > > All the best
> > > Frederic
> > >
> > > > On 7 Jul 2023, at 09:21, 齐航 <> wrote:
> > > >
> > > > Hi Frederic,
> > > > VPP is not required in this case.
> > > >
> > > > Hi Sunzheng, please help to disable, or even uninstall, the VPP applications persistently.
> > > >
> > > > Thanks,
> > > > Tsihang
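
For reference, the hugepage and VPP cleanup quoted in the thread above boils down to the short shell sketch below. The FusionNOS container/image names and the value 0 for vm.nr_hugepages are simply taken from the quoted emails and may differ on other firmware builds; as Alex points out, the docker commands only apply where dockerd actually starts.

    #!/bin/sh
    # Sketch of the cleanup steps quoted in the thread (names assumed from those emails).
    # Release the hugepages reserved for the DPU's VPP data plane (0 = none).
    sysctl -w vm.nr_hugepages=0
    # Stop and remove the FusionNOS (VPP) container and its image so it does not return after a reboot.
    docker stop FusionNOS
    docker rm FusionNOS
    docker rmi fusionnos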




