
FS#3733 - packet loss, link loss, bad rx status / Multihomed box with Huawei E3372 HiLink #8753

Closed
openwrt-bot opened this issue Apr 9, 2021 · 1 comment
Labels
flyspray · kernel (pull request/issue with Linux kernel related changes) · release/19.07 (pull request/issue targeted (also) for OpenWrt 19.07 release)

Comments

@openwrt-bot

ckowarzik:

  • Device problem occurs on
    Turris Omnia 2020
  • Software versions of OpenWrt/LEDE release, packages, etc.
    OpenWrt 19.07.7 a5672f6b96f393145070ad17c8eb1d15ef49ad2e

  • Problem description:

The network setup is multihomed: the onboard WAN device (eth2) and a Huawei E3372 (eth3) are configured with distinct gateways and metrics.
As soon as I start pinging from eth3 to a destination beyond its gateway, I observe packet loss, link loss, and "bad rx status (crc error)" messages on eth2.

  • Configuration:

Onboard WAN device eth2:
# dmesg | grep eth2
[ 4.473584] mvneta f1034000.ethernet eth2: Using hardware mac address d8:58:d7:01:14:fc
[ 17.582029] mvneta f1034000.ethernet eth2: PHY [f1072004.mdio-mii:01] driver [Marvell 88E1510]
[ 17.591455] mvneta f1034000.ethernet eth2: configuring for phy/sgmii link mode
[ 21.831900] mvneta f1034000.ethernet eth2: Link is Up - 1Gbps/Full - flow control rx/tx

Huawei E3372 USB modem with recent firmware (22.328.62.00.1217) in HiLink mode:
# lsusb | grep Huawei
Bus 004 Device 002: ID 12d1:14dc Huawei Technologies Co., Ltd. E33372 LTE/UMTS/GSM HiLink Modem/Networkcard

With kmod-usb-net-cdc-ether installed, the system successfully registers the new network device eth3:
# dmesg | grep "CDC Ethernet Device"
[ 12.553958] cdc_ether 4-1:1.0 eth3: register 'cdc_ether' at usb-f10f8000.usb3-1, CDC Ethernet Device, 0c:5b:8f:27:9a:64

Network configuration for eth2 and eth3:
# ip addr show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 532
link/ether d8:58:d7:01:14:fc brd ff:ff:ff:ff:ff:ff
inet xxx.xxx.xxx.xxx/29 brd xxx.xxx.xxx.xxx scope global eth2
valid_lft forever preferred_lft forever

# ip addr show eth3
14: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 0c:5b:8f:27:9a:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.8.100/24 brd 192.168.8.255 scope global eth3
valid_lft forever preferred_lft forever

# route -n | egrep "Destination|eth"
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 xxx.xxx.xxx.xxx 0.0.0.0 UG 10 0 0 eth2
0.0.0.0 192.168.8.1 0.0.0.0 UG 20 0 0 eth3
192.168.8.0 0.0.0.0 255.255.255.0 U 20 0 0 eth3
xxx.xxx.xxx.xxx 0.0.0.0 255.255.255.248 U 10 0 0 eth2
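For reference, the dual-default-route layout shown above could be reproduced by hand with iproute2 roughly as follows. This is a hedged sketch only: on OpenWrt the routes actually come from netifd/UCI, and `<wan-gateway>` is a placeholder for the redacted eth2 gateway.

```shell
# Sketch of the multihomed routing table (placeholder gateway for eth2).
# The lower metric wins, so eth2 is the preferred default route; eth3 is
# used only when selected explicitly (e.g. ping -I eth3) or as fallback.
ip route add default via "<wan-gateway>" dev eth2 metric 10
ip route add default via 192.168.8.1 dev eth3 metric 20
```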

  • Steps to reproduce
  1. Generate traffic originating from eth3 with a destination beyond its gateway
    (pinging the gateway itself from eth3 does not trigger the error):
    # ping -I eth3 -i 0.5 -q 1.1.1.1
  2. From a different terminal window, start pinging from eth2 and observe the packet loss:
    # ping -I eth2 -c 20 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) from xxx.xxx.xxx.xxx eth2: 56(84) bytes of data.
    64 bytes from 8.8.8.8: icmp_req=1 ttl=119 time=2.13 ms
    64 bytes from 8.8.8.8: icmp_req=2 ttl=119 time=2.15 ms
    64 bytes from 8.8.8.8: icmp_req=3 ttl=119 time=2.12 ms
    64 bytes from 8.8.8.8: icmp_req=4 ttl=119 time=2.12 ms
    64 bytes from 8.8.8.8: icmp_req=6 ttl=119 time=2.11 ms
    64 bytes from 8.8.8.8: icmp_req=13 ttl=119 time=2.13 ms
    64 bytes from 8.8.8.8: icmp_req=14 ttl=119 time=2.17 ms
    64 bytes from 8.8.8.8: icmp_req=15 ttl=119 time=2.13 ms
    64 bytes from 8.8.8.8: icmp_req=16 ttl=119 time=2.10 ms
    64 bytes from 8.8.8.8: icmp_req=17 ttl=119 time=2.11 ms
    64 bytes from 8.8.8.8: icmp_req=18 ttl=119 time=2.12 ms
    64 bytes from 8.8.8.8: icmp_req=19 ttl=119 time=2.11 ms
    64 bytes from 8.8.8.8: icmp_req=20 ttl=119 time=2.10 ms

--- 8.8.8.8 ping statistics ---
20 packets transmitted, 13 received, 35% packet loss, time 19364ms
rtt min/avg/max/mdev = 2.104/2.129/2.177/0.040 ms
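The reported loss figure can be cross-checked from the counters in the summary line (20 transmitted, 13 received); a minimal shell sketch, using integer arithmetic:

```shell
# Cross-check the loss percentage reported by ping above.
transmitted=20
received=13
loss=$(( (transmitted - received) * 100 / transmitted ))
echo "${loss}% packet loss"   # prints "35% packet loss"
```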

3. Check the kernel ring buffer for related entries:
# dmesg
[ 540.372989] mvneta f1034000.ethernet eth2: Link is Down
[ 542.449648] mvneta f1034000.ethernet eth2: Link is Up - 1Gbps/Full - flow control rx/tx
[ 605.886917] mvneta f1034000.ethernet eth2: Link is Down
[ 607.963595] mvneta f1034000.ethernet eth2: Link is Up - 1Gbps/Full - flow control rx/tx
[ 1164.426903] device eth2 entered promiscuous mode
[ 1166.118663] device eth2 left promiscuous mode
[ 1175.666308] device eth2 entered promiscuous mode
[ 1194.086756] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=391
[ 1195.740476] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=94
[ 1196.963670] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=82
[ 1197.536843] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=157
[ 1199.641764] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=81
[ 1204.552005] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=298
[ 1204.683304] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=156
[ 1237.241446] device eth2 left promiscuous mode
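To quantify how often the error fires during a test run, the CRC-error lines can be counted with grep. A hedged sketch, fed here from a heredoc so the snippet is self-contained; on the router you would pipe `dmesg` into the same pattern instead:

```shell
# Count "bad rx status ... (crc error)" lines for eth2 in a dmesg capture.
count=$(grep -c 'eth2: bad rx status .*(crc error)' <<'EOF'
[ 1194.086756] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=391
[ 1195.740476] mvneta f1034000.ethernet eth2: bad rx status 0c410000 (crc error), size=94
EOF
)
echo "$count crc errors on eth2"   # prints "2 crc errors on eth2"
```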

@aparcar added the release/19.07 and kernel labels on Feb 22, 2022
@ynezz (Member) commented May 24, 2022

The OpenWrt 19.07 release is EOL; please try to reproduce the issue with the latest supported release, and feel free to ask for the issue to be reopened if the problem is still present. Thanks.

@ynezz ynezz closed this as completed May 24, 2022