


Re: Odd TCP snd_cwnd behaviour, resulting in pathetic throughput?

From Greg Troxel <gdt@lexort.com>
Newsgroups muc.lists.netbsd.tech.net
Subject Re: Odd TCP snd_cwnd behaviour, resulting in pathetic throughput?
Date 2022-02-09 07:47 -0500
Organization Newsgate at muc.de e.V.
Message-ID <rmih798qkaf.fsf@s1.lexort.com>
References <YgNZokCV+FD6lGzg@slave.private>



[Multipart message; attachments not shown]

Paul Ripke <stix@stix.id.au> writes:

> I've recently been trying to get duplicity running for backups of my
> main home server, out to the magical cloud (aws s3 to wasabi.com).
>
> I have discovered that bandwidth oscillates in a sawtooth fashion,
> ramping up from near nothing, to saturate my uplink (somewhere around
> 20-30Mbit/s), then I get a dropped packet, and it drops to near nothing
> and repeats, each cycle taking around a minute. Looking at netstat -P
> for the PCB in question, I see the snd_cwnd following the pattern,
> which makes sense. I've flipped between reno, newreno & cubic, and
> while subtly different, they all have the snd_cwnd dropping to near
> nothing after a single dropped packet. I didn't think this was
> expected behaviour, especially with SACKs enabled.

Long ago I came close to finding a bug in how our TCP retransmits, but
for reasons unrelated to the bug itself I never managed to identify and
land a fix.

> Reading tcpdump, the only odd thing I see is that the duplicate ack
> triggering the fast retransmit is repeated 70+ times. But tracing
> other flows, this doesn't seem abnormal.

That is normal.  However, the third dupack should trigger "fast
recovery", which then clocks out one new (or retransmitted) segment for
every further dupack.
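For reference, the standard fast retransmit / fast recovery response
(RFC 5681) can be sketched roughly like this. This is an illustrative
model only, not NetBSD's actual tcp_input code; the numbers are made up:

```python
# Illustrative model of RFC 5681 fast retransmit / fast recovery.
# Sketch for explanation only, not NetBSD's implementation.

def on_dupack(state, dupacks):
    """Return new state after the `dupacks`-th duplicate ACK."""
    cwnd, ssthresh, mss = state["cwnd"], state["ssthresh"], state["mss"]
    if dupacks == 3:
        # Fast retransmit: halve the window and retransmit the lost segment.
        ssthresh = max(cwnd // 2, 2 * mss)
        cwnd = ssthresh + 3 * mss      # inflate by the three dupacks seen
    elif dupacks > 3:
        # Fast recovery: each further dupack clocks out one more segment.
        cwnd += mss
    return {"cwnd": cwnd, "ssthresh": ssthresh, "mss": mss}

def on_timeout(state):
    """An RTO, by contrast, collapses cwnd to one segment (slow start)."""
    mss = state["mss"]
    return {"cwnd": mss, "ssthresh": max(state["cwnd"] // 2, 2 * mss),
            "mss": mss}

state = {"cwnd": 20 * 1448, "ssthresh": 64 * 1024, "mss": 1448}
for n in range(3, 8):       # third dupack plus four more
    state = on_dupack(state, n)
# cwnd is now roughly half the old window plus inflation, not near zero.
print(state["cwnd"], state["ssthresh"])
```

The point of the sketch: after a single loss recovered by fast
retransmit, cwnd should sit near half its old value; only a
retransmission timeout should drop it to one segment.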

> It's worth noting that running their "speedtest" thru firefox running
> on the same machine is fine - and bandwidth is as I'd expect.

huh.

> Is there anyone willing to take a look at a pcap and tell me what
> I'm missing? ie. cluebat, please?

Sure, send it to me, or put it up for download.


The right tool for this is in pkgsrc as xplot-devel, though you may need
an unpackaged, modified tcpdump2xplot to keep up with drift in tcpdump's
output format.

> fwiw, I do have npf and altq configured, but disabling altq doesn't
> appear to change the behaviour.
>
> fwiw#2, I briefly toyed with the idea of bringing BBR from FreeBSD,
> but I think we'd need more infrastructure for doing pacing? And while
> it might "fix" this, I think we're better off fixing whatever is
> actually broken.

What you should be seeing is a sawtooth in speed, but one that, very
roughly, ramps from half to full and then drops back to half.
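A minimal AIMD simulation shows that shape. This is a sketch under
assumed parameters (loss every 20 RTTs, invented starting window), not
NetBSD code:

```python
# Sketch of the expected AIMD sawtooth: cwnd grows by one MSS per RTT
# and halves on each loss event (illustrative parameters, not NetBSD).

MSS = 1448

def aimd(rtts, loss_every, cwnd=10 * MSS):
    """Return the per-RTT cwnd trace, halving on each simulated loss."""
    history = []
    for rtt in range(1, rtts + 1):
        if rtt % loss_every == 0:
            cwnd = max(cwnd // 2, 2 * MSS)   # multiplicative decrease
        else:
            cwnd += MSS                       # additive increase
        history.append(cwnd)
    return history

trace = aimd(rtts=60, loss_every=20)
# The window oscillates between roughly half and full; the minimum stays
# far above one MSS, unlike a timeout-driven collapse into slow start.
print(min(trace), max(trace))
```

If the observed snd_cwnd instead falls to near nothing on every loss,
that points at something other than plain multiplicative decrease, e.g.
recovery failing and an RTO firing.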



Thread

Odd TCP snd_cwnd behaviour, resulting in pathetic throughput? Paul Ripke <stix@stix.id.au> - 2022-02-09 17:05 +1100
  Re: Odd TCP snd_cwnd behaviour, resulting in pathetic throughput? mlelstv@serpens.de (Michael van Elst) - 2022-02-09 06:33 +0000
  Re: Odd TCP snd_cwnd behaviour, resulting in pathetic throughput? Greg Troxel <gdt@lexort.com> - 2022-02-09 07:47 -0500
  Re: Odd TCP snd_cwnd behaviour, resulting in pathetic throughput? Paul Ripke <stix@stix.id.au> - 2022-02-11 10:29 +1100
