
WAN Performance Analysis

Newsgroups: comp.dcom.net-analysis
Date: Fri, 19 Oct 2018 08:13:44 -0700 (PDT)
Message-ID: <9d6daf0f-74cb-4187-b94d-8aaaeeb65caa@googlegroups.com>
Subject: WAN Performance Analysis
From: bobneworleans@gmail.com



A local backup of a server created a 200 GB file in under 40 minutes. A subsequent backup copy job then took over a week to copy the same data over the VPN to a remote site. I’d like to reduce this time by making the backup copy job more efficient. (I’d also like to improve my understanding of TCP so I can troubleshoot this type of issue more effectively.)

The slower of the two ISP links is 10 Mbps. At that rate, a 200 GB transfer would complete in roughly 44 hours. The average RTT from three pings is 42 ms, so I believe the bandwidth * delay product is 52,500 B.
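The arithmetic behind those two figures can be sanity-checked with a few lines (assuming decimal GB, i.e. 200 × 10^9 bytes):

```python
# Check the ideal transfer time and bandwidth*delay product
# quoted above. Plain arithmetic, no libraries needed.

link_bps = 10e6          # slower ISP link: 10 Mbps
file_bytes = 200e9       # 200 GB backup file (decimal GB assumed)
rtt_s = 0.042            # average RTT from three pings: 42 ms

transfer_hours = file_bytes * 8 / link_bps / 3600
bdp_bytes = link_bps * rtt_s / 8

print(f"ideal transfer time: {transfer_hours:.1f} hours")   # ~44.4 hours
print(f"bandwidth*delay product: {bdp_bytes:,.0f} B")       # 52,500 B
```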

Packet captures show that the backup copy job uses six concurrent streams. If each stream keeps only one 1410 B frame in flight per RTT, that’s 8,460 B, so the pipe is only about 16% full. Does this suggest I should increase the number of streams from six to roughly 36 to fill the pipe, or does TCP automatically open up its window (given no packet loss) to accomplish this?
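Under that one-frame-per-RTT assumption (which is itself the open question, since a healthy TCP stream grows its congestion window well past one segment per RTT), the stream count needed to fill the pipe works out like this:

```python
# Rough estimate of concurrent streams needed to fill the pipe,
# ASSUMING each stream keeps only one 1410 B frame in flight per
# RTT (i.e. the window never opens up). This is an illustrative
# sketch, not a claim about how the backup software behaves.

bdp_bytes = 52500        # from the 10 Mbps x 42 ms calculation
frame_bytes = 1410
streams = 6

in_flight = streams * frame_bytes
utilization = in_flight / bdp_bytes
streams_needed = -(-bdp_bytes // frame_bytes)   # ceiling division

print(f"in flight: {in_flight} B ({utilization:.0%} of pipe)")  # 8460 B, 16%
print(f"streams to fill pipe: {streams_needed}")                # 38
```

If the assumption is wrong and the windows do open up, adding streams attacks the wrong bottleneck; the limiter would more likely be the receive window, loss recovery, or VPN overhead.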

Wireshark shows 20-24% duplicate ACKs, depending on which side of the link the capture is taken. Segments that trigger duplicate ACKs typically draw fewer than a dozen, but I found one run that reached #25. I presume these are caused by packets dropped due to congestion; are there other likely causes? Would this level of packet loss, by itself, account for the poor transfer performance this job achieves? Is the VPN a likely cause?
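One way to gauge whether loss alone could explain the throughput is the well-known Mathis et al. approximation, which bounds a single TCP stream at roughly MSS * sqrt(3/2) / (RTT * sqrt(p)) for loss rate p. The loss rates below are illustrative guesses only: a 20-24% duplicate-ACK rate does not translate directly into p, since one dropped packet can generate many duplicate ACKs.

```python
# Single-stream throughput ceiling from the Mathis approximation:
#   rate <= MSS * sqrt(3/2) / (RTT * sqrt(p))
# The loss rates p below are ILLUSTRATIVE, not measured values.

import math

mss_bytes = 1410         # from the observed frame size
rtt_s = 0.042            # 42 ms average RTT

for p in (0.0001, 0.001, 0.01):
    rate_bps = mss_bytes * 8 * math.sqrt(1.5) / (rtt_s * math.sqrt(p))
    print(f"p={p:<7} single-stream ceiling ~ {rate_bps / 1e6:.2f} Mbps")
```

Even a 1% loss rate caps a single stream at roughly 3 Mbps here, so measuring the actual retransmission rate (Wireshark's `tcp.analysis.retransmission`, rather than the duplicate-ACK count) would tell you a lot.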

Please offer tuning suggestions, and point out anything I’m missing that would help answer these questions. THANKS!



