
Re: Performance difference connecting locally vs. remotely

From Andrew Gideon <c182driver1@gideon.org>
Subject Re: Performance difference connecting locally vs. remotely
Newsgroups comp.databases.mysql
References <Slkmp.153673$2k4.24412@news.usenetserver.com> <h06p68-stm.ln1@xl.homelinux.org>
Organization UseNetServer - www.usenetserver.com
Message-ID <Aknmp.155352$2k4.87970@news.usenetserver.com>
Date 2011-04-04 17:36 +0000



On Mon, 04 Apr 2011 17:51:13 +0200, Axel Schwenke wrote:

> As others already said: SSH encrypts, optionally compresses and
> multiplexes packets between endpoints. This is slow.

I know.  This all adds latency.  But since I don't normally notice it, 
I thought I wouldn't notice it here.  I suspect, as I wrote before, that 
I'm noticing it here only because there are so many "transactions" (in 
the communication sense of the word) that those little pauses add up.

[...]

> 1. you can upload (scp) your SQL file to the remote machine and
>    then load it there

That's what I did for "local" testing.  In production this won't be an 
option.
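For reference, option 1 amounts to something like this (hostname, user, and database name are placeholders, not our actual setup):

```shell
# Copy the dump to the DB host, then load it with a local connection,
# so no per-statement traffic crosses the network:
scp dump.sql dbhost:/tmp/dump.sql
ssh dbhost 'mysql -u someuser -p somedb < /tmp/dump.sql'
```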

> 2. you can use netcat to open an unencrypted (fast) tunnel:
>    remote: netcat -l -p 4711 | mysql -u... -p...
>    local:  cat sqlfile | netcat remote 4711

The server in question is behind a firewall and in RFC1918 address 
space.  Beyond doing some DNAT in our routers, my only access is via a VPN 
(with its own encryption).

On the other hand, this makes me realize that I was SSHing over the VPN.  
Thus: two steps of encryption.  So I can try using netcat to eliminate 
one of those steps, and see what happens.
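Concretely, the test looks something like this (hostnames and credentials are placeholders):

```shell
# On the DB host, inside the VPN: listen on a plain TCP port and pipe
# whatever arrives straight into the mysql client.
netcat -l -p 4711 | mysql -u someuser -p somedb

# On my side: send the dump through the VPN tunnel.  The VPN still
# encrypts the traffic, but the second (SSH) encryption layer is gone.
cat dump.sql | netcat db.internal 4711
```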

I'm running that test now.

It'll take a while to run.  But, just from a cursory examination of "show 
processlist", I'm not seeing the same occasional "query doing nothing" 
states.  So this is promising.
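(For anyone following along, I'm just polling the server while the load runs, along these lines; user and password are placeholders:)

```shell
# Re-run SHOW FULL PROCESSLIST every 2 seconds to watch thread states
# while the import is in progress:
watch -n 2 'mysql -u someuser -p -e "SHOW FULL PROCESSLIST"'
```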

I'm also seeing higher bandwidth utilization on the link than normal.  
That could just be a coincidence, but if it isn't, then it's another good 
sign.

>> One possibility would be to let the queries run asynchronously, where
>> query N could run before the response to query N-1 had been received.
> 
> This will not work. The MySQL wire protocol is synchronous. The server
> will not accept a request packet before it has sent the last response
> packet.

That's unfortunate.  I was hoping.  

Thanks...

	Andrew



