
Re: Need suggestions on transaction performance comparison

From Keve Nagy <keve@see.my.sig>
Newsgroups comp.databases
Subject Re: Need suggestions on transaction performance comparison
Date 2011-04-27 11:42 +0200
Message-ID <91q6pcFoq3U1@mid.individual.net> (permalink)
References <8umcroF55oU1@mid.individual.net> <201104071452.UTC.inkj2o$ogv$1@tioat.net>



> Hard to imagine a single system (not to mention the communication
> hardware) for that. 1) Do the math, say 50 million votes over 12 hours,
> that would be over one thousand 'insert transactions' per second on
> average. The peaks would likely be several or many times that. If there
> are more elected positions per voter than just a single parliamentary
> seat, multiply some more. 2) Look at all the plausible requirements, not
> just the population. Eg., factors like how to size recovery logs or not
> wanting to have to arrange a second election if the single system fails,
> not to mention the arguments amongst the second set of losers. Some
> things aren't naturally centralized, in most western countries votes are
> counted on a local basis, eg. per riding. That might result in an
> average load of only 100 transactions per second and several hundreds
> peak. But in most (so-called) democracies the candidates will have their
> own scrutineers and they'll want either paper ballots or paper output
> from the voting systems. It's hard to imagine a computer system that can
> do a judicial re-count.
>
>
> I'm not up on current hardware but I'd say that besides doing the paper
> analysis of the stuff above you could reasonably limit your stress-tests
> to a much smaller scale.

Thanks for pointing these out, Paul!
Even though I was aware of these numbers, I hadn't really thought through 
their combined effect the way you just highlighted. I have already amended 
the test plan accordingly, so it now considers only a scaled-down version 
of the task. Breaking the election down into regions is a perfect example.

In the meantime I have also made some progress on the implementation of 
the load test. I concluded that Perl DBI can be my best friend here, 
allowing me to construct a tool that spawns processes and inserts data 
records as required. It can also run on virtually any OS, and against any 
of my target database types.
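To make the idea concrete, here is a minimal sketch of such a timed insert loop. I've written it in Python against an in-memory SQLite database purely as a stand-in (my actual tool will use Perl DBI, and the `votes` table with its columns is a made-up example schema), but the pattern is the same: one INSERT plus one commit per simulated vote, recording how long each round trip takes.

```python
import sqlite3
import time

def run_insert_load(conn, n_rows):
    """Insert n_rows one at a time, committing each as its own
    transaction (one vote = one transaction), and return the list
    of per-insert latencies in seconds."""
    cur = conn.cursor()
    latencies = []
    for i in range(n_rows):
        t0 = time.perf_counter()
        cur.execute(
            "INSERT INTO votes (voter_id, candidate_id) VALUES (?, ?)",
            (i, i % 5),  # hypothetical schema: 5 candidates, round-robin
        )
        conn.commit()  # commit per row, so each insert is a full transaction
        latencies.append(time.perf_counter() - t0)
    return latencies

# Stand-in database; the real tool would connect to each target engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (voter_id INTEGER, candidate_id INTEGER)")
latencies = run_insert_load(conn, 1000)
```

The per-row commit matters: batching many INSERTs into one transaction would measure something quite different from the election workload, where every vote must be durable on its own.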

Where I still don't really have an answer is the measurement part.
I don't see how to measure each database's performance in handling the 
INSERTs. I can't quite grasp what to measure, how I could measure it, and 
how to feed the results into something like gnuplot to get comparable graphs.
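One possible answer, sketched below under my own assumptions (the function names and the output column layout are mine, not from any standard): reduce the raw per-insert latencies to a few summary numbers per engine (throughput, median and 95th-percentile latency), and append them as one whitespace-separated row per engine to a data file, which is exactly the format gnuplot reads natively.

```python
import statistics

def summarize(latencies):
    """Reduce raw per-insert latencies (in seconds) to the figures
    worth plotting: inserts/second plus latency percentiles in ms."""
    lat_ms = sorted(l * 1000.0 for l in latencies)
    n = len(lat_ms)
    total = sum(latencies)
    return {
        "inserts_per_sec": n / total if total > 0 else 0.0,
        "median_ms": statistics.median(lat_ms),
        "p95_ms": lat_ms[int(0.95 * (n - 1))],  # nearest-rank percentile
    }

def write_gnuplot_row(path, label, stats):
    """Append one row per engine: label, throughput, median, p95.
    Whitespace-separated columns are gnuplot's default data format."""
    with open(path, "a") as f:
        f.write(f"{label} {stats['inserts_per_sec']:.1f} "
                f"{stats['median_ms']:.3f} {stats['p95_ms']:.3f}\n")
```

With one such row per database engine in, say, results.dat, a comparison chart could then be as simple as:
plot "results.dat" using 2:xtic(1) with boxes title "inserts/sec"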

Thoughts and pointers on such technical details would be more than welcome!


Regards,
-- 
Keve Nagy * Debrecen * Hungary
keve(at)mail(dot)poliod(dot)hu



Thread

Re: Need suggestions on transaction performance comparison paul c <anonymous@not-for-mail.invalid> - 2011-04-07 07:52 -0700
  Re: Need suggestions on transaction performance comparison Keve Nagy <keve@see.my.sig> - 2011-04-27 11:42 +0200
    Re: Need suggestions on transaction performance comparison paul c <toledobythesea@gmail.com> - 2011-05-10 06:49 -0700
