Database Engine Efficiency
Posted by srdiamond15
on 1/3/2005
srdiamond15
1/3/2005 1:26 pm
One characteristic we have not discussed much is the power of the database engines. Less than ten years ago, one of the ways reviewers measured the speed of word processing programs was to time the scrolling of a long document. That no longer seems to say much about the speed of the program, because an application can easily be coded to scroll artificially fast. At least this is what I have surmised.
Does such a test have any applicability to databases? I wanted to see how the programs would handle a long note, so I imported a thousand-page document into several of them. HyperClip scrolled the document in 27 seconds; UltraRecall in 34. TexNotes did it in 9.
Does this reflect on the power of the engine, or is TexNotes merely optimised for longer notes?
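For what it's worth, here is a rough sketch of the kind of timing harness I have in mind. Everything in it is an assumption (the document size, the viewport of 40 lines, and the "render" step, which is just a stand-in for drawing text); a real test would drive the actual application's viewport, not an in-memory loop.

```python
import time

def make_document(pages=1000, lines_per_page=50):
    """Build a large in-memory document as a list of lines."""
    return [f"line {i}" for i in range(pages * lines_per_page)]

def scroll_through(lines, viewport=40):
    """Walk the document in screen-sized chunks, as a scroll would.

    The character count is a cheap stand-in for the work of
    actually drawing each screenful of text.
    """
    rendered = 0
    for start in range(0, len(lines), viewport):
        chunk = lines[start:start + viewport]
        rendered += sum(len(s) for s in chunk)
    return rendered

doc = make_document()
t0 = time.perf_counter()
total = scroll_through(doc)
elapsed = time.perf_counter() - t0
print(f"scrolled {len(doc)} lines in {elapsed:.3f} s")
```

Of course, a harness like this only measures the loop itself; the interesting differences between the programs are in what happens inside the "render" step.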
sub
1/3/2005 4:46 pm
[Stephen D.: HyperClip scrolled the document in 27 seconds; UltraRecall in 34. TexNotes did it in 9.]
My guess is that the above variance reflects the different programming routes taken. HyperClip admittedly uses a standard Microsoft control for RTF; TexNotes probably uses either original code or an open-source code segment integrated into the whole program.
Microsoft doesn't publish the actual code of its modules the way the open-source community does. This means that when HyperClip wants to edit or display RTF, it passes control to a higher-level "black-box" procedure; this is easy to do, but slower than integrated code.
It's a bit like having different surgeons alternately perform various interrelated steps of an operation. The result might be excellent, with each surgeon doing what they are best at; however, the necessary communication among them will probably double the total time required.
Overall, optimising code is not easy; that's why fewer and fewer programmers do it, why programs get bigger and bigger disproportionately to their added features, and why we need faster and faster machines to do more or less the same things (admittedly in a more stylish way). With PC prices being what they are, most companies find it hard to justify spending time optimising their software, since it won't make much difference on contemporary machines anyway.
alx
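To make the black-box point concrete, here is a toy sketch (not a measurement of any of the products named above): the same text-handling work done through an opaque "control" object, with a call boundary crossed for every character, versus done inline. The class and function names are invented for illustration.

```python
import time

class BlackBoxControl:
    """Stands in for a vendor-supplied control: one call per unit of work."""
    def append_char(self, buf, ch):
        buf.append(ch)

def via_control(text):
    """Build the result by handing each character to the control."""
    buf, control = [], BlackBoxControl()
    for ch in text:
        control.append_char(buf, ch)   # boundary crossed every character
    return "".join(buf)

def integrated(text):
    """The same result with no per-character call boundary."""
    return "".join(text)

sample = "x" * 200_000
for fn in (via_control, integrated):
    t0 = time.perf_counter()
    out = fn(sample)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.4f} s")
```

The two functions produce identical output; the only difference is how often control crosses the interface, which is exactly the cost the surgeon analogy is getting at. A real Windows control adds far more per-call overhead than a Python method, so the effect in practice is larger than this sketch suggests.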
