Alien-Judy
src/judy-1.0.5/test/SLcompare.c
// @(#) $Revision: 4.1 $ $Source: /judy/test/manual/SLcompare.c $
//=======================================================================
// Author Douglas L. Baskins, June 2002.
// Permission to use this code is freely granted, provided that this
// statement is retained.
// email - doug@sourcejudy.com
//=======================================================================
// Compile for choice of algorithm:
// cc -static -O -o Judy -DJUDYMETHOD SLcompare.c -lJudy
// cc -static -O -o Hash -DHASHMETHOD SLcompare.c
// cc -static -O -o Splay -DSPLAYMETHOD SLcompare.c
// cc -static -O -o Redblack -DREDBLACKMETHOD SLcompare.c
//
// Note: -static will generally get better performance because string
// routines are slower with shared libraries.
//
// Usage:
//
// Judy <textfile>
// Hash <textfile>
// Splay <textfile>
// Redblack <textfile>
/*
This program will give you a chance to measure the speed/space
performance of 4 different string store/lookup algorithms on your data
set. All are very fast and generally can scale to very large data sets.
Memory efficiency is usually best with the Judy algorithm because it
takes the opportunity to compress data in a digital tree. Hashing is
generally the fastest algorithm; however, it loses its advantage when
the number of stored strings exceeds about 5X the size of the hash table.
Many thanks to J. Zobel for supplying the code for Hash, Splay and
Redblack algorithms.
I will be including more algorithms as people donate code for testing.
The Judy code is available at: <http://sourceforge.net/projects/judy>
9/2002 (dlb).
This is the output of this program when run on a 750 MHz P3 laptop.
The input is a file available from www.google.com; it looks like
'http' web pages and was one of the files used in a March 2002
contest by www.google.com.
wc google/data/repos.00
3440314 14613027 162822437 google/data/repos.00
lines avg_linelen getline stored RAMused/line store/ln lookup/ln ADT
2807737 58.0 byts 0.856 uS 1455201 104.0 byts 4.859 uS 4.566 uS SPLAY
2807737 58.0 byts 0.862 uS 1455201 96.0 byts 2.523 uS 2.101 uS HASH
2807737 58.0 byts 0.856 uS 1455201 104.0 byts 4.601 uS 4.001 uS REDBLK
2807737 58.0 byts 0.875 uS 1455201 78.7 byts 3.898 uS 2.687 uS JUDY
With 1.46 million lines stored out of 2.81 million non-blank lines,
all the algorithms seem to keep their performance. The hash table is a
perfect size for this data set. I suspect only the HASH algorithm will
slow down with larger data sets, unless the hash table is grown beyond
its 2**20 (about a million) entries. I have now tried a data set with
about 10 million unique lines on a 64-bit machine (google/data/repos.*).
The speeds are about the same. Lookup times are in the 2-4 uSec range
and this is probably insignificant compared to the application needing
the data. I think all four methods are fine for speed, and Judy is a
standout for its memory efficiency -- frequently using less memory than
the strings alone.
I plan on tuning JudySL in the near future. This should improve the
performance of JudySL with short strings (e.g., single text words). I
have a database of all (28.8 million) domain names on the Internet as a
test case.
wc /data/domnames.txt
28826508 28826508 488227095 /data/domnames.txt
For the curious, JudySL (i.e., the JSLI/JSLG macros in this program) is
simply an application subroutine that uses JudyL to do the work. JudyL
is a very scalable word-to-word (or pointer) mapper. JudySL is
implemented as a digital tree using JudyL arrays as 2^32-wide (or
2^64-wide on 64-bit machines) virtual nodes. Each level in the digital
tree decodes the next 4 (or 8) bytes of the line/string until only one
entry would remain in the node; the remaining undecoded portion of the
line is then stored as a string from the parent node. A digital tree
does not need rotation or balancing. In practice, digital trees are
rarely used because of their poor memory efficiency in the nodes and a
tendency toward large depths. These problems are solved in the Judy
algorithm. For more details see the application notes on
www.sourcejudy.com, and for the really curious, a technical paper is
available in the application section. If you would like to be a
reviewer, contact Doug Baskins at <doug@sourcejudy.com>
*/
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h> // printf
#include <fcntl.h>
#include <string.h>
#include <errno.h> // errno
#include <sys/mman.h> // mmap()
#include <sys/stat.h> // stat()
#include <sys/time.h> // gettimeofday()
// remove all uses of this in production code.
int gDupCnt = 0; // counter for duplicate strings