Fsdb - a flat-text database for shell scripting
Fsdb, the flatfile streaming database, is a package of commands for manipulating flat-ASCII databases from shell scripts. Fsdb is useful for processing medium amounts of data (with very little data you'd do it by hand; with megabytes you might want a real database). Fsdb was known as Jdb from 1991 to Oct. 2008.
Fsdb is very good at doing things like:
extracting measurements from experimental output
examining data to address different hypotheses
joining data from different experiments
eliminating/detecting outliers
computing statistics on data (mean, confidence intervals, correlations, histograms)
reformatting data for graphing programs
Fsdb is built around the idea of a flat text file as a database. Fsdb files (by convention, with the extension .fsdb), have a header documenting the schema (what the columns mean), and then each line represents a database record (or row).
For example:
#fsdb experiment duration
ufs_mab_sys 37.2
ufs_mab_sys 37.3
ufs_rcp_real 264.5
ufs_rcp_real 277.9
is a simple file with four experiments (the rows), each with a description and a run time in the first and second columns.
Rather than hand-code scripts to do each special case, Fsdb provides higher-level functions. Although it's often easy to throw together a custom script to do any single task, I believe that there are several advantages to using this library:
these programs provide a higher-level interface than plain Perl, so you get fewer lines of simpler code:
dbrow '_experiment eq "ufs_mab_sys"' | dbcolstats duration
Picks out just one type of experiment and computes statistics on it, rather than:
while (<>) {
    split;
    $sum += $F[1]; $ss += $F[1]**2; $n++;
}
$mean = $sum / $n;
$std_dev = ...;
in dozens of places.
the library uses names for columns, so there's no more $F[1]; use _duration instead.
New columns, or columns in a different order? No changes to your scripts!
Thus if your experiment gets more complicated with a size parameter, so your log changes to:
#fsdb experiment size duration
ufs_mab_sys 1024 37.2
ufs_mab_sys 1024 37.3
ufs_rcp_real 1024 264.5
ufs_rcp_real 1024 277.9
ufs_mab_sys 2048 45.3
ufs_mab_sys 2048 44.2
Then the previous scripts still work, even though duration is now the third column, not the second.
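The name-to-position lookup that makes this robustness possible can be sketched in plain awk (a toy illustration, not Fsdb's implementation; the file name log.fsdb and the data are made up):

```shell
# Hypothetical log file in the format shown above.
printf '%s\n' \
  '#fsdb experiment size duration' \
  'ufs_mab_sys 1024 37.2' \
  'ufs_mab_sys 1024 37.3' > log.fsdb

# Resolve the name "duration" to a field position from the header,
# then average that field; adding or reordering columns still works.
mean=$(awk -v col=duration '
  NR == 1 { for (i = 2; i <= NF; i++) if ($i == col) want = i - 1; next }
  /^#/    { next }
  { sum += $want; n++ }
  END { printf "%.2f", sum / n }
' log.fsdb)
echo "$mean"
```

The header costs one extra line per file but lets every downstream tool address columns symbolically.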
A string of actions is self-documenting (each program records what it does).
No more wondering what hacks were used to compute the final data; just look at the comments at the end of the output. For example, the output of
dbrow '_experiment eq "ufs_mab_sys"' | dbcolstats duration
ends with comments recording each of these steps.
The library is mature, supporting large datasets, corner cases, and error handling, backed by an automated test suite.
No more puzzling about bad output because your custom script skimped on error checking.
No more memory thrashing when you try to sort ten million records.
Fsdb-2.x supports Perl scripting (in addition to shell scripting), with libraries to do Fsdb input and output, and easy support for pipelines. The shell script
dbcol name test1 | dbroweval '_test1 += 5;'
can be written in perl as:
dbpipeline(dbcol(qw(name test1)), dbroweval('_test1 += 5;'));
(The disadvantage is that you need to learn what functions Fsdb provides.)
Fsdb is built on flat-ASCII databases. By storing data in simple text
files and processing it with pipelines it is easy to experiment (in
the shell) and look at the output.
To the best of my knowledge, the original implementation of
this idea was /rdb
, a commercial product described in the book
UNIX relational database management: application development in the UNIX environment
by Rod Manis, Evan Schaffer, and Robert Jorgensen (and
also at the web page http://www.rdb.com/). Fsdb is an incompatible
re-implementation of their idea without any accelerated indexing or
forms support. (But it's free, and probably has better statistics!).
Fsdb-2.x supports threading and will exploit multiple processors or cores, and provides Perl-level support for input, output, and threaded-pipelines.
Installation instructions follow at the end of this document.
Fsdb-2.x requires Perl 5.8 to run.
All commands have manual pages and provide usage with the --help
option.
All commands are backed by an automated test suite.
The most recent version of Fsdb is available on the web at http://www.isi.edu/~johnh/SOFTWARE/FSDB/index.html.
kitrace_to_db now supports a --utc option, which also fixes this test case for users outside of the Pacific time zone. Bug reported by David Graff, and also by Peter Desnoyers (within a week of each other :-)
xml_to_db can convert simple, very regular XML files into Fsdb.
dbfilepivot "pivots" a file, converting multiple rows corresponding to the same entity into a single row with multiple columns.
Fsdb now uses the standard Perl build and installation from ExtUtil::MakeMaker(3), so the quick answer to installation is to type:
perl Makefile.PL
make
make test
make install
Or, if you want to install it somewhere else, change the first line to
perl Makefile.PL PREFIX=$HOME
and it will go in your home directory's bin, etc. (See the ExtUtil::MakeMaker(3) manpage for more details.)
Fsdb requires perl 5.8 or later and uses ithreads.
A test-suite is available, run it with
make test
A FreeBSD port to Fsdb is available, see http://www.freshports.org/databases/fsdb/.
A Fink (MacOS X) port is available, see http://pdb.finkproject.org/pdb/package.php/fsdb. (Thanks to Lars Eggert for maintaining this port.)
These programs are based on the idea of storing data in simple ASCII files. A database is a file with one header line and then data or comment lines. For example:
#fsdb account passwd uid gid fullname homedir shell
johnh * 2274 134 John_Heidemann /home/johnh /bin/bash
greg * 2275 134 Greg_Johnson /home/greg /bin/bash
root * 0 0 Root /root /bin/bash
# this is a simple database
The header line must be first and begins with #fsdb.
There are rows (records) and columns (fields),
just like in a normal database.
Comment lines begin with #.
Column names are any string not containing spaces or single quotes (although it is prudent to keep them alphanumeric with underscores).
By default, columns are delimited by whitespace. With this default configuration, the contents of a field cannot contain whitespace. However, this limitation can be relaxed by changing the field separator as described below.
The big advantage of simple flat-text databases is that it is usually easy to massage data into this format, and it's reasonably easy to take data out of this format into other (text-based) programs, like gnuplot, jgraph, and LaTeX. Think Unix. Think pipes. (Or even output to Excel and HTML if you prefer.)
Since no-whitespace in columns was a problem for some applications,
there's an option which relaxes this rule. You can specify the field
separator in the table header with -F x
where x
is the new field
separator. The special value -F S
sets a separator of two spaces, thus
allowing (single) spaces in fields. An example:
#fsdb -F S account passwd uid gid fullname homedir shell
johnh  *  2274  134  John Heidemann  /home/johnh  /bin/bash
greg  *  2275  134  Greg Johnson  /home/greg  /bin/bash
root  *  0  0  Root  /root  /bin/bash
# this is a simple database
See dbfilealter(1) for more details. Regardless of what the column separator is for the body of the data, it's always whitespace in the header.
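How a two-space separator admits single spaces inside fields can be sketched with plain awk (illustrative only; the sample line is made up):

```shell
# One data line in -F S (double-space) format; "alfred aho" is one field
# because a single space does not match the two-space separator.
line='alfred aho  a@ucla.edu  1  80'
name=$(printf '%s\n' "$line" | awk -F '  ' '{ print $1 }')
echo "$name"
```

With awk, a multi-character FS is treated as a regular expression, so '  ' splits only on runs of exactly two spaces here, mirroring the -F S convention.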
There's also a third format: a "list". Because it's often hard to tell which column is which past the first two, in list format each "column" is on a separate line. The programs dblistize and dbcolize convert to and from this format, and all programs work with either format. The command
dbfilealter -R C < DATA/passwd.fsdb
outputs:
#fsdb -R C account passwd uid gid fullname homedir shell
account: johnh
passwd: *
uid: 2274
gid: 134
fullname: John_Heidemann
homedir: /home/johnh
shell: /bin/bash

account: greg
passwd: *
uid: 2275
gid: 134
fullname: Greg_Johnson
homedir: /home/greg
shell: /bin/bash

account: root
passwd: *
uid: 0
gid: 0
fullname: Root
homedir: /root
shell: /bin/bash

# this is a simple database
# | dblistize
See dbfilealter(1) for more details.
A number of programs exist to manipulate databases. Complex functions can be made by stringing together commands with shell pipelines. For example, to print the home directories of everyone with ``john'' in their names, you would do:
cat DATA/passwd | dbrow '_fullname =~ /John/' | dbcol homedir
The output might be:
#fsdb homedir
/home/johnh
/home/greg
# this is a simple database
# | dbrow _fullname =~ /John/
# | dbcol homedir
(Notice that comments are appended to the output listing each command, providing an automatic audit log.)
In addition to typical database functions (select, join, etc.) there are also a number of statistical functions.
The real power of Fsdb is that one can apply arbitrary code to rows to do powerful things.
cat DATA/passwd | dbroweval '_fullname =~ s/(\w+)_(\w+)/$2,_$1/'
converts "John_Heidemann" into "Heidemann,_John". With a little more work, one could split fullname into separate firstname and lastname fields.
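The same per-row edit, sketched with plain sed for comparison (illustrative only; dbroweval additionally preserves the header and appends its audit comment):

```shell
# One data row from the passwd example above.
line='johnh * 2274 134 John_Heidemann /home/johnh /bin/bash'
# Swap the two name halves around the underscore, as the dbroweval
# expression does with Perl's s/(\w+)_(\w+)/$2,_$1/.
swapped=$(printf '%s\n' "$line" | sed -E 's/([[:alnum:]]+)_([[:alnum:]]+)/\2,_\1/')
echo "$swapped"
```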
An advantage of Fsdb is that you can talk about columns by name
(symbolically) rather than simply by their positions. So in the above
example, dbcol homedir
pulled out the home directory column, and
dbrow '_fullname =~ /John/'
matched against column fullname.
In general, you can use the name of the column listed on the #fsdb line to identify it in most programs, and _name to identify it in code.
Some alternatives for flexibility:
Numeric values identify columns positionally, numbering from 0. So 0 or _0 is the first column, 1 is the second, etc.
In code, _last_columnname gets the value of columnname from the previous row.
See dbroweval(1) for more details about writing code.
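The previous-row idea can be sketched in awk (hypothetical file names and data; dbrowdiff packages this up with proper header handling):

```shell
# counts.txt stands in for a cumulative counter sampled over time.
printf '%s\n' 10 15 21 > counts.txt
# Emit each row's difference from the previous row, the idea behind
# _last_columnname in dbroweval code.
awk 'NR > 1 { print $1 - prev } { prev = $1 }' counts.txt > diffs.txt
cat diffs.txt
```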
Enough said; I'll summarize the commands, and then you can experiment. For a detailed description of each command, see a summary by running it with the argument --help (or -? if you prefer). Full manual pages can be found by running the command with the argument --man, or by running the Unix command man dbcol (or whatever program you want).
dbcolcreate: add columns to a database
dbcoldefine: set the column headings for a non-Fsdb file
dbcol: select columns from a table
dbrow: select rows from a table
dbsort: sort rows based on a set of columns
dbjoin: compute the natural join of two tables
dbcolrename: rename a column
dbcolmerge: merge two columns into one
dbcolsplittocols: split one column into two or more columns
dbcolsplittorows: split one column into multiple rows
dbfilepivot: "pivot" a file, converting multiple rows corresponding to the same entity into a single row with multiple columns
dbfilevalidate: check that a db file doesn't have some common errors
dbcolstats: compute statistics over a column (mean, etc., optionally median)
dbmultistats: group rows by some key value, then compute stats (mean, etc.) over each group (equivalent to dbmapreduce with dbcolstats as the reducer)
dbmapreduce: group rows (map) and then apply an arbitrary function to each group (reduce)
dbrvstatdiff: compare two sample distributions (mean/conf interval/T-test)
dbcolmovingstats: compute moving statistics over a column of data
dbcolstatscores: compute Z-scores and T-scores over one column of data
dbcolpercentile: compute the rank or percentile of a column
dbcolhisto: compute histograms over a column of data
dbcolscorrelate: compute the coefficient of correlation over several columns
dbcolsregression: compute linear regression and correlation for two columns
dbrowaccumulate: compute a running sum over a column of data
dbrowcount: count the number of rows (a subset of dbcolstats)
dbrowdiff: compute differences between each row of a table
dbrowenumerate: number each row
dbroweval: run arbitrary Perl code on each row
dbrowuniq: count/eliminate identical rows (like Unix uniq(1))
dbcolneaten: pretty-print columns
dbfilealter: convert between column or list format, or change the column separator
dbfilestripcomments: remove comments from a table
dbformmail: generate a script that sends form mail based on each row
(These programs convert data into fsdb. See their web pages for details.)

html_table_to_db: HTML tables to fsdb (assuming they're reasonably formatted)
kitrace_to_db: kitrace logs to fsdb (see http://ficus-www.cs.ucla.edu/ficus-members/geoff/kitrace.html)
tabdelim_to_db: spreadsheet tab-delimited files to db
tcpdump_to_db: tcpdump output to fsdb (see man tcpdump(8) on any reasonable system)
xml_to_db: XML input to fsdb, assuming they're very regular

(And out of fsdb:)

db_to_csv: comma-separated-value format from fsdb
db_to_html_table: simple conversion of Fsdb to HTML tables
Many programs have common options:

--help: show basic usage
-c FRACTION: specify confidence interval FRACTION (dbcolstats, dbmultistats, etc.)
-C S or --element-separator S: specify column separator S (dbcolsplittocols, dbcolmerge)
-d: enable debugging (may be repeated for greater effect in some cases)
-a: compute stats over all data, treating non-numbers as zeros (by default, things that can't be treated as numbers are ignored for stats purposes)
-S: assume the data is pre-sorted; may be repeated to disable verification (saving a small amount of work)
-e E: give value E as the value for empty (null) records
-i I: input data from file I
-o O: write data out to file O
--nolog: skip logging the program in a trailing comment
When giving Perl code (in dbrow and dbroweval), column names can be embedded if preceded by underscores. Look at dbrow(1) or dbroweval(1) for examples.
Most programs run in constant memory and use temporary files if necessary. Exceptions are dbcolneaten, dbcolpercentile, dbmapreduce, dbmultistats, dbrowsplituniq.
Take the raw data in DATA/http_bandwidth, put a header on it (dbcoldefine size bw), compute statistics over each category (dbmultistats -k size bw), and pick out the relevant fields (dbcol size mean stddev pct_rsd), and you get:
#fsdb size mean stddev pct_rsd
1024 1.4962e+06 2.8497e+05 19.047
10240 5.0286e+06 6.0103e+05 11.952
102400 4.9216e+06 3.0939e+05 6.2863
# | dbcoldefine size bw
# | /home/johnh/BIN/DB/dbmultistats -k size bw
# | /home/johnh/BIN/DB/dbcol size mean stddev pct_rsd
(The whole command was:
cat DATA/http_bandwidth | dbcoldefine size bw | dbmultistats -k size bw | dbcol size mean stddev pct_rsd
all on one line.)
Then post-process them to get rid of the exponential notation by adding this to the end of the pipeline:
dbroweval '_mean = sprintf("%8.0f", _mean); _stddev = sprintf("%8.0f", _stddev);'
(Actually, this step is no longer required since dbcolstats now uses a different default format.)
giving:
#fsdb size mean stddev pct_rsd
1024 1496200 284970 19.047
10240 5028600 601030 11.952
102400 4921600 309390 6.2863
# | dbcoldefine size bw
# | dbmultistats -k size bw
# | dbcol size mean stddev pct_rsd
# | dbroweval { _mean = sprintf("%8.0f", _mean); _stddev = sprintf("%8.0f", _stddev); }
In a few lines, raw data is transformed to processed output.
Suppose you expect there is an odd distribution of results of one datapoint. Fsdb can easily produce a CDF (cumulative distribution function) of the data, suitable for graphing:
cat DB/DATA/http_bandwidth | \
    dbcoldefine size bw | \
    dbrow '_size == 102400' | \
    dbcol bw | \
    dbsort -n bw | \
    dbrowenumerate | \
    dbcolpercentile count | \
    dbcol bw percentile | \
    xgraph
The steps, roughly:
1. get the raw input data and turn it into fsdb format,
2. pick out just the relevant column (for efficiency) and sort it,
3. for each data point, assign a CDF percentage to it,
4. pick out the two columns to graph and show them.
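The sort-and-assign-fraction core of this pipeline can be sketched with standard tools (made-up values; the real pipeline keeps the Fsdb header and audit log):

```shell
# bw.txt stands in for the extracted bandwidth column.
printf '%s\n' 3 1 4 2 > bw.txt
# Sort, then give each row its cumulative fraction of the total row
# count: the work dbrowenumerate and dbcolpercentile do above.
sort -n bw.txt | awk '
  { v[NR] = $1 }
  END { for (i = 1; i <= NR; i++) printf "%s %.2f\n", v[i], i / NR }
' > cdf.txt
cat cdf.txt
```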
The first commercial program I wrote was a gradebook, so here's how to do it with Fsdb.
Format your data like DATA/grades.
#fsdb name email id test1
a a@ucla.edu 1 80
b b@usc.edu 2 70
c c@isi.edu 3 65
d d@lmu.edu 4 90
e e@caltech.edu 5 70
f f@oxy.edu 6 90
Or if your students have spaces in their names, use -F S and two spaces to separate each column:
#fsdb -F S name email id test1
alfred aho  a@ucla.edu  1  80
butler lampson  b@usc.edu  2  70
david clark  c@isi.edu  3  65
constantine drovolis  d@lmu.edu  4  90
debrorah estrin  e@caltech.edu  5  70
sally floyd  f@oxy.edu  6  90
To compute statistics on an exam, do
cat DATA/grades | dbcolstats test1 | dblistize
giving
#fsdb -R C ...
mean: 77.5
stddev: 10.84
pct_rsd: 13.987
conf_range: 11.377
conf_low: 66.123
conf_high: 88.877
conf_pct: 0.95
sum: 465
sum_squared: 36625
min: 65
max: 90
n: 6
...
To do a histogram:
cat DATA/grades | dbcolhisto -n 5 -g test1
giving
#fsdb low histogram
65 *
70 **
75
80 *
85
90 **
# | /home/johnh/BIN/DB/dbhistogram -n 5 -g test1
Now you want to send out grades to the students by e-mail. Create a form-letter (in the file test1.txt):
To: _email (_name)
From: J. Random Professor <jrp@usc.edu>
Subject: test1 scores

_name, your score on test1 was _test1.

86+ A
75-85 B
70-74 C
0-69 F
Generate the shell script that will send the mail out:
cat DATA/grades | dbformmail test1.txt > test1.sh
And run it:
sh <test1.sh
The last two steps can be combined:
cat DATA/grades | dbformmail test1.txt | sh
but I like to keep a copy of exactly what I send.
At the end of the semester you'll want to compute grade totals and assign letter grades. Both fall out of dbroweval. For example, to compute weighted total grades with a 40% midterm/60% final where the midterm is 84 possible points and the final 100:
dbcol -rv total | \
    dbcolcreate total - | \
    dbroweval '
        _total = .40 * _midterm/84.0 + .60 * _final/100.0;
        _total = sprintf("%4.2f", _total);
        if (_final eq "-" || ( _name =~ /^_/)) { _total = "-"; };' | \
    dbcolneaten
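The weighting arithmetic itself can be checked outside Fsdb with made-up scores (a sketch of the formula above, not of dbroweval):

```shell
# Hypothetical scores: midterm out of 84 points, final out of 100,
# weighted 40%/60% as in the dbroweval expression above.
midterm=63 final=85
total=$(awk -v m="$midterm" -v f="$final" \
  'BEGIN { printf "%4.2f", 0.40 * m / 84.0 + 0.60 * f / 100.0 }')
echo "$total"
```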
If you got the data originally from a spreadsheet, save it in "tab-delimited" format and convert it with tabdelim_to_db (run tabdelim_to_db -? for examples).
To convert the Unix password file to db:
cat /etc/passwd | sed 's/:/  /g' | \
    dbcoldefine -F S login password uid gid gecos home shell \
    >passwd.fsdb
To convert the group file
cat /etc/group | sed 's/:/  /g' | \
    dbcoldefine -F S group password gid members \
    >group.fsdb
To show the names of the groups that div7-members are in (assuming DIV7 is in the gecos field):
cat passwd.fsdb | dbrow '_gecos =~ /DIV7/' | dbcol login gid | \
    dbjoin - group.fsdb gid | dbcol login group
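The relational step here is an equi-join on gid; the same idea with coreutils join (an illustrative sketch with made-up data; dbjoin additionally matches by column name, keeps the #fsdb header, and handles unsorted input):

```shell
# Two tiny tables sharing a gid key, pre-sorted as join(1) requires.
printf '%s\n' '134 johnh' '135 greg' | sort > logins.txt
printf '%s\n' '134 div7' '135 div3' | sort > groups.txt
# join matches rows whose first fields are equal.
joined=$(join logins.txt groups.txt)
printf '%s\n' "$joined"
```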
Which Fsdb programs are the most complicated (based on number of test cases)?
ls TEST/*.cmd | \
    dbcoldefine test | \
    dbroweval '_test =~ s@^TEST/([^_]+).*$@$1@' | \
    dbrowuniq -c | \
    dbsort -nr count | \
    dbcolneaten
(Answer: dbmapreduce, then dbcolstats, dbfilealter and dbjoin.)
Stats on an exam (in $FILE, where $COLUMN is the name of the exam)?

dbcolstats -q 4 $COLUMN <$FILE | dblistize | dbstripcomments
cat $FILE | dbcolhisto -g -n 20 $COLUMN | dbcolneaten | dbstripcomments
Merging the hw1 column from file hw1.fsdb into grades.fsdb, assuming there's a common student id in column "id":
dbcol id hw1 <hw1.fsdb >t.fsdb
dbjoin -a -e - grades.fsdb t.fsdb id | \
    dbsort name | \
    dbcolneaten >new_grades.fsdb
Merging two fsdb files with the same rows:
cat file1.fsdb file2.fsdb >output.fsdb
or if you want to clean things up a bit
cat file1.fsdb file2.fsdb | dbstripextraheaders >output.fsdb
or if you want to know where the data came from
for i in 1 2
do
    dbcolcreate source $i < file$i.fsdb
done >output.fsdb
(assumes you're using a Bourne-shell compatible shell, not csh).
As with any tool, one should (which means must) understand the limits of the tool.
All Fsdb tools should run in constant memory. In some cases (such as dbcolstats with quartiles, where the whole input must be re-read), programs will spool data to disk if necessary.
Most tools buffer one or a few lines of data, so memory will scale with the size of each line. (So lines with many columns, or columns with lots of data, may cause large memory consumption.)
All Fsdb tools should run in constant or at worst n log n
time.
All Fsdb tools use normal Perl math routines for computation. Although I make every attempt to choose numerically stable algorithms, normal rounding due to computer floating point approximations can result in inaccuracies when data spans a large range of precisions. (See for example the dbcolstats_extrema test cases.)
The requirements and limitations of each Fsdb tool are documented on its manual page.
If any Fsdb program violates these assumptions, that is a bug that should be documented on the tool's manual page or ideally fixed.
There have been three versions of Fsdb; fsdb 1.0 is a complete re-write of the pre-1995 versions, and was distributed from 1995 to 2007. Fsdb 2.0 is a significant re-write of the 1.x versions for reasons described below.
Fsdb (in its various forms) has been used extensively by its author since 1991. Since 1995 it's been used by two other researchers at UCLA and several at ISI. In February 1998 it was announced to the Internet. Since then it has found a few users, some outside where I work.
I've thought about fsdb-2.0 for many years, but it was started in earnest in 2007. Fsdb-2.0 has the following goals:
While fsdb is great on the Unix command line as a pipeline between programs, it should also be possible to set it up to run in a single process. And if it does so, it should be able to avoid serializing and deserializing (converting to and from text) data between each module. (Accomplished in fsdb-2.0: see dbpipeline, although still needs tuning.)
Fsdb's roots go back to perl4 and 1991, so the fsdb-1.x library is very, very crufty. More than just being ugly (but it was that too), this made reading from one file format and writing to another the application's job, when it should be the library's. (Accomplished in fsdb-1.15 and improved in 2.0: see the Fsdb::IO manpage.)
Because fsdb modules were added as needed over 10 years, sometimes the module APIs became inconsistent. (For example, the 1.x dbcolcreate required an empty value following the name of the new column, but other programs specify empty values with the -e argument.) We should smooth over these inconsistencies. (Accomplished as each module was ported in 2.0 through 2.7.)
Given a clean IO API, the distinction between "colized" and "listized" fsdb files should go away. Any program should be able to read and write files in any format. (Accomplished in fsdb-2.1.)
Fsdb-2.0 preserves backwards compatibility where possible, but breaks it where necessary to accomplish the above goals. As of August 2008, fsdb-2.7 is the preferred version.
Fsdb includes code ported from Geoff Kuenning (Fsdb::Support::TDistribution).
Fsdb contributors: Ashvin Goel goel@cse.oge.edu, Geoff Kuenning geoff@fmg.cs.ucla.edu, Vikram Visweswariah visweswa@isi.edu, Kannan Varadahan kannan@isi.edu, Lars Eggert larse@isi.edu, Arkadi Gelfond arkadig@dyna.com, David Graff graff@ldc.upenn.edu, Haobo Yu haoboy@packetdesign.com, Pavlin Radoslavov pavlin@catarina.usc.edu, Fabio Silva fabio@isi.edu, Jerry Zhao zhaoy@isi.edu, Ning Xu nxu@aludra.usc.edu, Martin Lukac mlukac@lecs.cs.ucla.edu.
Fsdb includes datasets contributed from NIST (DATA/nist_zarr13.fsdb), from http://www.itl.nist.gov/div898/handbook/eda/section4/eda4281.htm, the NIST/SEMATECH e-Handbook of Statistical Methods, section 1.4.2.8.1. Background and Data. The source is public domain, and reproduced with permission.
As stated in the introduction, Fsdb is an incompatible reimplementation
of the ideas found in /rdb
. By storing data in simple text files and
processing it with pipelines it is easy to experiment (in the shell)
and look at the output. The original implementation of this idea was
/rdb, a commercial product described in the book UNIX relational
database management: application development in the UNIX environment
by Rod Manis, Evan Schaffer, and Robert Jorgensen (and also at the web
page http://www.rdb.com/).
In August, 2002 I found out Carlo Strozzi extended RDB with his package NoSQL http://www.linux.it/~carlos/nosql/. According to Mr. Strozzi, he implemented NoSQL in awk to avoid the Perl start-up of RDB. Although I haven't found Perl startup overhead to be a big problem on my platforms (from old Sparcstation IPCs to 2GHz Pentium-4s), you may want to evaluate his system. The Linux Journal has a description of NoSQL at http://www.linuxjournal.com/article/3294. It seems quite similar to Fsdb. Like /rdb, NoSQL supports indexing (not present in Fsdb). Fsdb appears to have richer support for statistics, and, as of Fsdb-2.x, its support for Perl threading may support faster performance (one-process, less serialization and deserialization).
Versions prior to 1.0 were released informally on my web page but were not announced.
started for my own research use
first check-in to RCS
parts now require perl5
adds autoconf support and a test script.
support for double space field separators, better tests
minor changes and release on comp.lang.perl.announce
adds dmalloc_to_db converter
fixes some warnings
dbjoin now can run on unsorted input
fixes a dbjoin bug
some more tests in the test suite
improves error messages (all should now report the program that makes the error)
fixed a bug in dbstats output when the mean is zero
Two mailing lists exist for Fsdb: fsdb-announce@heidemann.la.ca.us and fsdb-talk@heidemann.la.ca.us. To subscribe to either, send mail to fsdb-announce-request@heidemann.la.ca.us or fsdb-talk-request@heidemann.la.ca.us with "subscribe" in the BODY of the message.
2.0, 25-Jan-08 --- a quiet 2.0 release (gearing up towards complete)

Newly ported in 2.0: dbstats (renamed dbcolstats), dbcolrename, dbcolcreate. This release also provides perl function aliases for the internal modules, so a string of fsdb commands in perl is nearly as terse as in the shell:

use Fsdb::Filter::dbpipeline qw(:all);
dbpipeline(
    dbrow(qw(name test1)),
    dbroweval('_test1 += 5;')
);

Incompatible changes:

dbcolstats now outputs - (the default empty value) for statistics it cannot compute (for example, standard deviation if there is only one row), instead of the old mix of - and "na".

The dbcolstatscores -t mean,stddev option is now --tmean mean --tstddev stddev. See dbcolstatscores for details.

dbcolcreate now specifies empty values with the standard -e option.

dbrowcount gives the equivalent of dbcolstats' n output (except without differentiating numeric/non-numeric input), or the equivalent of dbstripcomments | wc -l.

The dbjoin -i option to include non-matches is now renamed -a, so as to not conflict with the new standard option -i for input file.
2.1, 6-Apr-08 --- another alpha 2.0, but now all converted programs understand both listize and colize format

The old dbjoin argument -i is now -a or --type=outer.

A minor change: comments in the source files for dbjoin are now intermixed with output rather than being delayed until the end.

The dbcolneaten -e option (to avoid end-of-line spaces) is now -E to avoid conflicts with the standard empty field argument.

The dbcolhisto -e option is now -E to avoid conflicts. And its -n, -s, and -w are now -N, -S, and -W to correspond.

Fsdb::IO now understands both list-format and column-format data, so all converted programs can now automatically read either format. This capability was one of the milestone goals for 2.0, so yea!
Release 2.2 is another 2.x alpha release. Now most of the commands are ported, but a few remain, and I plan one last incompatible change (to the file header) before 2.x final.
Shifting more old programs to Perl modules. New in 2.2: dbrowaccumulate, dbformmail, dbcolmovingstats, dbrowuniq, dbrowdiff, dbcolmerge, dbcolsplittocols, dbcolsplittorows, dbmapreduce, dbmultistats, dbrvstatdiff. Also dbrowenumerate exists only as a front-end (command-line) program.
The following programs have been dropped from fsdb-2.x: dbcoltighten, dbfilesplit, dbstripextraheaders, dbstripleadingspace.
combined_log_format_to_db to convert Apache logfiles
Options to dbrowdiff are now -B and -I, not -a and -i.
dbstripcomments is now dbfilestripcomments.
dbcolneaten better handles empty columns; dbcolhisto warning suppressed (actually a bug in high-bucket handling).
dbmultistats now requires a -k option in front of the key (tag) field, or if none is given, it will group by the first field (both like dbmapreduce).
dbmultistats with quantile option doesn't work currently.
dbcoldiff is renamed dbrvstatdiff.
dbformmail was leaving its log message as a command, not a comment. Oops. No longer.
Another alpha release, this one just to fix the critical dbjoin bug listed below (that happens to have blocked my MP3 jukebox :-).
Dbsort no longer hangs if given an input file with no rows.
Dbjoin now works with unsorted input coming from a pipeline (like stdin). Perl-5.8.8 has a bug (?) that was making this case fail---opening stdin in one thread, reading some, then reading more in a different thread caused an lseek which works on files, but fails on pipes like stdin. Go figure.
The dbjoin fix also fixed dbmultistats -q (it now gives the right answer). However, a new bug appeared, with messages like "Attempt to free unreferenced scalar: SV 0xa9dd0c4, Perl interpreter: 0xa8350b8 during global destruction." So the dbmultistats_quartile test is still disabled.
Another alpha release, mostly to fix minor usability problems in dbmapreduce and client functions.
dbrow now defaults to running user-supplied code without warnings (as with fsdb-1.x). Use --warnings or -w to turn them back on.
dbroweval can now write different format output than the input, using the -m option.
dbmapreduce emits warnings on perl 5.10.0 about "Unbalanced string table refcount" and "Scalars leaked" when run with an external program as a reducer.
dbmultistats emits the warning "Attempt to free unreferenced scalar" when run with quartiles.
In each case the output is correct. I believe these can be ignored.
dbmapreduce no longer logs a line for each reducer that is invoked.
Another alpha release, fixing more minor bugs in dbmapreduce and lossage in Fsdb::IO.
dbmapreduce can now tolerate non-map-aware reducers that pass back the key column in their output. It also passes the current key as the last argument to external reducers.
Fsdb::IO::Reader now correctly handles the -header option again. (Broken since fsdb-2.3.)
Another alpha release, needed to fix DaGronk. One new port, small bug fixes, and important fix to dbmapreduce.
Shifting more old programs to Perl modules. New in 2.6: dbcolpercentile.
dbcolpercentile now has a --rank option to require ranking instead of -r. Also, --ascending and --descending can now be specified separately, both for --percentile and --rank.

Sigh, the sense of the --warnings option in dbrow was inverted. No longer.
I found and fixed the string leaks (errors like "Unbalanced string table refcount" and "Scalars leaked") in dbmapreduce and dbmultistats. (All IO::Handles in threads must be manually destroyed.)
The -C option to specify the column separator in dbcolsplittorows now works again (broken since it was ported).
2.7, 30-Jul-08 beta
The beta release of fsdb-2.x. Finally, all programs are ported. As statistics, the number of lines of non-library code doubled from 7.5k to 15.5k. The libraries are much more complete, going from 866 to 5164 lines. The overall number of programs is about the same, although 19 were dropped and 11 were added. The number of test cases has grown from 116 to 175. All programs are now in perl-5, no more shell scripts or perl-4. All programs now have manual pages.
Although this is a major step forward, I still expect to rename "jdb" to "fsdb".
Shifting more old programs to Perl modules. New in 2.7: dbcolscorrelate, dbcolsregression, cgi_to_db, dbfilevalidate, db_to_csv, csv_to_db, db_to_html_table, kitrace_to_db, tcpdump_to_db, tabdelim_to_db, ns_to_db.
The following programs have been dropped from fsdb-2.x: db2dcliff, dbcolmultiscale, crl_to_db, ipchain_logs_to_db. They may come back, but seemed overly specialized. The program dbrowsplituniq was dropped because it is superseded by dbmapreduce. dmalloc_to_db was dropped pending test cases and examples.
dbfilevalidate now has a -c option to correct errors.
html_table_to_db provides the inverse of db_to_html_table.
Change header format, preserving forwards compatibility.
Complete editing pass over the manual, making sure it aligns with fsdb-2.x.
The header of fsdb files has changed: it is now #fsdb, not #h (or #L), and parsing of -F and -R is also different. See dbfilealter for the new specification. The v1 file format will be read, compatibly, but not written.
dbmapreduce now tolerates comments that precede the first key, instead of failing with an error message.
Still in beta; just a quick bug-fix for dbmapreduce.
dbmapreduce now generates plausible output when given no rows of input.
Still in beta, but picking up some bug fixes.
dbroweval the warnings option was backwards; now corrected. As a result, warnings in user code now default off (like in fsdb-1.x).
dbcolpercentile now defaults to assuming the target column is numeric.
The new option -N allows selection of a non-numeric target.
dbcolscorrelate now includes --sample and --nosample options to compute the sample or full-population correlation coefficients.
Thanks to Xue Cai for finding this bug.
Still in beta, but picking up some bug fixes.
html_table_to_db is now more aggressive about filling in empty cells with the official empty value, rather than leaving them blank or as whitespace.
dbpipeline now catches failures during pipeline element setup and exits reasonably gracefully.
dbsubprocess now reaps child processes, thus avoiding running out of processes when used a lot.
Finally, a full (non-beta) 2.x release!
Jdb has been renamed Fsdb, the flatfile-streaming database. This change affects all internal Perl APIs, but no shell command-level APIs. While Jdb served well for more than ten years, it is easily confused with the Java debugger (even though Jdb was there first!). It is also too generic to work well in web search engines. Finally, Jdb stands for ``John's database'', and we're a bit beyond that. (However, some call me the ``file-system guy'', so one could argue it retains that meaning.)
If you just used the shell commands, this change should not affect you. If you used the Perl-level libraries directly in your code, you should be able to rename "Jdb" to "Fsdb" to move to 2.12.
The jdb-announce list has not yet been renamed, but it will be shortly.
With this release I've accomplished everything I wanted to in fsdb-2.x. I therefore expect to return to boring, bugfix releases.
dbrowaccumulate now treats non-numeric data as zero by default.
Fixed a perl-5.10ism in dbmapreduce that breaks that program under 5.8. Thanks to Martin Lukac for reporting the bug.
Improved documentation for dbmapreduce's -f
option.
dbcolmovingstats now computes a moving standard deviation in addition to a moving mean.
Fix a make install bug reported by Shalindra Fernando.
Another minor release bug: on some systems programize_module loses executable permissions. Again reported by Shalindra Fernando.
Typo in the dbroweval manual fixed.
There is no longer a comment line to label columns in dbcolneaten; instead, the header line is tweaked to line up. This change restores the Jdb-1.x behavior, and means that repeated runs of dbcolneaten no longer add comment lines each time.
It turns out dbcolneaten was not correctly handling trailing spaces when given the -E option to suppress them. This regression is now fixed.
dbroweval(1) can now handle direct references to the last row via $lfref, a dubious but now documented feature.
Separators set with -C in dbcolmerge and dbcolsplittocols were not properly setting the heading, and null fields were not recognized. The first bug was reported by Martin Lukac.
Documentation for Fsdb::IO::Reader has been improved.
The package should now be PGP-signed.
Internal improvements to debugging output and robustness of dbmapreduce and dbpipeline. TEST/dbpipeline_first_fails.cmd re-enabled.
Logging for dbmapreduce with code refs is now stable (it no longer includes a hex pointer to the code reference).
Better handling of mixed blank lines in Fsdb::IO::Reader (see test case dbcolize_blank_lines.cmd).
html_table_to_db now handles multi-line input better, and handles tables with COLSPAN.
dbpipeline now cleans up threads in an eval to prevent "cannot detach a joined thread" errors that popped up in perl-5.10. Hopefully this prevents a race condition that caused the test suite to hang about 20% of the time (in dbpipeline_first_fails).
dbmapreduce now detects and correctly fails when the input and reducer have incompatible field separators.
dbcolstats, dbcolhisto, dbcolscorrelate, dbcolsregression, and dbrowcount now all take a -F option to let one specify the output field separator (so they work better with dbmapreduce).
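For context, this is roughly the computation a dbmapreduce-over-dbcolstats pipeline performs per key. The sketch below is illustrative Python, not Fsdb's implementation, and shows only mean and standard deviation:

```python
from collections import defaultdict
from statistics import mean, stdev

def group_stats(rows, key, value):
    """Group rows by `key` and compute (mean, stddev) of `value` per group.

    Illustrative sketch of a dbmapreduce + dbcolstats style computation;
    `rows` is a list of dicts mapping column names to string values.
    """
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(float(row[value]))
    return {k: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for k, v in groups.items()}
```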
The -k option, previously omitted from the manual page of dbmultistats, is now documented there. Bug reported by Unkyu Park.
Fsdb::IO::Writer now no longer fails with -outputheader => never (an obscure bug).
Fsdb (in the warnings section) and dbcolstats now more carefully document how they handle (and do not handle) numerical precision problems, and other general limits. Thanks to Yuri Pradkin for prompting this documentation.
Fsdb::Support::fullname_to_sortkey is now restored from Jdb.
Documentation for multiple styles of input approaches (including performance description) added to the Fsdb::IO manpage.
dbmerge now correctly handles n-way merges. Bug reported by Yuri Pradkin.
dbcolneaten now defaults to not padding the last column.
dbrowenumerate now takes -N NewColumn to give the new column a name other than "count". Feature requested by Mike Rouch in January 2005.
New program dbcolcopylast copies the last value of a column into a new column copylast_column of the next row. New program requested by Fabio Silva; useful for converting dbmultistats output into dbrvstatdiff input.
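A sketch of what dbcolcopylast computes, in illustrative Python (assuming "-", fsdb's conventional empty value, for the first row, which has no predecessor; this is a sketch of the behavior, not Fsdb's code):

```python
def copylast(rows, column):
    """Add copylast_<column> to each row, holding the previous row's value.

    Sketch of dbcolcopylast's behavior; "-" is assumed as the empty value
    for the first row. `rows` is a list of dicts of column -> string.
    """
    out = []
    prev = "-"                     # no previous row yet
    for row in rows:
        new_row = dict(row)
        new_row["copylast_" + column] = prev
        out.append(new_row)
        prev = row[column]
    return out
```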
Several tools (particularly dbmapreduce and dbmultistats) would report errors like "Unbalanced string table refcount: (1) for "STDOUT" during global destruction" on exit, at least on certain versions of Perl (for me on 5.10.1), but similar errors have been off-and-on for several Perl releases. Although I think my code looked OK, I worked around this problem with a different way of handling standard IO redirection.
Documentation to dbrvstatdiff was changed to use "sd" to refer to standard deviation, not "ss" (which might be confused with sum-of-squares).
The dbmultistats documentation was missing the -k option in some cases.
dbmapreduce was failing on MacOS-10.6.3 for some tests with the error
dbmapreduce: cannot run external dbmapreduce reduce program (perl TEST/dbmapreduce_external_with_key.pl)
The problem seemed to be only in error reporting, not in operation. On MacOS, the error is now suppressed. Thanks to Alefiya Hussain for providing access to a Mac system that allowed debugging of this problem.
The csv_to_db command requires an external Perl library (Text::CSV_XS). On computers that lack this optional library, previously Fsdb would configure with a warning and then test cases would fail. Now those test cases are skipped with an additional warning.
The test suite now supports alternative valid output, as a hack to account for last-digit floating point differences. (Not very satisfying :-(
dbcolstats output for confidence intervals on very large datasets has changed. Previously it failed for more than 2^31-1 records, and handling of T-Distributions with thousands of rows was a bit dubious. Now datasets with more than 10000 rows are considered infinitely large and hopefully handled correctly.
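The large-dataset approximation amounts to replacing the t quantile with the normal quantile once n is big. A hedged sketch in Python (illustrative only, not dbcolstats's code):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def conf_interval_95(xs):
    """Large-sample 95% confidence interval for the mean.

    Sketch of the "treat n as infinite" approximation: use the normal
    quantile z instead of the t quantile, which is reasonable for big n.
    """
    n = len(xs)
    z = NormalDist().inv_cdf(0.975)        # ~1.96 for a two-sided 95% CI
    half_width = z * stdev(xs) / sqrt(n)
    m = mean(xs)
    return (m - half_width, m + half_width)
```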
The dbfilealter command had a --correct option to work around incompatible field separators, but it did nothing. Now it does the correct, but sad, data-losing thing.
The dbmultistats command previously failed with an error message when invoked on input with a non-default field separator. The root cause was the underlying dbmapreduce that did not handle the case of reducers that generated output with a different field separator than the input. We now detect and repair incompatible field separators. This change corrects a problem originally documented and detected in Fsdb-2.20. Bug re-reported by Unkyu Park.
John Heidemann, johnh@isi.edu
Fsdb is Copyright (C) 1991-2011 by John Heidemann <johnh@isi.edu>.
This program is free software; you can redistribute it and/or modify it under the terms of version 2 of the GNU General Public License as published by the Free Software Foundation.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
A copy of the GNU General Public License can be found in the file ``COPYING''.
Any comments about these programs should be sent to John Heidemann
johnh@isi.edu
.