varlena
PostgreSQL Training,
Consulting & Support
General Bits
By A. Elein Mustain

10-Oct-2005 Issue: 121


General Bits is a column loosely based on the PostgreSQL mailing list pgsql-general.
To find out more about the pgsql-general list and PostgreSQL, see www.PostgreSQL.org.

Birthdays
[SQL] BirthDay SQL Issue 01-Oct-2005

We have a people table that contains their date of birth.

	db=# select * from birthdays;
	  name     |    dob
	-----------+------------
	 petruchio | 1961-06-02
	 ophelia   | 1978-10-15
	 lucinda   | 1959-12-31
	 emilia    | 1958-05-24
	 miranda   | 1964-03-13
	 grumio    | 1968-10-17

And now we want to know which birthdays fall in a given date range, for example from the start to the end of October. We want to be able to find people's birthdays for this month or this week.

Thinking carefully about the information we have and the information we want, we see that we want to discard the years and compare only the months and days. To do that we can use the handy dandy to_char function to chop off the year. We could have used extract to pull out the month and then again to pull out the day, but one stroke is nicer. In addition, the text result compares just the same as the integers would, since 'MMDD' is zero-padded and fixed-width. And because we want to find dates falling within a range, we will use the BETWEEN operator.

If we had the date range in Month-Day format already, the query and result would be:

	select name from birthdays where to_char(dob,'MMDD') between '1001' and '1031' ;
	  name
	---------
	 ophelia
	 grumio
	(2 rows)
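
For the "this month" and "this week" cases mentioned above, the range can be computed from current_date rather than written by hand. A sketch, using the birthdays table above (note that a week spanning December 31 would wrap around and need two BETWEENs joined with OR):

```sql
-- Birthdays this month: compare the month alone
SELECT name FROM birthdays
WHERE to_char(dob, 'MM') = to_char(current_date, 'MM');

-- Birthdays in the next seven days (date + integer adds days in PostgreSQL);
-- this simple form assumes the range does not cross a year boundary
SELECT name FROM birthdays
WHERE to_char(dob, 'MMDD')
      BETWEEN to_char(current_date, 'MMDD')
          AND to_char(current_date + 6, 'MMDD');
```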

Contributors: brett bhalligan at yahoo.com, Richard Huxton dev at archonet.com, Andreas Kretschmer akretschmer at spamfence.net, Kenneth Gonsalves lawgon at thenilgiris.com, elein at varlena.com

Restoring Dump Data
[GENERAL] Shell script to extract a table from a plain text dump 07-Oct-2005

It is fairly easy to restore a table from a dump file. Simply copy the dump file to a new file name, say working.sql, and find the COPY statement that loads the data into the table you want. Then delete all of the lines above the COPY and below its terminating \. line, and you have everything you need to restore the table. From there, connect to your database with psql, delete the old (and presumably bad) rows, and copy them back in using \i working.sql.
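
The manual edit can also be done non-interactively with a sed address range. A minimal sketch, using a tiny stand-in for a real dump file and a hypothetical table name mytable:

```shell
# Build a tiny stand-in for a plain-text dump (a real pg_dump file
# has the same COPY ... / \. structure around the data)
cat > dump.sql <<'EOF'
CREATE TABLE mytable (id integer);
COPY mytable (id) FROM stdin;
1
2
\.
CREATE TABLE other (id integer);
EOF

# Keep only the COPY block for mytable: from the COPY line through
# the terminating backslash-dot line
sed -n '/^COPY mytable /,/^\\\./p' dump.sql > working.sql
cat working.sql
```

The resulting working.sql holds just the COPY statement, its rows, and the closing \., ready to feed to \i in psql.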

For the lazy, and only for cases where the table name is unique in the dump file, Christopher Kings-Lynne has written a nice shell script to save you the editing. But don't use it if the table name is not unique in the entire dump, including other databases and schemas: you will simply get the first match, which may or may not be the right one. The script also assumes that the dump file is bzip2'd. If the dump is uncompressed, or compressed with something else, just change the decompression line to do the right thing.

	#!/bin/sh
	
	# This script extracts a single table from a PostgreSQL pg_dumpall plain
	# text dump.  This is useful for restoring the data for just one table.
	#
	# Usage: restore_table.sh backup_bzip table_name
	
	# Check that arguments are given
	if [ "$1" = "" -o "$2" = "" ]; then
	       echo "Usage: restore_table.sh backup_bzip table_name"
	       exit 1;
	fi
	
	# Check that we're not going to clobber existing files
	if [ -e working.sql -o -e "$2.sql" ]; then
	       echo "Error: restore_table.sh: working files already exist"
	       echo "Error: Remove working.sql and $2.sql before restarting. "
	       exit 1;
	fi
	
	# Extract the backup to a working SQL script
	bunzip2 < "$1" > working.sql
	
	# Find the line before the table's COPY output begins
	START=`grep -n "^COPY $2 " working.sql | sed -e 's/:.*//'`
	START=$(($START-1))
	
	# Remove all of the working file before the COPY
	sed -i -e 1,${START}d working.sql
	
	# Find line number at which COPY ends
	END=`grep -n "^\\\\\\." working.sql | head -1 | sed -e 's/:.*//'`
	END=$(($END+1))
	
	# Remove all contents of the working file after the end of the COPY
	sed -i -e $END,\$d working.sql
	
	# Rename the working file to the table name
	mv working.sql "$2.sql"
	exit 0;

Watch out for foreign keys! If you cannot delete the old, presumably bad rows because other tables reference them, a cascading delete is not recommended: it will delete data you may not have wanted to delete (!). This poses a trickier problem that this nice script alone cannot solve. A more careful analysis of the differences between the existing data and the dumped table will be required, and specific corrective updates may be needed.
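
One way to start that analysis, sketched with placeholder table names: load the dumped rows into a scratch table and diff it against the live table with EXCEPT, so only the rows that differ need corrective updates.

```sql
-- Scratch copy with the same columns as the live table
CREATE TABLE mytable_dump (LIKE mytable);
-- load mytable.sql here, after pointing its COPY target at mytable_dump

-- Rows in the dump that are missing or changed in the live table
SELECT * FROM mytable_dump
EXCEPT
SELECT * FROM mytable;

-- Rows in the live table that are not in the dump
SELECT * FROM mytable
EXCEPT
SELECT * FROM mytable_dump;
```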

Contributors: Christopher Kings-Lynne chriskl at familyhealth.com.au, elein at varlena.com


Comments and Corrections are welcome. Suggestions and contributions of items are also welcome. Send them in!
Copyright A. Elein Mustain 2003, 2004, 2005, 2006, 2007, 2008, 2009
