request open-ils.actor opensrf.open-ils.system.ils_version
It doesn't have to be open-ils.actor; any service will work.
http://acq.open-ils.org/gateway?service=open-ils.actor&method=opensrf.open-ils.system.ils_version
Replace acq.open-ils.org with the hostname of the server you want to test.
psql -U <dbuser> <dbname>
# update actor.usr set passwd = 'open-ils' where usrname = 'admin';
# \q
Another option with newer versions of Evergreen, as the opensrf user:
Open-ILS/src/support-scripts/eg_db_config.pl --admin-password new_password
SELECT now() - xact_start AS "duration", waiting, current_query
FROM pg_stat_activity
WHERE current_query !~ '<IDLE>'
  AND (now() - xact_start) > '30 seconds'::INTERVAL
ORDER BY 1 DESC;
perl -Mencoding=utf8 -MUnicode::Normalize -e 'while(<>){ $_=NFC($_);while(/([\x{0080}-\x{fffd}])/gc){ print "line $., $1 -> ".sprintf("\%0.4x",ord($1))."\n"; } }' filename.ext
CREATE OR REPLACE FUNCTION perl_find_char(text, text) RETURNS text AS $$
    use Encode;
    my $x = shift;
    Encode::_utf8_off($x);
    my $char = shift;
    my $re = qr/$char/;
    return $x if ($x =~ /$re/);
    return undef;
$$ LANGUAGE plperlu;

CREATE OR REPLACE FUNCTION perl_rm_char(text, text) RETURNS text AS $$
    use Encode;
    my $x = shift;
    Encode::_utf8_off($x);
    my $char = shift;
    my $re = qr/$char/;
    $x =~ s/$re/$1$2/g;
    Encode::_utf8_on($x);
    return $x;
$$ LANGUAGE plperlu;
Usage:
BEGIN;
UPDATE some_table
SET the_field = perl_rm_char(the_field, E'([\\x00-\\x79])[\\x80-\\xff]([\\x00-\\x79])')
WHERE perl_find_char(the_field, E'[\\x00-\\x79][\\x{80}-\\x{FF}][\\x{00}-\\x{79}]') IS NOT NULL;
EXPLAIN ANALYZE SELECT LOWER(the_field) FROM some_table;
-- If the last command does not throw an error, you have succeeded
COMMIT;
A quick way to view the circulation policies, in PostgreSQL, with the names instead of id values, similar to the Staff Client Circulation Policy Editor. Substitute a.org_unit=1 with the id of your library from the actor.org_unit table.
SELECT a.id, a.active, a.circ_modifier, b.shortname, c.name, a.renewals, d.name, e.name, f.name
FROM config.circ_matrix_matchpoint AS a
LEFT JOIN actor.org_unit b ON a.org_unit = b.id
LEFT JOIN permission.grp_tree c ON a.grp = c.id
LEFT JOIN config.rule_circ_duration d ON a.duration_rule = d.id
LEFT JOIN config.rule_recurring_fine e ON a.recurring_fine_rule = e.id
LEFT JOIN config.rule_max_fine f ON a.max_fine_rule = f.id
WHERE a.org_unit = 1;
Another approach shows more columns and their human-readable values; change the WHERE clause to get different rules, or comment it out to see them all. By yboston, inspired by bshum:
SELECT ccmm.id AS "ID",
       ccmm.active AS "Active",
       org.shortname AS "copy_circ_lib",
       pgt.name AS "Patron Group",
       ccmm.circ_modifier AS "Circ Modifier",
       loc.name AS "copy_location",
       ccmm.is_renewal AS "Renewal?",
       ccmm.circulate AS "Circulate?",
       crcd.name AS "Circ Duration",
       crrf.name AS "Recurring Fine",
       ccmm.grace_period,
       crmf.name AS "Max Fine",
       limit_set.items_out AS "Max Items Out",
       array_to_string(array_agg(lscm.circ_mod), ', ') AS "Limited Circ Mods"
FROM config.circ_matrix_matchpoint ccmm
INNER JOIN actor.org_unit aou ON aou.id = ccmm.org_unit
INNER JOIN permission.grp_tree pgt ON pgt.id = ccmm.grp
INNER JOIN config.rule_circ_duration crcd ON crcd.id = ccmm.duration_rule
INNER JOIN config.rule_recurring_fine crrf ON crrf.id = ccmm.recurring_fine_rule
INNER JOIN config.rule_max_fine crmf ON crmf.id = ccmm.max_fine_rule
LEFT OUTER JOIN config.circ_matrix_limit_set_map map ON map.matchpoint = ccmm.id
LEFT OUTER JOIN config.circ_limit_set limit_set ON limit_set.id = map.limit_set
LEFT OUTER JOIN config.circ_limit_set_circ_mod_map lscm ON lscm.limit_set = limit_set.id
LEFT OUTER JOIN actor.org_unit org ON (ccmm.copy_circ_lib = org.id)
LEFT OUTER JOIN asset.copy_location loc ON (loc.id = ccmm.copy_location)
WHERE ccmm.copy_circ_lib = 101 -- OPTIONAL: set one or more org unit id numbers here
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
ORDER BY copy_circ_lib, pgt.name, ccmm.circ_modifier;
If you need to cancel a query/report that is stuck and locking up the queue, use pg_cancel_backend(backend_pid). This will kill the query and allow the clark process to (1) notice and (2) clean up after the canceled query.
SELECT pg_cancel_backend(backend_pid);
The backend_pid comes from the procpid column of pg_stat_activity.
SELECT procpid, now()-query_start AS duration, current_query FROM pg_stat_activity WHERE current_query <> '<IDLE>' ORDER BY 2;
This will give you the current queries, ordered by duration. Reports are easy to spot because the reporter uses MD5 hashes as table aliases.
For databases running PostgreSQL 9.2+, the pg_stat_activity view is a little different, so this query is more appropriate:
SELECT pid, now()-query_start AS duration, query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY 2;
Given a simple org tree of A->B->C, if a user at C shares a report folder with all of C and then shares a subfolder with A or B, the display of shared folders for all users at A or B will be broken (the location of the breakage depends on the sort order of the username). To fix this, the top-level folder must be shared most widely, with subfolders shared at the same or a more restrictive level. This SQL will list the folders that are shared more widely than their parents:
-- 3 types of folders, 3 queries.
SELECT rpt.id AS "Template Folder id", rpt.name AS "Name", au.usrname AS "User",
       rpt.share_with AS "Shared With", prpt.share_with AS "Parent Shared With"
FROM reporter.template_folder rpt
JOIN reporter.template_folder prpt ON (rpt.parent = prpt.id)
JOIN actor.usr au ON (rpt.owner = au.id)
WHERE rpt.shared IS TRUE
  AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with))
       OR prpt.shared IS FALSE);

SELECT rpt.id AS "Report Folder id", rpt.name AS "Name", au.usrname AS "User",
       rpt.share_with AS "Shared With", prpt.share_with AS "Parent Shared With"
FROM reporter.report_folder rpt
JOIN reporter.report_folder prpt ON (rpt.parent = prpt.id)
JOIN actor.usr au ON (rpt.owner = au.id)
WHERE rpt.shared IS TRUE
  AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with))
       OR prpt.shared IS FALSE);

SELECT rpt.id AS "Output Folder id", rpt.name AS "Name", au.usrname AS "User",
       rpt.share_with AS "Shared With", prpt.share_with AS "Parent Shared With"
FROM reporter.output_folder rpt
JOIN reporter.output_folder prpt ON (rpt.parent = prpt.id)
JOIN actor.usr au ON (rpt.owner = au.id)
WHERE rpt.shared IS TRUE
  AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with))
       OR prpt.shared IS FALSE);
Using Bucardo's check_postgres script with Nagios/Icinga? Add checks for these:
check_postgres --query="SELECT COUNT(rpt.id) AS result, COALESCE(STRING_AGG(rpt.id::TEXT, ','), 'none') AS ids FROM reporter.template_folder AS rpt JOIN reporter.template_folder AS prpt ON (rpt.parent = prpt.id) WHERE rpt.shared IS TRUE AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with)) OR prpt.shared IS FALSE);" -c 1 -u evergreen -H localhost --action=custom_query -db evergreen

check_postgres --query="SELECT COUNT(rpt.id) AS result, COALESCE(STRING_AGG(rpt.id::TEXT, ','), 'none') AS ids FROM reporter.report_folder AS rpt JOIN reporter.report_folder AS prpt ON (rpt.parent = prpt.id) WHERE rpt.shared IS TRUE AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with)) OR prpt.shared IS FALSE);" -c 1 -u evergreen -H localhost --action=custom_query -db evergreen

check_postgres --query="SELECT COUNT(rpt.id) AS result, COALESCE(STRING_AGG(rpt.id::TEXT, ','), 'none') AS ids FROM reporter.output_folder AS rpt JOIN reporter.output_folder AS prpt ON (rpt.parent = prpt.id) WHERE rpt.shared IS TRUE AND (prpt.share_with NOT IN (SELECT id FROM actor.org_unit_ancestors(rpt.share_with)) OR prpt.shared IS FALSE);" -c 1 -u evergreen -H localhost --action=custom_query -db evergreen
This is useful for auditing who has been granted privileges where:
SELECT DISTINCT au.id, au.first_given_name, au.family_name,
       puwom.work_ou, aou.shortname, pugm.grp, pgt.name
FROM actor.usr au
INNER JOIN permission.usr_grp_map pugm ON au.id = pugm.usr
INNER JOIN permission.usr_work_ou_map puwom ON au.id = puwom.usr
INNER JOIN permission.grp_tree pgt ON pugm.grp = pgt.id
INNER JOIN actor.org_unit aou ON puwom.work_ou = aou.id
-- WHERE puwom.work_ou IN (110, 125, 126) -- uncomment this clause and adjust to limit to specific org_units
--   AND pgt.id IN (17, 18)               -- uncomment this clause and adjust to limit to members of specific perm groups
ORDER BY family_name, work_ou, grp;
SELECT (SELECT code FROM permission.perm_list WHERE id = perm) AS "Perm",
       (SELECT 'Depth: ' || depth || ' Grantable: ' || CASE WHEN grantable THEN 'True' ELSE 'False' END
        FROM permission.grp_perm_map AS b
        WHERE b.grp = ID_FOR_GROUP_X AND b.perm = a.perm) AS "Group X",
       (SELECT 'Depth: ' || depth || ' Grantable: ' || CASE WHEN grantable THEN 'True' ELSE 'False' END
        FROM permission.grp_perm_map AS b
        WHERE b.grp = ID_FOR_GROUP_Y AND b.perm = a.perm) AS "Group Y"
FROM permission.grp_perm_map AS a
WHERE grp IN (ID_FOR_GROUP_X, ID_FOR_GROUP_Y)
ORDER BY 1;

SELECT code AS "Perm",
       (SELECT 'Depth: ' || depth || ' Grantable: ' || CASE WHEN grantable THEN 'True' ELSE 'False' END
        FROM permission.grp_perm_map AS b
        WHERE b.grp = ID_FOR_GROUP_X AND b.perm = a.id) AS "Group X",
       (SELECT 'Depth: ' || depth || ' Grantable: ' || CASE WHEN grantable THEN 'True' ELSE 'False' END
        FROM permission.grp_perm_map AS b
        WHERE b.grp = ID_FOR_GROUP_Y AND b.perm = a.id) AS "Group Y"
FROM permission.perm_list AS a
ORDER BY 1;
SELECT child.ou_id AS child_id, child.depth AS child_depth,
       parent.ou_id AS parent_id, parent.depth AS parent_depth
FROM (
    SELECT aou.id AS ou_id, aou.parent_ou AS parent, aout.depth AS depth
    FROM actor.org_unit AS aou
    LEFT JOIN actor.org_unit_type AS aout ON (aou.ou_type = aout.id)
) AS parent,
(
    SELECT aou.id AS ou_id, aou.parent_ou AS parent, aout.depth AS depth
    FROM actor.org_unit AS aou
    LEFT JOIN actor.org_unit_type AS aout ON (aou.ou_type = aout.id)
) AS child
WHERE child.parent = parent.ou_id
  AND child.depth <> parent.depth + 1;
SELECT id,
       circ_lib,
       (SELECT owning_lib FROM asset.call_number WHERE id = call_number) AS "owning_lib",
       status,
       (SELECT holdable FROM config.copy_status WHERE id = status) AS "status holdable",
       (SELECT opac_visible FROM config.copy_status WHERE id = status) AS "status OPAC visible",
       location,
       (SELECT circulate FROM asset.copy_location WHERE id = location) AS "loc circulate",
       (SELECT holdable FROM asset.copy_location WHERE id = location) AS "loc holdable",
       (SELECT opac_visible FROM asset.copy_location WHERE id = location) AS "loc OPAC visible",
       circulate,
       holdable,
       opac_visible,
       age_protect,
       circ_modifier,
       circ_as_type,
       ref,
       alert_message IS NOT NULL AS "alert"
FROM asset.copy
WHERE id = #item_id_here#;
Assuming an admin/super-user or a login with the DEBUG_CLIENT permission, press Control+Shift+F7 from within the staff client to open a debug textbox near the top of the interface. Enter the following into that textbox and press Execute:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
JSAN.use('util.file');
var f = new util.file('ws_info');
alert(f._file.path);
Find your database's oid and make a note of it:
psql -U evergreen -h localhost -d postgres -c "select oid from pg_database where datname = 'evergreen'"
Then, as the root or postgres user, run the following commands:
cd /var/lib/postgresql/${pgversion}/main/base/${oid}
cat * > /dev/null
Replace ${pgversion} with your PostgreSQL version number, e.g. 9.5. Replace ${oid} with the number returned by the database query.
Ben Shum asked me to share the scripts Evergreen Indiana uses to purge reports older than 30 days.
15 3 * * * cd /var/evergreen/scripts && psql -U evergreen < purgereports.sql >/dev/null 2>&1
15 3 * * * cd /openils/bin && ./deleteoldreports.sh >/dev/null 2>&1
DELETE FROM reporter.schedule WHERE complete_time < (now() - '30 days'::INTERVAL);
DELETE FROM reporter.report WHERE create_time < (now() - '30 days'::INTERVAL) AND recur = FALSE;
DELETE FROM reporter.report WHERE create_time < (now() - '30 days'::INTERVAL) AND id NOT IN (SELECT report FROM reporter.schedule);
reportdelete=`./find_orphaned_reports.pl`
for i in $reportdelete; do rm -r $i; done
Example from Lebbeous Fogle-Weekley (see here for context):
request open-ils.search open-ils.search.biblio.multiclass.query {} "piano limit(10) offset(10)"
Another example from Bill Erickson in the same discussion (from JSPAC logs):
request open-ils.search open-ils.search.biblio.multiclass.query {"org_unit":1,"depth":0,"limit":10,"offset":0,"visibility_limit":3000,"default_class":"keyword"}, "concerto", 1
INSERT INTO permission.grp_perm_map (grp, perm, depth)
SELECT ID_FOR_TARGET_GROUP, perm, depth
FROM permission.grp_perm_map
WHERE grp = ID_FOR_SOURCE_GROUP
  AND perm NOT IN (
      SELECT perm FROM permission.grp_perm_map WHERE grp = ID_FOR_TARGET_GROUP
  );
perl -MUnicode::Normalize -MEncode -MMARC::Charset=marc8_to_utf8 -e 'binmode(STDIN,":bytes");binmode(STDOUT,":utf8");while(<>){ Encode::_utf8_off($_); chomp; $_=marc8_to_utf8($_); print NFC($_)."\n"}' in-file > out-file
~/cvs/ILS/Open-ILS/src/extras/import/marc2bre.pl -k ../catflex.keys -c /opt/openils/conf/bootstrap.conf ../marc/xaa.mrc \
  | ~/cvs/ILS/Open-ILS/src/extras/import/direct_ingest.pl -c /opt/openils/conf/bootstrap.conf -t 1 \
  | ~/cvs/ILS/Open-ILS/src/extras/import/pg_loader.pl -c /opt/openils/conf/bootstrap.conf \
      -or bre -or mrd -or mfr -or mtfe -or mafe -or msfe -or mkfe -or msefe \
      -a mrd -a mfr -a mtfe -a mafe -a msfe -a mkfe -a msefe > xaa.sql
#!/usr/bin/perl
use CGI;
use MARC::Record;
use MARC::File::XML;

print "Content-type: application/marc+xml\n\n" .
    MARC::Record->new_from_usmarc( CGI->new->param('marc') )->as_xml_record;
__END__

To test this, run the chunk below from the command line:

$ perl ./marc2xml.cgi marc=00514cam%20%2022001451%20%204500001001800000008004100018050002100059100003100080245005500111260004500166300004100211490003500252500007400287999000700361%1E%20%20%2001031483%20%2F%2Fr25%1E921016s1901%20%20%20%20nyuab%20%20%20%20%20%20%20%20%2000010%20eng%20%20%1E0%20%1FaPS2472%1Fb%2EO3%201901%1E1%20%1FaNorris%2C%20Frank%2C%1Fd1870%2D1902%2E%1E14%1FaOctopus%3B%1Fba%20story%20of%20California%2C%1Fcby%20Frank%20Norris%2E%1E0%20%1FaNew%20York%2C%1FbDoubleday%2C%20Page%20%26%20co%2E%2C%1Fc1901%2E%1E%20%20%1Fa4%20p%2El%2E%2C%20652%20p%2E%1Fbillus%2E%20%28map%29%1Fc20%20cm%2E%1E0%20%1FaHis%20The%20epic%20of%20the%20wheat%2E%20%5BI%5D%1E%20%20%1Fa%22Deals%20with%20the%20war%20between%20the%20wheat%20grower%20and%20the%20railroad%20trust%2E%22%1E%20%20%1Faa1%1E%1D
First, stick this in /openils/var/web/opac/extras/new-records.cgi:
#!/bin/bash
echo -n "Location: /"
wget -o /dev/null -O - 'http://localhost/opac/extras/feed/freshmeat/rss2/biblio/import/10/' | grep 'rel="OPAC"' | cut -f2 -d'>' | cut -f1 -d'<' | cut -f4- -d\/
echo "Status: 302"
echo ""
And then add this to your eg_vhost.conf:
<Directory "/openils/var/web/opac/extras">
    <Files new-records.cgi>
        ForceType cgi-script
    </Files>
    Options ExecCGI
</Directory>
(stolen shamelessly from http://support.internetconnection.net/CODE_LIBRARY/Perl_URL_Encode_and_Decode.shtml … I don't want to lose the tip.)
Here's an excellent way to encode strings that are going to be placed in a URL:
$str =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;
$str can now be passed in the query string safely, for example:
<a href="page_given_string.pl?str=$str">LINK</a>
Likely, "page_given_string.pl" will want to decode $str back into something useful:
$str =~ s/\%([A-Fa-f0-9]{2})/pack('C', hex($1))/seg;
Get all 650a fields for record 133027
SELECT val FROM oils_xpath_table('id','marc','biblio.record_entry', '//*[@tag="650"]/*[@code="a"]', 'id=' || 133027) AS foo (id INT, val TEXT);
BEGIN;

INSERT INTO metabib.metarecord (fingerprint, master_record)
SELECT DISTINCT ON (b.fingerprint) b.fingerprint, b.id
FROM biblio.record_entry b
WHERE NOT b.deleted
  AND b.id IN (
      SELECT r.id
      FROM biblio.record_entry r
      LEFT JOIN metabib.metarecord_source_map k ON (k.source = r.id)
      WHERE k.id IS NULL AND r.fingerprint IS NOT NULL
  )
  AND NOT EXISTS (SELECT 1 FROM metabib.metarecord WHERE fingerprint = b.fingerprint)
ORDER BY b.fingerprint, b.quality DESC;

INSERT INTO metabib.metarecord_source_map (metarecord, source)
SELECT m.id, r.id
FROM biblio.record_entry r
JOIN metabib.metarecord m USING (fingerprint)
WHERE NOT r.deleted
  AND r.id IN (
      SELECT b.id
      FROM biblio.record_entry b
      LEFT JOIN metabib.metarecord_source_map k ON (k.source = b.id)
      WHERE k.id IS NULL
  );

-- and if all's well ...
COMMIT;
For this example, here is how to include tag 590s in keyword searches:
INSERT INTO config.metabib_field (field_class,name,xpath,format) VALUES ( 'keyword', '590', $$//marc:datafield[@tag="590"]$$, 'marcxml' );
You can limit the update on biblio.record_entry to just records that have the desired tag. This would be easily adapted to force the 001 of your bibliographic records to be the record ID as well, effectively making the record ID the system control number.
An enhanced version would test to see if there is a 001 field in the first place, and create it if it's not there.
CREATE OR REPLACE FUNCTION serial.record_entry() RETURNS TRIGGER AS $serial_record_entry$
BEGIN
    NEW.marc := REGEXP_REPLACE(NEW.marc, E'^(.*?<controlfield tag="001">).*?(<.*$)', E'\\1' || NEW.record || E'\\2', 's');
    RETURN NEW;
END;
$serial_record_entry$ LANGUAGE plpgsql;

CREATE TRIGGER serial_record_entry_update_001
    BEFORE INSERT OR UPDATE ON serial.record_entry
    FOR EACH ROW EXECUTE PROCEDURE serial.record_entry();
Please use with caution!
UPDATE biblio.record_entry SET marc = regexp_replace(marc,E'<datafield[^>]*?tag="059".+?</datafield>','','g');
After that, it is best to reingest the record (using the method appropriate to your Evergreen version listed below).
As of 2.0, record ingest is handled by stored procedures in the database. The OpenSRF ingest methods should not be used at present, as they bypass the stored procedures and can no longer be guaranteed to produce the same results. To selectively reingest a bib record:
-- update flag to force reingesting the record even if the MARC hasn't changed
UPDATE config.internal_flag SET enabled = TRUE WHERE name = 'ingest.reingest.force_on_same_marc';
UPDATE biblio.record_entry SET id = id WHERE id = 123;
UPDATE config.internal_flag SET enabled = FALSE WHERE name = 'ingest.reingest.force_on_same_marc';
The same spell can be used to reingest an authority record:
UPDATE config.internal_flag SET enabled = TRUE WHERE name = 'ingest.reingest.force_on_same_marc';
UPDATE authority.record_entry SET id = id WHERE id = 123;
UPDATE config.internal_flag SET enabled = FALSE WHERE name = 'ingest.reingest.force_on_same_marc';
There are several other flags that can be tweaked to selectively control which index tables get updated. These include:
Some flags apply when inserting new bibs:
Other flags apply when dealing with authority records or changes to authorities:
From Galen Charlton:
-- query to identify all bibs that are missing 901s
-- or whose first 901$c is not equal to the bib ID
-- this *will* take a long time in a big database
SELECT id
FROM biblio.record_entry
WHERE (XPATH('//marc:datafield[@tag="901"][1]/marc:subfield[@code="c"]/text()',
             marc::XML,
             ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]))[1]::TEXT IS NULL
   OR (XPATH('//marc:datafield[@tag="901"][1]/marc:subfield[@code="c"]/text()',
             marc::XML,
             ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]))[1]::TEXT <> id::TEXT;
While the reporter.materialized_simple_record view is convenient, sometimes those all-lowercase titles are annoying (for example, in shelf list reports that you're preparing for public consumption). In those cases, a little XPATH can go a long way, but you need to correct for the XML array that XPATH() returns, which you can do with ARRAY_TO_STRING(), like the following:
SELECT ARRAY_TO_STRING(
    XPATH('//marc:datafield[@tag="245"][1]//text()',
          marc::XML,
          ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]),
    ' '
)
FROM biblio.record_entry
WHERE ...
grep -Hn duration: pg.*.log|grep -v vacuum|awk '{print $16 " " $1}'| sed 's/:/ /g'|awk '{print $1 "\t" $2 ":" $3}'|sort -run|head
grep multiclass activity.log |grep 'offset":0,'| PERL5LIB=/root/ perl -MJSON -pe 's/^.+{"searches":(.+),"org.+$/$1/gso;eval {$_ = (values(%{JSON->JSON2perl($_)}))[0]->{term};s/"//gso;s/^\s*(.+)\s*/$1/gso;s/\s+/ /gso;$_=lc($_)."\n"};$_="\n" if ($@)'|sort|uniq -c|sort -nr > query-counts.txt
(for i in `perl -e 'print join " " => map { sprintf "\%02d" => $_ } 1 .. 31;'`; do (cd $i; echo $i; ~/stats/top_searches.sh 2>/dev/null ); done) | tee may_searches.txt
This is useful for getting a reasonable count of the number of copies of monographs held by a given library based on the first character of the LC call number. For Dewey you could adjust the substring length accordingly.
SELECT SUBSTRING(UPPER(acn.label), 1, 1) AS "LC class", COUNT(ac.id) AS "Count"
FROM asset.copy ac
INNER JOIN asset.call_number acn ON ac.call_number = acn.id
INNER JOIN actor.org_unit aou ON aou.id = acn.owning_lib
INNER JOIN metabib.rec_descriptor mrd ON mrd.record = acn.record
WHERE aou.id = 103
  AND mrd.bib_level IN ('a','m')  -- limit to monographs; use 'b','s' for serials
  AND mrd.item_form NOT IN ('s')  -- limit to non-electronic resources
  -- AND acn.label NOT LIKE 'XX%' -- avoid ranges of legacy ephemeral garbage
  AND ac.deleted IS FALSE
  AND acn.deleted IS FALSE
GROUP BY SUBSTRING(UPPER(acn.label), 1, 1)
ORDER BY 1;
A common task when considering the purchase of a bulk set of new titles is to check for titles that a given library already owns. One way to do this is to compare the ISBNs in the existing records vs. the potential set of new titles, assuming that you have been given a title list that includes ISBNs.
BEGIN;
CREATE TABLE scratchpad.isbn_list (isbn text);
-- Replace ISBN_1, ISBN_2, etc. with the list of ISBNs, one per line
COPY scratchpad.isbn_list (isbn) FROM stdin;
ISBN_1
ISBN_2
\.
COMMIT;
reporter.super_simple_record.isbn is an ARRAY column, so we have to match against the contents of that column using the ANY operator. Warning: this is slow; that's the price you pay for a non-normalized database schema.

SELECT COUNT(*)
FROM asset.call_number
WHERE record IN (
    SELECT rssr.id
    FROM reporter.super_simple_record rssr
    INNER JOIN scratchpad.isbn_list sil ON sil.isbn = ANY (rssr.isbn)
)
AND owning_lib = 103
AND deleted IS FALSE;
SELECT thecount.name, create_time, thecount.lasttime, thecount.count
FROM reporter.template rt,
     (
       SELECT rtinner.name "name", MAX(run_time) "lasttime", COUNT(*) "count"
       FROM reporter.schedule rs,
            reporter.report rr,
            (
              SELECT naming.name, STRING_AGG(clones.id || '', ',') ids
              FROM reporter.template naming,
                   reporter.template clones
              WHERE clones.data = naming.data
                AND naming.folder IN (SELECT id FROM reporter.template_folder
                                      WHERE owner = !!!REPLACE ME WITH OWNER ID!!! AND shared AND share_with = 1)
                AND naming.owner = !!!REPLACE ME WITH OWNER ID!!!
              GROUP BY 1
            ) rtinner
       WHERE rr.template = ANY (('{' || rtinner.ids || '}')::INT[])
         AND rr.id = rs.report
       GROUP BY 1
     ) AS thecount
WHERE thecount.name = rt.name
  AND rt.owner = !!!REPLACE ME WITH OWNER ID!!!
  AND rt.folder IN (SELECT id FROM reporter.template_folder
                    WHERE owner = !!!REPLACE ME WITH OWNER ID!!! AND shared AND share_with = 1)
ORDER BY 4 DESC;
You will need to edit that query to look for reports owned by a particular actor.usr.id. Replace "!!!REPLACE ME WITH OWNER ID!!!" with the ID number. This is useful when you have centralized report templates for everyone to use. It also takes into account cloned reports from the same source template!
This email thread lists a couple ideas.
--with-perl --enable-integer-datetimes
Creating a Dojo build for Evergreen will decrease the loading time of each page significantly, provided there are enough modules in use to warrant a build in the first place. See the Dojo Build System (Pre-1.7) documentation for details.
To start, you want to understand the construction of a Dojo build. Each build is a profile, and each profile has layers; each layer is made up of Dojo modules. See Open-ILS/examples/openils.profile.js to illustrate this concept. Later, these layers will be called in your markup as scripts. Therefore, you can potentially organize your Dojo builds to work harmoniously with your custom setup, using custom 'layers' of grouped modules for each UI.
First, retrieve and extract a copy of the Dojo source. NOTE: You must use the source distribution for this, which contains src in the name, as only the source version contains the required layer-building scripts! Under the top-level extracted directory you should have four directories: dijit, dojo, dojox, and util. Copy Open-ILS/examples/openils.profile.js into the util/buildscripts/profiles directory.
To create the build:
cd util/buildscripts
./build.sh profile=openils action=release version=1.3.1
# (if using Dojo 1.3.1; otherwise use the Dojo version you are currently running)
Once you run the build script you should have a directory named release/ as a peer to util/, dojo/, dijit/, and dojox/ in the Dojo source. Copy the contents of release/dojo/, optionally excluding release/dojo/util, into /openils/var/web/js/dojo/.
If you decide to create a custom build for your organization, you may want to create custom layers in your profile. The Dojo documentation referenced above explains how to go about doing that. Your first step is to find each file that includes 'dojo.require'; then find the module being used by each template or interface. dojofind.pl will do this for you, with slight tweaking of the file path if necessary.
http://{hostname}/opac/extras/browse/{format, such as: xml (callnumber axis only), html[-full], marcxml[-full], or other unapi format}/{title|author|subject|topic|series|item-age|callnumber|authority.title|authority.author|authority.subject|authority.topic}/{org shortname, or -, ignored by authority.* axes but required positionally}/{target string}[/{page number offset, can be negative, defaults to 0}]?[&?status={number}...][&?copyLocation={number}...]
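The pattern is easier to see with a concrete request. For instance, a hypothetical URL for the first page of the HTML title browse on the target string "piano" at org unit BR1 (hostname and org shortname are placeholders, not from the original tip):

http://example.org/opac/extras/browse/html/title/BR1/piano/0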
In a fresh checkout of the repo, after the autotools autogen.sh, run the following:
cd build/i18n
make newpot
for i in `ls po/lang.dtd/ | cut -f1 -d. | grep -v lang`; do make LOCALE=$i install; done
to make some fancy i18n stuffs.
Dan Scott's presentation:
One trick that is useful in many situations is to use SQL to generate a script. For instance, after adjusting the settings on penalties, you might need to update the penalties on a given class of patrons. While there's a nice SQL function to do that, it's not convenient to call it on multiple users. Instead, use SQL to create a script you can then run. Suppose I want to run the actor.calculate_system_penalties on all Adult patrons. I could generate a script to do that via this query:
select 'insert into actor.usr_standing_penalty '
    || '( org_unit, usr, standing_penalty, staff, set_date, stop_date, note ) '
    || 'select org_unit, usr, standing_penalty, staff, set_date, stop_date, note '
    || ' from actor.calculate_system_penalties( ' || usr.id || ', ' || ou.id || ' ) ; '
from actor.usr usr
join actor.org_unit ou on ( usr.home_ou = ou.id )
join permission.grp_tree pgt on ( usr.profile = pgt.id )
where pgt.name = 'Adult';
The results of this query will look something like this (each entry is one long line):
?column?
insert into actor.usr_standing_penalty ( org_unit, usr, standing_penalty, staff, set_date, stop_date, note ) select org_unit, usr, standing_penalty, staff, set_date, stop_date, note from actor.calculate_system_penalties( 82111, 104 ) ;
insert into actor.usr_standing_penalty ( org_unit, usr, standing_penalty, staff, set_date, stop_date, note ) select org_unit, usr, standing_penalty, staff, set_date, stop_date, note from actor.calculate_system_penalties( 158001, 109 ) ;
...
So, to make use of this, capture the output of the query and then run it as a script. In psql, \o fileName saves your output to a file; use \o with no file name to turn output capture back off. You may need to edit the file to remove a spurious column heading and row counts. You can avoid dealing with column headings and row counts altogether by telling psql to omit them with the \t command.
Another way to eliminate the need to deal with column headings is to give the column a name which is a comment (this is particularly handy if you might need to run the script again):
select 'insert into actor.usr_standing_penalty '
    || '( org_unit, usr, standing_penalty, staff, set_date, stop_date, note ) '
    || 'select org_unit, usr, standing_penalty, staff, set_date, stop_date, note '
    || ' from actor.calculate_system_penalties( ' || usr.id || ', ' || ou.id || ' ) ; '
    as "/* recalc Adult penalties */"
...
Note that using a column name that is itself a comment does not work with the \t command (because it turns off column headings as well as footers); if you want to keep the column heading, but eliminate the footer, you can use this command: \pset footer off
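Putting the pieces together, a minimal psql session sketch of the capture-and-run workflow (the file name penalties.sql is hypothetical):

\t
\o penalties.sql
-- run the generator query shown above here
\o
\i penalties.sql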
If you prefer to use GUI tools for SQL (some of us do, at least frequently): in pgAdmin III you can capture the output easily enough, but by default it most inconveniently surrounds each value in double quotes. You can turn this behavior off in the main window's File > Options dialog, on the Query Tool tab: change "Result copy quoting" to "None".
Frequently when creating a query, especially a complex one that will return many rows or just run for a long time, it's helpful to have a relatively small number of rows returned so you can make sure you've got the conditions correct. In the above example (Using SQL to Generate SQL), for instance, it's easy to miss a critical piece (like the final ;). Use the LIMIT keyword to return a small number of rows until you're confident your SQL is working the way you want it to:
select 'insert into actor.usr_standing_penalty select ...'
...
LIMIT 20 -- only return 20 rows
;
Just do not forget to comment out or delete the LIMIT before you run it for real, or you may fall victim to the but-I-could-have-sworn-there-were-more-than-that-many syndrome.
SELECT (SELECT nspname FROM pg_namespace
        WHERE oid = (SELECT relnamespace FROM pg_class WHERE oid = tgrelid))
       || '.' ||
       (SELECT relname FROM pg_class WHERE oid = tgrelid) AS "table",
       tgname AS "trigger",
       tgenabled
FROM pg_trigger
WHERE tgenabled = 'D';
-- breadth-first (i.e. all first-level org units, then all second-level OUs, etc.)
WITH RECURSIVE orgs AS (
    SELECT id, shortname, name, parent_ou
    FROM actor.org_unit
    WHERE parent_ou IS NULL
  UNION ALL
    SELECT next.id, next.shortname, next.name, next.parent_ou
    FROM actor.org_unit next
    JOIN orgs prev ON next.parent_ou = prev.id
)
SELECT id, shortname, name, parent_ou FROM orgs;

-- depth-first ("expanded tree" order, as in most org selector dropdown menus)
WITH RECURSIVE orgs AS (
    SELECT id, shortname, name, parent_ou, ARRAY[id] AS path
    FROM actor.org_unit
    WHERE parent_ou IS NULL
  UNION
    SELECT next.id, next.shortname, next.name, next.parent_ou, (prev.path || next.id) AS path
    FROM actor.org_unit next
    JOIN orgs prev ON next.parent_ou = prev.id
)
SELECT id, shortname, name, parent_ou FROM orgs ORDER BY path;
You know how in SQL a SELECT statement's ORDER BY clause can order on expressions? Like this:
SELECT foo FROM bar ORDER BY foo = 7 DESC, foo ASC;
That would give you a set of foo where 7 is your first result, and the rest of the results are ordered in ascending order. There are times when this query (or similar) is handy. Really!
json_query doesn't provide an obvious way to accomplish this, but you can achieve the equivalent like so (just showing the order_by clause for brevity's sake):
"order_by": [ {"class": "bar", "field": "foo", "transform": "numeric_eq", "params": [7], "direction": "desc"}, {"class": "bar", "field": "foo", "direction": "asc"} ]
Since operators aren't directly supported in the order_by clause of a json_query, this uses the built-in function behind the = operator, which is NUMERIC_EQ().
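For context, a complete json_query using such an order_by might look like the following sketch; "bar" and "foo" are the same placeholder class and field names used above, not a real fieldmapper class:

```json
{
    "select": {"bar": ["foo"]},
    "from": "bar",
    "order_by": [
        {"class": "bar", "field": "foo", "transform": "numeric_eq", "params": [7], "direction": "desc"},
        {"class": "bar", "field": "foo", "direction": "asc"}
    ]
}
```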
/openils/var/web/xul/server/skin/custom.js should contain:
//////////////////////////////////////////////////////////////////////////
// Overriding receipt templates globally
try {
    // In practice, this should be true in menu.js/opac.js, but not util_overlay.xul
    if (typeof JSAN != 'undefined' && typeof JSON2js != 'undefined' && typeof xulG != 'undefined') {
        var r = new XMLHttpRequest();
        r.open("GET", xulG.url_prefix('/xul/server/skin/print_list_templates'), false);
        r.send(null);
        if (r.status == 200) {
            JSAN.use('OpenILS.data');
            var custom_data = new OpenILS.data();
            custom_data.stash_retrieve();
            var custom_templates = JSON2js( r.responseText );
            for (var i in custom_templates) {
                custom_data.print_list_templates[i] = custom_templates[i];
            }
            custom_data.stash('print_list_templates');
            dump('Overriding receipt templates via custom.js\n');
        }
    }
} catch(E) {
    dump('Error overriding receipt templates in custom.js: ' + E + '\n');
}
And /openils/var/web/xul/server/skin/print_list_templates should be a file just like the one you get when you use Export from the Receipt Template Editor. You can trim the contents to override only specific templates.
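As a rough, hand-trimmed sketch of what such a file might contain (the field values and %macros% below are placeholders; export a real file from your own Receipt Template Editor to see the exact structure your version produces), an override for a single template could look something like:

```js
{
    "checkout" : {
        "type"      : "items",
        "header"    : "Welcome to %LIBRARY%!<br/>You checked out the following items:<hr/><ol>",
        "line_item" : "<li>%title%<br/>Barcode: %barcode% Due: %due_date%",
        "footer"    : "</ol><hr/>%SHORTNAME% %TODAY_TRIM%<br/>"
    }
}
```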
Alternate version with per-workstation OU settings: if you would like to manage print templates centrally while still allowing different settings per org unit, this variant lets you keep a default print_list_templates file plus versions for specific org units, named print_list_templates-oushortname.
//////////////////////////////////////////////////////////////////////////
// Overriding receipt templates globally
try {
    // In practice, this should be true in menu.js/opac.js, but not util_overlay.xul
    if (typeof JSAN != 'undefined' && typeof JSON2js != 'undefined' && typeof xulG != 'undefined') {
        var r = new XMLHttpRequest();
        // request the OU-specific print_list_templates file
        r.open("GET", xulG.url_prefix('/xul/server/skin/print_list_templates') + '-' + ses('ws_ou_shortname'), false);
        r.send(null);
        if (r.status != 200) {
            // fall back to the default print_list_templates file
            r.open("GET", xulG.url_prefix('/xul/server/skin/print_list_templates'), false);
            r.send(null);
        }
        if (r.status == 200) {
            JSAN.use('OpenILS.data');
            var custom_data = new OpenILS.data();
            custom_data.stash_retrieve();
            var custom_templates = JSON2js( r.responseText );
            for (var i in custom_templates) {
                custom_data.print_list_templates[i] = custom_templates[i];
            }
            custom_data.stash('print_list_templates');
            dump('Overriding receipt templates via custom.js\n');
        }
    }
} catch(E) {
    dump('Error overriding receipt templates in custom.js: ' + E + '\n');
}
First /openils/var/web/xul/server/skin/print_list_templates-OUSHORTNAME is checked; if the OU-specific version doesn't exist, /openils/var/web/xul/server/skin/print_list_templates is checked instead. Both should be files just like the one you get when you use Export from the Receipt Template Editor. You can trim the contents to override only specific templates.
While newer versions of Evergreen offer the ability to clear the cache for a given record through the staff client, sometimes doing it from the command line is a better option.
If you want to clear the cache for a given record by its database ID, you can do the following from the command line (substitute 12345678 for the record ID, and modify your memcached server address if necessary):
export MEMCACHED_SERVERS=127.0.0.1; memcdump | grep ^ac\\..*record_12345678 | xargs memcrm
Thank you Jeff Godin for this tip!
Similar to the spell above, this can be used to clear all cached catalog search results and facets from memcached. This ordinarily isn't needed for production use, but can be useful when testing tweaks to search code or settings.
memcdump --servers localhost|grep open-ils.search | xargs -n 1 memcrm --servers localhost
This "cache killer" can be used to bypass cached results for a single search. Append a random, unique string of characters, preceded by a hyphen, to your search term. For example, a search that re-executes a search for puffins, bypassing previously cached results, could be:
puffins -oononromfh879ui
Thank you Mike Rylander for this tip!