====== Troubleshooting: Random Magic Spells ======
=====Check Evergreen version via srfsh=====

Doesn't require a web login; srfsh talks to the OpenSRF services directly.

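A minimal sketch of the exchange, run as the opensrf user (the method name is from memory, not from this page; verify that it exists in your release, e.g. via srfsh introspection):

```
$ srfsh
srfsh# request open-ils.actor opensrf.open-ils.system.ils_version
```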
=====Check Evergreen version via http=====
  http://acq.open-ils.org/…

Replace acq.open-ils.org with the hostname for the server to test.

=====Unable to login through the web interface or staff client after a successful install?=====
One option is to reset the admin password directly in the database:
<code>
psql -U <user> <database>
# update actor.usr set passwd = '…' where …;
# \q
</code>

Another option with newer versions of Evergreen, as the opensrf user:
<code bash>
Open-ILS/…
</code>
=====See long-running database queries=====
<code sql>
select now() - xact_start as "duration", * from pg_stat_activity order by 1 desc;
</code>
=====Find unicode characters in a file=====

  perl -Mencoding=utf8 -MUnicode::… …

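Where the Perl one-liner above isn't handy, GNU grep can flag lines containing non-ASCII bytes directly (a sketch; `-P` assumes GNU grep built with PCRE support):

```shell
# Write a two-line sample file; the second line contains a UTF-8 "é".
printf 'plain line\ncafe\xc3\xa9 line\n' > /tmp/sample.txt

# -P enables Perl-style regexes; -n prefixes matches with line numbers.
grep -P -n '[^\x00-\x7F]' /tmp/sample.txt
# prints: 2:café line
```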
=====Stored procedures to find and eliminate bad (truncated) UTF-8 characters in postgres=====

<code sql>
create or replace function perl_find_char(text, …) … language plperlu;
create or replace function perl_rm_char (text, text) returns text as $$use Encode; my $x=shift; Encode::…$$ language plperlu;
</code>

Usage:
<code sql>
BEGIN;
UPDATE some_table SET the_field = perl_rm_char(the_field, …);
EXPLAIN ANALYZE SELECT LOWER(the_field) FROM some_table;
-- If the last command does not throw an error, you have succeeded
COMMIT;
</code>
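The database-side cleanup pairs well with a quick shell check on any file you are about to load or have exported; iconv exits non-zero at the first invalid byte (a sketch using scratch files):

```shell
# Valid UTF-8 vs. a truncated sequence (0xC3 with no continuation byte).
printf 'good caf\xc3\xa9\n' > /tmp/good.txt
printf 'bad caf\xc3\n'      > /tmp/bad.txt

iconv -f UTF-8 -t UTF-8 /tmp/good.txt > /dev/null && echo "good.txt is valid UTF-8"
iconv -f UTF-8 -t UTF-8 /tmp/bad.txt  > /dev/null 2>&1 || echo "bad.txt has invalid UTF-8"
```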
===== Quick way to audit in-db circ rules for a particular org unit =====

A quick way to view the circulation policies, in PostgreSQL, with names instead of id values, similar to the Staff Client Circulation Policy Editor.

<code sql>
SELECT a.id, a.active, a.circ_modifier,
    b.shortname AS org_unit, c.name AS grp,
    d.name AS duration_rule, e.name AS recurring_fine_rule, f.name AS max_fine_rule
FROM config.circ_matrix_matchpoint a
LEFT JOIN actor.org_unit b ON a.org_unit = b.id
LEFT JOIN permission.grp_tree c ON a.grp = c.id
LEFT JOIN config.rule_circ_duration d ON a.duration_rule = d.id
LEFT JOIN config.rule_recurring_fine e ON a.recurring_fine_rule = e.id
LEFT JOIN config.rule_max_fine f ON a.max_fine_rule = f.id
WHERE a.org_unit = 1;
</code>

Another approach, showing more columns and their human-readable values. Change the WHERE clause to get different org units, or comment it out to see all rules. By yboston, inspired by bshum.

<code sql>
SELECT ccmm.id AS "matchpoint id",
    org.shortname AS copy_circ_lib,
    aou.shortname AS org_unit,
    pgt.name AS permission_group,
    ccmm.circ_modifier,
    loc.name AS copy_location,
    crcd.name AS duration_rule,
    crrf.name AS recurring_fine_rule,
    crmf.name AS max_fine_rule,
    ccmm.circulate,
    ccmm.renewals,
    limit_set.name AS limit_set,
    ccmm.active,
    ARRAY_AGG(lscm.circ_mod) AS limit_set_circ_mods

FROM config.circ_matrix_matchpoint ccmm
INNER JOIN actor.org_unit aou ON aou.id = ccmm.org_unit
INNER JOIN permission.grp_tree pgt ON pgt.id = ccmm.grp
INNER JOIN config.rule_circ_duration crcd ON crcd.id = ccmm.duration_rule
INNER JOIN config.rule_recurring_fine crrf ON crrf.id = ccmm.recurring_fine_rule
INNER JOIN config.rule_max_fine crmf ON crmf.id = ccmm.max_fine_rule
LEFT OUTER JOIN config.circ_matrix_limit_set_map map ON map.matchpoint = ccmm.id
LEFT OUTER JOIN config.circ_limit_set limit_set ON limit_set.id = map.limit_set
LEFT OUTER JOIN config.circ_limit_set_circ_mod_map lscm ON lscm.limit_set = limit_set.id
LEFT OUTER JOIN actor.org_unit org on (ccmm.copy_circ_lib = org.id)
LEFT OUTER JOIN asset.copy_location loc on (loc.id = ccmm.copy_location)
WHERE ccmm.copy_circ_lib = 101 -- OPTIONAL: set one or more org unit id numbers here
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
ORDER BY copy_circ_lib, org_unit;
</code>

===== Cancel Query/Connection =====

If you need to cancel a running query, as a database superuser you can run:
<code sql>
SELECT pg_cancel_backend(backend_pid);
</code>
The backend_pid comes from the procpid column of pg_stat_activity:
<code sql>
SELECT procpid, now() - query_start AS duration, current_query FROM pg_stat_activity WHERE current_query <> '<IDLE>' ORDER BY 2 DESC;
</code>
This will give you the current queries, ordered by duration. Reports are easy to spot because the reporter uses MD5sum values for table aliases.

For databases running PostgreSQL 9.2+, the pg_stat_activity view is a little different, so this query is more helpful:
<code sql>
SELECT pid, now() - query_start AS duration, query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY 2 DESC;
</code>
===== List Reporter Folders with Bad Sharing Depths =====
Given a simple org tree of A->B->C, a folder can end up shared at a depth that makes no sense for its owner. There are three folder types (reporter.template_folder, reporter.report_folder, reporter.output_folder), so three queries:
<code sql>
-- 3 types of folders, 3 queries.
select rpt.id as "…" … ;

select rpt.id as "…" … ;

select rpt.id as "…" … ;
</code>

Using Bucardo's check_postgres to monitor for them:
<code>
check_postgres --query="…" …

check_postgres --query="…" …

check_postgres --query="…" …
</code>

===== Listing users with permissions by work_ou =====
This is useful for auditing who has been granted privileges where:
<code sql>
SELECT DISTINCT au.id, au.first_given_name, au.family_name, pgt.name AS perm_group, aou.shortname AS work_ou
FROM actor.usr au
INNER JOIN permission.usr_grp_map pugm ON au.id = pugm.usr
INNER JOIN permission.usr_work_ou_map puwom ON au.id = puwom.usr
INNER JOIN permission.grp_tree pgt ON pugm.grp = pgt.id
INNER JOIN actor.org_unit aou ON puwom.work_ou = aou.id
-- WHERE puwom.work_ou IN (110, 125, 126) -- uncomment this clause and adjust to limit to specific org_units
-- AND pgt.id IN (17, 18) -- uncomment this clause and adjust to limit to members of specific perm groups
ORDER BY family_name, first_given_name
;
</code>

===== Comparing permissions between multiple permission groups =====

<code sql>
SELECT
    (SELECT code FROM permission.perm_list WHERE id = perm) AS "Permission",
    (SELECT 'yes'
        FROM permission.grp_perm_map AS b WHERE b.grp = ID_FOR_GROUP_X AND b.perm = a.perm) AS "Group X",
    (SELECT 'yes'
        FROM permission.grp_perm_map AS b WHERE b.grp = ID_FOR_GROUP_Y AND b.perm = a.perm) AS "Group Y"
FROM permission.grp_perm_map AS a WHERE grp IN (ID_FOR_GROUP_X, ID_FOR_GROUP_Y) GROUP BY 1, 2, 3 ORDER BY 1;
</code>

===== Comparing permissions between multiple permission groups (variant: uses complete permission list) =====

<code sql>
SELECT
    code AS "Permission",
    (SELECT 'yes'
        FROM permission.grp_perm_map AS b WHERE b.grp = ID_FOR_GROUP_X AND b.perm = a.id) AS "Group X",
    (SELECT 'yes'
        FROM permission.grp_perm_map AS b WHERE b.grp = ID_FOR_GROUP_Y AND b.perm = a.id) AS "Group Y"
FROM permission.perm_list AS a ORDER BY 1;
</code>
===== Finding mis-matched depths with org units and org unit types =====

<code sql>
SELECT
    child.ou_id AS child_id,
    child.depth AS child_depth,
    parent.ou_id AS parent_id,
    parent.depth AS parent_depth
FROM
    (
        SELECT
            aou.id AS ou_id,
            aou.parent_ou AS parent,
            aout.depth
        FROM
            actor.org_unit AS aou
            LEFT JOIN actor.org_unit_type AS aout
                ON ( aou.ou_type = aout.id )
    ) AS parent,
    (
        SELECT
            aou.id AS ou_id,
            aou.parent_ou AS parent,
            aout.depth
        FROM
            actor.org_unit AS aou
            LEFT JOIN actor.org_unit_type AS aout
                ON ( aou.ou_type = aout.id )
    ) AS child
WHERE
    child.parent = parent.ou_id
    AND child.depth <> parent.depth + 1;
</code>

=====Checking Circ/Hold Flags on an Item=====
<code sql>
SELECT
    id,
    circ_lib,
    (SELECT owning_lib FROM asset.call_number WHERE id = call_number) AS "owning lib",
    status,
    (SELECT holdable FROM config.copy_status WHERE id = status) AS "status holdable",
    (SELECT opac_visible FROM config.copy_status WHERE id = status) AS "status OPAC visible",
    location,
    (SELECT circulate FROM asset.copy_location WHERE id = location) AS "loc circulate",
    (SELECT holdable FROM asset.copy_location WHERE id = location) AS "loc holdable",
    (SELECT opac_visible FROM asset.copy_location WHERE id = location) AS "loc OPAC visible",
    circulate,
    holdable,
    opac_visible,
    age_protect,
    circ_modifier,
    circ_as_type,
    alert_message IS NOT NULL AS "has alert message"
FROM asset.copy
WHERE id = #;  -- replace # with the copy id
</code>

=====Where is my workstation registration ws_info file?=====
Assuming an admin/…:
<code>
netscape.security.PrivilegeManager.enablePrivilege("…");
</code>

====== Performance ======
=====Preload Your database into RAM=====
Find your database's OID:
<code>
psql -U evergreen -h localhost -d postgres -c "SELECT oid FROM pg_database WHERE datname = 'evergreen';"
</code>

Then, as the root or postgres user, run the following commands:
<code>
cd /var/lib/postgresql/${pgversion}/main/base/${oid}
cat * > /dev/null
</code>

Replace ${pgversion} with your PostgreSQL version number, e.g. 9.5. Replace ${oid} with the number returned by the database query.

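On PostgreSQL 9.4 and newer, the pg_prewarm extension offers the same warm-up from inside the database, without needing to know the on-disk layout. A sketch of an alternative to the spell above, assuming you are allowed to install extensions:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Returns the number of blocks read into cache for the given relation.
SELECT pg_prewarm('biblio.record_entry');
```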
====== Utility ======

=====Regularly Scheduled Report Output Purging=====

Ben Shum asked me to share the scripts Evergreen Indiana utilizes to purge reports older than 30 days.

=== Cron Tab Entries ===

<code>
15 3 * * * cd /…
</code>

=== purgereports.sql ===
<code sql>
DELETE FROM reporter.report WHERE create_time < (now() - '30 days'::interval) …;
DELETE FROM reporter.report WHERE create_time < (now() - '30 days'::interval) …;
</code>

=== deleteoldreports.sh ===

<code bash>
for i in $reportdelete; do
    rm -r $i
done
</code>

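A self-contained variant of the same purge using find's `-mtime`, demonstrated against a scratch directory standing in for the reporter output area (the real output path varies by install):

```shell
outdir=$(mktemp -d)                      # stand-in for the reporter output dir
mkdir "$outdir/old-report" "$outdir/new-report"
touch -d '40 days ago' "$outdir/old-report"

# Remove top-level entries not modified within the last 30 days.
find "$outdir" -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -r {} +
ls "$outdir"                             # prints: new-report
```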
===== Use srfsh to search the OPAC =====

Example from Lebbeous Fogle-Weekley (see [[http://…]]):

<code>
request open-ils.search open-ils.search.biblio.multiclass.query {} "piano limit(10) offset(10)"
</code>

Another example from Bill Erickson in the same discussion (from JSPAC logs):

<code>
request open-ils.search open-ils.search.biblio.multiclass.query {"…"} …
</code>

===== Copying permissions from one group to another =====

<code sql>
INSERT INTO permission.grp_perm_map (grp, perm, depth, grantable)
SELECT ID_FOR_TARGET_GROUP, perm, depth, grantable
FROM permission.grp_perm_map
WHERE grp = ID_FOR_SOURCE_GROUP AND perm NOT IN ( SELECT perm FROM permission.grp_perm_map WHERE grp = ID_FOR_TARGET_GROUP );
</code>


=====Transcoding a flat file from MARC8 to UTF-8=====

  perl -MUnicode::… …


=====Magical command line for creating loader SQL=====

  ~/… …

=====Dan Scott asked for a web service that can turn MARC into XML ... well, here it is=====

<code perl>
#!/usr/bin/perl

use CGI;
use MARC::Record;
use MARC::File::XML;

print "…";

__END__

To test this, run the chunk below from the command line:

$ perl ./…
</code>


=====How to turn the freshmeat (new bib) feed into a JSPAC result page=====

First, stick this in /…:
<code>
#!/bin/bash
echo -n "…"
wget -o /dev/null -O - '…'
echo "…"
echo ""
</code>

And then add this to your eg_vhost.conf:

<code>
<Directory …>
    <Files new-records.cgi>
        ForceType cgi-script
    </Files>
    Options ExecCGI
</Directory>
</code>
=====Perl URL Encode & Decode String=====
(stolen shamelessly from http://…)

Here's an excellent way to encode strings that are going to be placed in a URL:

<code perl>$str =~ s/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg;</code>

$str can now be passed in the query string safely, for example:
<code perl>…</code>

Likely, "$str" will need to be decoded again; to decode:

<code perl>$str =~ s/%([A-Fa-f\d]{2})/chr(hex($1))/eg;</code>

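A rough shell counterpart to the Perl encoder, for quick command-line use (bash-specific: C-style for loop and printf -v; everything except ASCII alphanumerics gets percent-encoded):

```shell
urlencode() {
    local s=$1 out='' c hex i
    for (( i = 0; i < ${#s}; i++ )); do
        c=${s:i:1}
        case $c in
            [a-zA-Z0-9]) out+=$c ;;
            # "'$c" asks printf for the character's code point
            *) printf -v hex '%%%02X' "'$c"; out+=$hex ;;
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'spells & potions'   # prints: spells%20%26%20potions
```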
====== Bib-work ======

=====Get MARC field for a record=====

Get all 650a fields for record 133027:
<code sql>
SELECT val FROM oils_xpath_table('id', 'marc', 'biblio.record_entry',
    '//*[@tag="650"]/*[@code="a"]', 'id = 133027') AS t(bib bigint, val text);
</code>

=====How to generate metarecords for a newly loaded bib batch=====

<code sql>

BEGIN;

INSERT INTO metabib.metarecord (fingerprint, master_record)
    SELECT DISTINCT ON (b.fingerprint) b.fingerprint, b.id
    FROM biblio.record_entry b
    WHERE NOT b.deleted
        AND b.id IN (SELECT r.id FROM biblio.record_entry r LEFT JOIN metabib.metarecord_source_map k ON (k.source = r.id) WHERE k.id IS NULL AND r.fingerprint IS NOT NULL)
        AND NOT EXISTS ( SELECT 1 FROM metabib.metarecord WHERE fingerprint = b.fingerprint )
    ORDER BY b.fingerprint, b.quality DESC;

INSERT INTO metabib.metarecord_source_map (metarecord, source)
    SELECT m.id, r.id
    FROM biblio.record_entry r
        JOIN metabib.metarecord m USING (fingerprint)
    WHERE NOT r.deleted
        AND r.id IN (SELECT b.id FROM biblio.record_entry b LEFT JOIN metabib.metarecord_source_map k ON (k.source = b.id) WHERE k.id IS NULL);

-- and if all's well ...

COMMIT;

</code>

=====How to include a specific MARC field with a specific search class=====
For this example, how to include tag 590's with keyword searches:

  - Create an indexing entry in config.metabib_field: <code sql>
INSERT INTO config.metabib_field (field_class, name, xpath, …)
    VALUES ('keyword', '…', '…', …);
</code>
  - Follow the reingest spell below (see "Reingesting Bib and Authority Records (2.0 and later)") for the affected records.
=====Sync the 001 field of your serials records to the linked bibliographic record ID=====
This would be easily adapted to force the 001 of your bibliographic records to be the record ID, as well, effectively making the record ID the system control number.

An enhanced version would test to see if there is a 001 field in the first place, and create it if it's not there.
<code sql>
CREATE OR REPLACE FUNCTION serial.record_entry() RETURNS trigger AS $serial_record_entry$
    BEGIN
        NEW.marc := REGEXP_REPLACE(NEW.marc,
            '<controlfield tag="001">[^<]*</controlfield>',
            '<controlfield tag="001">' || NEW.record || '</controlfield>');
        RETURN NEW;
    END;
$serial_record_entry$ LANGUAGE plpgsql;

CREATE TRIGGER serial_record_entry_update_001
    BEFORE INSERT OR UPDATE ON serial.record_entry
    FOR EACH ROW
    EXECUTE PROCEDURE serial.record_entry();
</code>

=====How to prune a tag under the hood=====
Please use with caution!
<code sql>
update biblio.record_entry set marc = regexp_replace(marc, '<datafield tag="…".*?</datafield>', '', 'g') where …;
</code>
After that, it is best to reingest the record (using the method appropriate to your Evergreen version listed below).


===== Reingesting Bib and Authority Records (2.0 and later) =====

As of 2.0, record ingest is handled by stored procedures in the database. To reingest a bib record:

<code sql>
-- update flag to force reingesting the record even if the MARC hasn't changed
update config.internal_flag set enabled = true where name = 'ingest.reingest.force_on_same_marc';
update biblio.record_entry set id = id where id = 123;
update config.internal_flag set enabled = false where name = 'ingest.reingest.force_on_same_marc';
</code>

The same spell can be used to reingest an authority record:

<code sql>
update config.internal_flag set enabled = true where name = 'ingest.reingest.force_on_same_marc';
update authority.record_entry set id = id where id = 123;
update config.internal_flag set enabled = false where name = 'ingest.reingest.force_on_same_marc';
</code>

There are several other flags that can be tweaked to selectively control which index tables get updated.

  * ingest.metarecord_mapping.skip_on_update (enabling this can significantly speed up the reingest of a large number of records)
  * ingest.disable_located_uri
  * ingest.disable_metabib_full_rec
  * ingest.disable_metabib_rec_descriptor
  * ingest.disable_metabib_field_entry

Some flags apply when inserting new bibs:

  * ingest.metarecord_mapping.skip_on_insert (enabling this can significantly speed up importing a large number of bibs; use the spell for creating metarecords after loading the bibs)
  * ingest.assume_inserts_only

Other flags apply when dealing with authority records or changes to authorities:

  * ingest.disable_authority_linking
  * ingest.disable_authority_auto_update

===== Find missing or mismatched 901$c =====

From Galen Charlton:

-- query to identify all bibs that are missing 901s or whose first 901$c is not equal to the bib ID
-- this *will* take a long time in a big database

<code sql>
SELECT id
FROM biblio.record_entry
WHERE (XPATH('//marc:datafield[@tag="901"]/marc:subfield[@code="c"]/text()', marc::xml, ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]))[1]::text IS NULL
   OR (XPATH('//marc:datafield[@tag="901"]/marc:subfield[@code="c"]/text()', marc::xml, ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]))[1]::text <> id::text;
</code>

===== Generating a SQL report with natural titles =====

While the reporter.materialized_simple_record view is convenient, sometimes those all-lowercase titles are annoying (for example, in shelf list reports that you're preparing for public consumption). In those cases, a little XPATH can go a long way--but you need to correct for the XML array that XPATH() returns, which you can do with ARRAY_TO_STRING() like the following:

<code sql>
SELECT
    ARRAY_TO_STRING(
        XPATH('//marc:datafield[@tag="245"]/marc:subfield/text()',
            marc::XML, ARRAY[ARRAY['marc', 'http://www.loc.gov/MARC21/slim']]
        )::text[],
        ' '
    )
FROM biblio.record_entry WHERE ...
</code>
====== Statistics ======

=====To find the worst queries for the day, in the logger Pg logs=====

  grep -Hn duration: pg.*.log | grep -v vacuum | awk '…' …

=====Generate counts for normalized query strings from activity.log=====

  grep multiclass activity.log | grep '…' …

=====Example of generating monthly search counts from the activity log=====

  …

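The truncated pipelines above all follow the same shape: grep for the marker, extract the number, sort. A self-contained sketch against a fabricated snippet (real lines depend on your log_line_prefix; the `duration: N ms` part is standard log_min_duration_statement output):

```shell
cat > /tmp/pg.sample.log <<'EOF'
LOG:  duration: 1500.123 ms  statement: select * from big_table
LOG:  duration: 45.002 ms  statement: select 1
LOG:  duration: 9120.500 ms  statement: select * from reporter.report
EOF

# Print the millisecond figures, worst first.
grep 'duration:' /tmp/pg.sample.log \
    | awk '{ for (i = 1; i <= NF; i++) if ($i == "duration:") print $(i+1) }' \
    | sort -rn | head
# prints 9120.500, then 1500.123, then 45.002 (one per line)
```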
===== Counting number of copies held by a library by LC call number range =====
This is useful for getting a reasonable count of the number of copies of monographs held by a given library based on the first character of the LC call number. For Dewey you could adjust the substring length accordingly.
<code sql>
SELECT SUBSTRING(UPPER(acn.label), 1, 1) AS "class", COUNT(ac.id) AS copies
FROM asset.copy ac
    INNER JOIN asset.call_number acn ON ac.call_number = acn.id
    INNER JOIN actor.org_unit aou ON aou.id = acn.owning_lib
    INNER JOIN metabib.rec_descriptor mrd ON mrd.record = acn.record
WHERE aou.id = 103
    AND mrd.bib_level IN ('m')
    AND mrd.item_form NOT IN ('…')
--    AND acn.label NOT LIKE '…'
    AND ac.deleted IS FALSE
    AND acn.deleted IS FALSE
GROUP BY SUBSTRING(UPPER(acn.label), 1, 1)
ORDER BY 1
;
</code>

===== Counting titles that match a list of ISBNs owned by a given library =====

A common task when considering the purchase of a bulk set of new titles is to check for titles that a given library already owns. One way to do this is to compare the ISBNs in the existing records vs. the potential set of new titles, assuming that you have been given a title list that includes ISBNs.

  - Create a scratch table containing the list of ISBNs for the potential purchase, then copy the ISBNs into that table:<code sql>
BEGIN;
CREATE TABLE scratchpad.isbn_list (isbn text);

-- Replace ISBN_1, ISBN_2, etc with the list of ISBNs, one per line
COPY scratchpad.isbn_list(isbn) FROM stdin;
ISBN_1
ISBN_2
\.

COMMIT;
</code>
  - Then match the ISBNs against the records for a given library. We make the assumption that if you have an undeleted call number attached to the pertinent record, you own at least one copy of the record. The only real "gotcha" is that the isbn column in reporter.super_simple_record is an array, so we match with ANY:<code sql>
SELECT COUNT(*)
FROM asset.call_number
WHERE record IN (
    SELECT id
    FROM reporter.super_simple_record rssr
        INNER JOIN scratchpad.isbn_list sil
            ON sil.isbn = ANY (rssr.isbn)
)
AND owning_lib = 103
AND deleted IS FALSE
;
</code>


=====Show statistic usage on reports=====

<code sql>
select thecount.name, …
from
    reporter.template rt,
    (
        select rtinner.name "name", …
        from
            reporter.report rr,
            (select naming.name, …
             from
                 reporter.template naming,
                 reporter.template clones
             where
                 clones.data = naming.data and
                 naming.folder in (select id from reporter.template_folder where owner = !!!REPLACE ME WITH OWNER ID!!! and shared and share_with = 1) and
                 naming.owner = !!!REPLACE ME WITH OWNER ID!!!
             group by 1
            ) rtinner

        where rr.template = any(…)
        group by 1

    ) as thecount
where
    thecount.name = rt.name and
    rt.owner = !!!REPLACE ME WITH OWNER ID!!! and
    rt.folder in (select id from reporter.template_folder where owner = !!!REPLACE ME WITH OWNER ID!!! and shared and share_with = 1)
order by 4 desc

</code>

You will need to edit that query to look for reports owned by a particular actor.usr.id. Replace "!!!REPLACE ME WITH OWNER ID!!!" with the id from actor.usr.

====== Development ======

=====Generating the release ChangeLog from Git=====

[[http://…]]


=====Flags to use for custom builds of postgres=====
<code>…</code>

===== Custom Dojo Build =====

Creating a build for dojo in Evergreen will decrease the loading time for each page significantly.

To start, you want to understand the construction of a dojo build.

First, retrieve and extract a copy of the [[http://…|dojo source release]].

To create the build:

  cd util/buildscripts
  ./build.sh …

Once you run the build script you should then have a directory named ''release''.
==== Creating a Custom Build ====

If you decide to create a custom build for your organization, you will need to identify the dojo modules that Evergreen uses.
Your first step is to find each file where you include ''dojo.js''.
Then you want to find the module that is being used for each template or interface.
dojofind.pl will do this for you, with slight tweaking for the file path, if necessary.


===== URL Format for "…" =====

  http://…

==== I18N build spell ====

In a fresh checkout of the repo, after the autotools autogen.sh, run the following:

  …

to make some fancy i18n stuffs.

====== SQL Tips & Tricks ======

=====SQL for System Administrators=====

Dan Scott's presentation:

[[http://…]]

=====Using SQL to Generate SQL=====

One trick that is useful in many situations is to use SQL to generate a script. For instance, after adjusting the settings on penalties, you might need to update the penalties on a given class of patrons. While there's a handy database function to recalculate one patron's penalties, SQL can generate the per-patron statements for a whole class of patrons:

  select 'insert into actor.usr_standing_penalty '
      || '( org_unit, usr, standing_penalty, … ) '
      || 'select … '
      || 'from actor.calculate_system_penalties( ' || usr.id || ', ' || ou.id || ' ) ;'
  from actor.usr usr
  join actor.org_unit ou
      on ( usr.home_ou = ou.id )
  join permission.grp_tree pgt
      on ( usr.profile = pgt.id )
  where pgt.name = 'Adult'
  ;

The results of this query will look something like this (I've split the lines up a bit for clarity--each entry would be one long line):

  insert into actor.usr_standing_penalty
  ( org_unit, usr, standing_penalty, … )
  select …
  from actor.calculate_system_penalties( 82111, 104 ) ;
  insert into actor.usr_standing_penalty
  ( org_unit, usr, standing_penalty, … )
  select …
  from actor.calculate_system_penalties( 158001, 109 ) ;
  ...

So, to make use of this, you'd capture the output of the query, and then run it as a script. In psql //\o fileName// allows you to save your output to a file; use //\o// (no file name) to turn off the output. You may need to edit the file to remove a spurious column heading and rowcounts. You can obviate the need to deal with column headers and rowcounts by setting psql to omit those from the output via this psql command: //\t//

Another way to eliminate the need to deal with column headings is to give the column a name which is a comment (this is particularly handy if you might need to run the script again):

  select 'insert into actor.usr_standing_penalty '
      || '( org_unit, usr, standing_penalty, … ) '
      || 'select … '
      as "/* recalc Adult penalties */"
  ...

Note that using a column name that is itself a comment does not work with the //\t// command (because it turns off column headings as well as footers); if you want to keep the column heading, but eliminate the footer, you can use this command: //\pset footer off//

If you prefer to use GUI tools for SQL (some of us do, at least frequently), you can copy the generated statements out of the result grid and run them from there.

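The //\t// and //\o// meta-commands have command-line equivalents, so the generate-then-run cycle can be scripted; a sketch with placeholder file and connection names (-A removes column padding, -t suppresses headers and footers, -f runs a file, -o captures the output):

```
psql -At -U evergreen -d evergreen -f generate_penalties.sql -o run_me.sql
psql -U evergreen -d evergreen -f run_me.sql
```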
=====Using LIMIT to check your results=====

Frequently when creating a query, especially a complex one that will return many rows or just run for a long time, it's helpful to have a relatively small number of rows returned so you can make sure you've got the conditions correct. For instance, in the above example (Using SQL to Generate SQL), it's easy to miss a critical piece like the final '';''. Use the LIMIT keyword to return a small number of rows until you're confident your SQL is working the way you want it to:

  select ...
  ...
  LIMIT 20 -- only return 20 rows
  ;

Just do not forget to comment out or delete the LIMIT before you run it for real, or you may fall victim to the but-I-could-have-sworn-there-were-more-than-that-many syndrome.

=====Listing disabled triggers=====

<code sql>
select
    (select nspname from pg_namespace where oid = (select relnamespace from pg_class where oid = tgrelid))
    || '.'
    || (select relname from pg_class where oid = tgrelid) as "table",
    tgname as "trigger",
    tgenabled
from
    pg_trigger
where
    tgenabled = 'D';
</code>

=====Getting a hierarchically-ordered list of org units=====
<code sql>
-- breadth-first (i.e. all first-level org units, then all second-level OUs, etc.)
WITH RECURSIVE orgs AS
(
    SELECT id, shortname, name, parent_ou
    FROM actor.org_unit
    WHERE parent_ou IS NULL
    UNION ALL
    SELECT next.id, next.shortname, next.name, next.parent_ou
    FROM actor.org_unit next
    JOIN orgs prev ON next.parent_ou = prev.id
)
SELECT id, shortname, name, parent_ou
FROM orgs;

-- depth-first ("tree" order: each org unit followed by its descendants)
WITH RECURSIVE orgs AS
(
    SELECT id, shortname, name, parent_ou, array[id] AS path
    FROM actor.org_unit
    WHERE parent_ou IS NULL
    UNION
    SELECT next.id, next.shortname, next.name, next.parent_ou, prev.path || next.id
    FROM actor.org_unit next
    JOIN orgs prev ON next.parent_ou = prev.id
)
SELECT id, shortname, name, parent_ou
FROM orgs ORDER BY path;
</code>
====== Cstore/pcrud/json_query ======

===== order_by using expressions =====

You know how in SQL a SELECT statement's ORDER BY clause can take expressions, not just plain columns? Like this:

<code sql>
SELECT * FROM foo ORDER BY id = 7 DESC, id ASC;
</code>

That would give you a set of foo where 7 is your first result, and the rest of the results are ordered in ascending order.

json_query doesn't support operators directly in its order_by clause, but you can get the same effect with a transform:

<code>
"order_by": [
    {"…": "…", "transform": "numeric_eq", "params": [7], "direction": "desc"},
    {"…": "…", "direction": "asc"}
]
</code>

Since operators aren't directly supported in the order_by clause of a json_query, this uses the built-in function behind the = operator, which is NUMERIC_EQ().

====== EG Behavior ======
===== Override Receipt Templates Globally =====
In /…/custom.js:
<code javascript>
/////////////////////////////////////////////////////////////////////////
// Overriding receipt templates globally

try {
    // In practice, this should be true in menu.js/…
    if (typeof JSAN != 'undefined') {
        var r = new XMLHttpRequest();
        r.open("GET", "/…/print_list_template", false); // synchronous request
        r.send(null);
        if (r.status == 200) {
            JSAN.use('OpenILS.data');
            var custom_data = new OpenILS.data();
            var custom_templates = JSON2js( r.responseText );
            for (var i in custom_templates) {
                custom_data.print_list_templates[i] = custom_templates[i];
            }
            custom_data.stash('print_list_templates');
            dump('…');
        }
    }
} catch(E) {
    dump('…');
}
</code>

And /…/print_list_template should contain JSON2js-parsable data,
like the type of file you get when you use Export from the Receipt Template
Editor.

**Alternate version with per workstation OU settings**
If you would like to manage print templates centrally with the ability to have different settings
per Org Unit, this variant allows you to have a default print_list_template and then a version for specific Org units, print_list_template-oushortname.

<code javascript>
/////////////////////////////////////////////////////////////////////////
// Overriding receipt templates globally

try {
    // In practice, this should be true in menu.js/…
    if (typeof JSAN != 'undefined') {
        var r = new XMLHttpRequest();
        // request the OU-specific print_list_template file
        r.open("GET", "/…/print_list_template-" + …, false);
        r.send(null);
        if (r.status != 200) {
            // fall back to the default print_list_template file
            r.open("GET", "/…/print_list_template", false);
            r.send(null);
        }
        if (r.status == 200) {
            JSAN.use('OpenILS.data');
            var custom_data = new OpenILS.data();
            var custom_templates = JSON2js( r.responseText );
            for (var i in custom_templates) {
                custom_data.print_list_templates[i] = custom_templates[i];
            }
            custom_data.stash('print_list_templates');
            dump('…');
        }
    }
} catch(E) {
    dump('…');
}
</code>

First /…/print_list_template-oushortname is tried; if it doesn't
exist, /…/print_list_template is used instead. Both files should look
like the type of file you get when you use Export from the Receipt Template
Editor.

===== Clear the added content cache (book covers, etc) from the command line =====

While newer versions of Evergreen offer the ability to clear a record's added content cache from the staff client, sometimes it is handy to do it from the command line.

If you want to clear the cache for a given record by its database ID, you can do the following from the command line (substitute 12345678 for the record ID, and modify your memcached server address if necessary):

<code bash>
export MEMCACHED_SERVERS=127.0.0.1; …
</code>

Thank you Jeff Godin for this tip!

===== Clear cached catalog search results =====

Similar to the spell above, this can be used to clear all cached catalog search results and facets from memcached. This ordinarily isn't needed for production use, but can be useful when testing tweaks to search code or settings.

<code bash>
memcdump --servers localhost | grep open-ils.search | xargs -n 1 memcrm --servers localhost
</code>

===== Clear cached catalog search results for a single term =====

This "cache killer" trick sidesteps the cached results for a single search: append a negated nonsense term to the query. Nothing matches the nonsense term, so the results are unchanged, but the cache key differs and the search runs fresh:

<code>
puffins -oononromfh879ui
</code>

Thank you Mike Rylander for this tip!