Batch Patron Functions
Support for batch loading, batch updating, and batch deleting of patron records.
Batch Patron Load
A user interface similar to Vandelay would allow library staff to load delimited files of patrons to both create new records and update existing records.
- the delimited file will be provided in a format defined by these specifications
- each patron in the delimited file will include a unique identifier
- unique identifiers may not be unique across consortium members (this could be handled either by the loading process, for example by utilizing ident_type, or by the institution creating the delimited file, for example by prefixing the unique ID with the org_unit shortname)
- the delimited file may or may not include a barcode
- Load a delimited file into a staging table
  - files should be converted to UTF-8 prior to loading into the staging table
  - automatically collapse runs of multiple spaces on import to handle blank-padded data
  - the staging table needs to be complete enough to create a full, valid patron record (not provisional)
- Manage Staged Records
  - specify default data for null/blank fields (for example, profile and expire_date)
  - view the staged records
  - the library should be able to view and resolve match conflicts and other data issues
  - the library should be able to edit individual rows of staged data
    - use case #1: the library views the staging table and notices that a handful of international addresses are missing the country due to a formatting problem with the delimited file
    - use case #2: the library views the staging table and notices that blank phone numbers were included as ' - - '
  - the library should be able to use a button to update all expire_dates in the staging table
    - use case: the semester is about to begin and the academic institution has provided a delimited file of students, but forgot to change their script to use the new semester end date as the expire_date. The library notices this but wants to load the records as soon as possible rather than wait for another file to be produced.
  - the library should be able to define match points
  - the library should be able to specify match points during loading
    - use case #1: a library provides a delimited file of student records that includes a university ID. A match point based on the university ID is used to match the ident_value in the on-file record.
    - use case #2: a library wants to begin loading student records. They have not utilized the ident_value field for unique identifiers and want to match on barcode instead.
  - the library should be able to define overlay profiles that specify which fields will be replaced by the incoming record and which fields will be protected from change by the incoming record
    - the overlay process should not replace on-file data with incoming blank data (it is not yet clear whether this should be configurable or a general assumption; a use case is needed before allowing incoming blanks to replace on-file data)
    - use case: a library loads students' college email addresses during an initial load. Circulation staff later update some users' email addresses with ones that are checked more frequently. The library may decide not to overlay incoming email addresses.
  - the loading process should not impact the performance of the running system
  - the loading process should be able to load 5,000-20,000 records without timing out
  - open question: how many addresses should the loader have to handle?
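The cleanup steps above (converting to UTF-8, collapsing padded whitespace, treating placeholders such as ' - - ' as blank, filling defaults, and making the unique ID consortium-unique with an org_unit shortname prefix) could be sketched roughly as below. This is a minimal illustration, not Evergreen code: the default values, the placeholder list, and the `stage_rows` function are assumptions for the example.

```python
import csv
import io
import re

# Hypothetical defaults; real values would come from library configuration.
DEFAULTS = {"profile": "Student", "expire_date": "2025-12-31"}

# Placeholder values to treat as blank (e.g. ' - - ' phone numbers).
BLANK_PLACEHOLDERS = {"", "- -", "--"}

def normalize_field(value):
    """Collapse runs of whitespace and map placeholder junk to empty string."""
    collapsed = re.sub(r"\s+", " ", (value or "")).strip()
    return "" if collapsed in BLANK_PLACEHOLDERS else collapsed

def stage_rows(raw_bytes, org_unit_shortname, encoding="latin-1"):
    """Decode a delimited file, normalize each field, fill defaults for
    blank fields, and prefix the unique ID with the org_unit shortname so
    it is unique across the consortium."""
    text = raw_bytes.decode(encoding)  # convert legacy encoding to Unicode
    reader = csv.DictReader(io.StringIO(text))
    staged = []
    for row in reader:
        clean = {k: normalize_field(v) for k, v in row.items()}
        for field, default in DEFAULTS.items():
            if not clean.get(field):
                clean[field] = default
        # e.g. "12345" -> "BR1:12345"
        clean["ident_value"] = f"{org_unit_shortname}:{clean['ident_value']}"
        staged.append(clean)
    return staged
```

In a real loader these rows would then be inserted into the staging table for review rather than returned as a list.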
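The overlay-profile rules described above (protected fields are never changed; incoming blanks never clobber on-file data) can be expressed as a small merge function. The record shape and the `protected` set are illustrative assumptions, not part of Evergreen's API.

```python
def overlay(on_file, incoming, protected):
    """Merge an incoming staged record onto an on-file patron record.

    - fields listed in `protected` are never changed by the incoming record
    - blank incoming values never replace non-blank on-file values
    """
    merged = dict(on_file)
    for field, value in incoming.items():
        if field in protected:
            continue            # protected from change by the overlay profile
        if value in ("", None):
            continue            # incoming blank must not clobber on-file data
        merged[field] = value
    return merged
```

For the email use case, a library could include "email" in the protected set so staff-entered addresses survive subsequent loads.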
Batch Patron Update and Delete
A user interface similar to record and copy buckets would allow libraries to gather a group of patron records together and perform batch updates or deletes.
- A search interface to search for and select patron records
- Ability to batch update one or more fields
- Ability to set records to deleted
  - delete should not be performed on records with outstanding checkouts, bills, or holds
- A preview of the changes
- Provide a permission to batch update
- Provide a permission to batch delete
- Batch functions should not impact the performance of the running system
- Batch functions should be able to run without timing out
- In addition to copy buckets and record buckets, Evergreen developers have partially built "user buckets", so some of this is already in place; see: https://github.com/evergreen-library-system/Evergreen/blob/daa5e3bcb53d80e9a427b72ba1ef3baaceed1202/Open-ILS/src/sql/Pg/070.schema.container.sql#L205
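The delete safeguard above (no batch delete for patrons with outstanding checkouts, bills, or holds) could be sketched as a partition step run before any deletion. The patron dictionaries and field names here are assumptions for illustration; in Evergreen these counts would come from the database.

```python
def partition_for_delete(patrons):
    """Split a bucket of patrons into (deletable, blocked).

    A patron is blocked from batch delete if they have any outstanding
    checkouts, bills, or holds.
    """
    deletable, blocked = [], []
    for patron in patrons:
        if (patron.get("checkouts", 0)
                or patron.get("bills", 0.0)
                or patron.get("holds", 0)):
            blocked.append(patron)    # must be resolved before deletion
        else:
            deletable.append(patron)
    return deletable, blocked
```

The `blocked` list would feed the preview interface so staff can see which records were skipped and why.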
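One common way to keep batch functions from timing out or degrading the running system is to process records in small chunks, optionally pausing between chunks. This is a generic sketch, not the Evergreen implementation; the chunk size, pause, and `apply_update` callback are assumptions.

```python
import time

def batch_update(records, apply_update, chunk_size=100, pause_seconds=0.0):
    """Apply `apply_update` to each record in small chunks so a long
    batch job neither times out nor starves concurrent work."""
    updated = 0
    for start in range(0, len(records), chunk_size):
        chunk = records[start:start + chunk_size]
        for record in chunk:
            apply_update(record)
        updated += len(chunk)
        if pause_seconds:
            time.sleep(pause_seconds)  # yield to other work between chunks
    return updated
```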