Evergreen Performance Analysis
Use the space below to identify areas in Evergreen that would benefit from a performance analysis, along with any questions you would like such an analysis to answer.
Staff Client
- Acquisitions - There are many slow areas in acquisitions. The following are some specifics I can identify at the moment:
  - Going to General Search takes approximately 4 seconds on our production system to display the form (C/W MARS)
  - Copies under line items take approximately 21 seconds to fully display with the fund drop-down in production (C/W MARS)
- Cataloging - Clicking Add Volumes or the Edit Items/Volume per Bib screen is slow to load (approximately 16 seconds on our production system at C/W MARS)
- Checkout/Checkin - Slow enough to cause workflow issues. In production this generally takes 1-2 seconds, but if the copy record has an alert message it slows to 4-6 seconds. At busy circ desks, items don't get checked in or checked out properly unless staff pay close attention, because of this lag. This is happening on the C/W MARS production system.
- Memory leaks - Is there an inherent problem with the technology used in the staff client (XULRunner, Dojo) that is the source of the memory leaks and other performance problems?
  - Testing and development show that Dojo is not the source of the memory leaks. In many interfaces the cause appears to be event handlers pinning memory at page unload time, preventing the GC from freeing all of the page's memory (see the cleanup sketch after this list). See https://bugs.launchpad.net/evergreen/+bug/1086458 for some initial work; more is coming.
- Slow retrieval of patron records
  - This appears to be caused by a combination of redundant XHR requests and heavy use of XHR requests in general. The theory is corroborated by the speed improvement shown by an experimental server-side (HTML) version of the patron sidebar (see the request-deduplication sketch after this list). Details to follow.
- Editing and saving patron records is slow.
- Staff client batch operations (e.g. updates/deletes from copy buckets)
  - Can we get more specifics on this?
    - Sure, here's one example: https://bugs.launchpad.net/evergreen/+bug/921812
      - Hmm, I don't see this as a general problem with messaging. It is a specific problem with an API call and/or how the client is using it.
        - We get so many "Network Failures" with real-world data that I think we have many more of these "specific problems" with how the client handles such API calls. Deleting from buckets is just one example (see the chunking sketch after this list).
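
A minimal sketch of the kind of event-handler cleanup described under "Memory leaks" above (not Evergreen code; all names are illustrative): track each listener an interface registers and detach them all at unload, so no handler keeps DOM nodes and data pinned in memory.

```typescript
// Hypothetical helper: remember how to detach every listener this
// interface registers, then run the detachments at unload so the GC
// can reclaim the page's memory.
type Cleanup = () => void;
const cleanups: Cleanup[] = [];

function listen(target: EventTarget, type: string, handler: EventListener): void {
    target.addEventListener(type, handler);
    cleanups.push(() => target.removeEventListener(type, handler));
}

// Detach everything when the interface is torn down, so no handler keeps
// a reference chain (DOM node -> closure -> data) alive after unload.
window.addEventListener('unload', () => {
    cleanups.forEach(detach => detach());
    cleanups.length = 0;
});

// Example: a click handler that would otherwise pin its closure's data.
listen(document, 'click', ev => console.log('clicked', ev.target));
```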
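On the patron-record point, a minimal sketch of deduplicating redundant requests (the URL and field names are hypothetical): each distinct request is issued once and the in-flight Promise is shared among all callers, rather than several widgets firing the same XHR while one screen renders.

```typescript
// Cache in-flight requests by URL so repeated callers share one network trip.
const inflight = new Map<string, Promise<unknown>>();

function fetchOnce<T>(url: string): Promise<T> {
    if (!inflight.has(url)) {
        inflight.set(url, fetch(url).then(resp => resp.json()));
    }
    return inflight.get(url) as Promise<T>;
}

// Two widgets asking for the same patron data share a single request.
async function renderSidebar(patronId: number): Promise<void> {
    const url = `/hypothetical/patron/${patronId}`; // placeholder URL
    const [summary, sidebar] = await Promise.all([
        fetchOnce<{ name: string }>(url),
        fetchOnce<{ name: string }>(url), // served from the same Promise
    ]);
    console.log(summary.name, sidebar.name);
}
```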
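And on the bucket example, a minimal sketch of chunking a batch operation so one huge request cannot time out and one failure does not abort the whole batch; `deleteBucketItem` is a hypothetical stand-in for whatever per-item API call the client makes.

```typescript
// Hypothetical per-item call; a non-2xx response is treated as a failure.
async function deleteBucketItem(itemId: number): Promise<void> {
    const resp = await fetch(`/hypothetical/bucket/item/${itemId}`, { method: 'DELETE' });
    if (!resp.ok) throw new Error(`delete failed for item ${itemId}`);
}

// Process the batch in small chunks and collect failures instead of aborting,
// so staff see which items failed rather than a blanket "Network Failure".
async function deleteInChunks(itemIds: number[], chunkSize = 20): Promise<number[]> {
    const failed: number[] = [];
    for (let i = 0; i < itemIds.length; i += chunkSize) {
        const chunk = itemIds.slice(i, i + chunkSize);
        const results = await Promise.allSettled(chunk.map(deleteBucketItem));
        results.forEach((r, idx) => {
            if (r.status === 'rejected') failed.push(chunk[idx]);
        });
    }
    return failed;
}
```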
Template Toolkit Catalog
- Displaying records with many items that have monographic parts.
  - Sample record showing slow retrieval on a record with 1229 copies and 1229 parts - http://egtraining.noblenet.org/eg/opac/record/3189583
  - Sample record showing faster retrieval for the same title as above, but with 1236 copies and no parts - http://egtraining.noblenet.org/eg/opac/record/972760
Messaging (OpenSRF)
- How does OpenSRF compare in performance (and features) to other modern messaging frameworks?
  - Perhaps an alternate question is: how does XMPP compare in performance to other messaging frameworks? Since nearly every line of code in Evergreen depends on the transparency provided by OpenSRF, replacing that layer would mean a rewrite. I don't have timings for other messaging frameworks, but OpenSRF adds between 0.5 ms and 10 ms per inter-application API request, and no measurable time to intra-application API requests (a timing sketch follows this list).
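
A minimal sketch of how a per-request figure like the one above can be measured (the client binding is abstracted behind a hypothetical `call` parameter): time a cheap echo-style method many times, so application work stays out of the measurement and the average approximates the messaging overhead itself.

```typescript
type ApiCall = (service: string, method: string, params: unknown[]) => Promise<unknown>;

// Average round-trip time, in milliseconds, over `samples` cheap requests.
async function measureOverhead(call: ApiCall, samples = 100): Promise<number> {
    const start = Date.now();
    for (let i = 0; i < samples; i++) {
        // An echo-style no-op target is assumed here; adjust for the local setup.
        await call('opensrf.settings', 'opensrf.system.echo', ['ping']);
    }
    return (Date.now() - start) / samples;
}

// Example with a stand-in implementation; swap in a real client binding.
const fakeCall: ApiCall = async () => 'pong';
measureOverhead(fakeCall).then(ms => console.log(`~${ms.toFixed(2)} ms per request`));
```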
Database
- Catalog search - Is there a way to optimize searching in the catalog so that users get faster results and we are able to start re-implementing things like search.relevance_adjustment to provide boosts to relevance ranking?
- Optimizing some queries to improve performance - do we have examples of specific queries that can be optimized?