dev:testing:performance_issues
Revisions: 2013/02/27 16:35 (klussier - adding some performance issues and moving batch operations out of OpenSRF); 2022/02/10 13:34 (current - external edit, 127.0.0.1)
====Staff Client====
  * Acquisitions - There are a lot of slow areas in acquisitions. The following are some specifics I can identify at the moment:
    * Going to general search takes approximately 4 seconds on our production system to display the form (C/W MARS)
    * Copies under line items take approximately 21 seconds to fully display with the fund drop-down in production (C/W MARS)
  * Cataloging - Clicking Add Volumes or Edit Items/
  * Checkout/
    * The alert message angle is probably coming from the "fancy prompt"
    * phasefx -- There is not an issue with an alert message on checkin, but if there is a holds slip or paging slip ("
  * Memory leaks - is there an inherent problem with the technology used in the staff client (XULRunner, Dojo) that is the source of the memory leak problem and other performance problems?
    * Testing and development shows that Dojo is not the source of the memory leaks.
    * This looks to be caused by a combination of redundant XHR requests and the use of many XHR requests in general.
  * Editing and saving patron records is slow.
  * Recurring issue of a blank screen appearing on the reporter on the first attempt to load the interface. You don't see "
  * Recurring issue in numerous look-up areas including, but not limited to: Items Out, Item Status, Checkin, etc. "
  * Staff client batch operations (e.g. updates/
    * Can we get more specifics on this?
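The redundant XHR requests noted above are often addressed by deduplicating identical in-flight requests on the client. A minimal sketch in TypeScript, assuming a generic promise-returning fetch helper; `makeDedupedFetch`, `fetchJson`, and the URL are illustrative, not Evergreen APIs:

```typescript
// Sketch: collapse redundant in-flight requests by caching the pending
// Promise per URL. Illustrative names only, not Evergreen code.
type Fetcher = (url: string) => Promise<unknown>;

function makeDedupedFetch(fetchJson: Fetcher): Fetcher {
  const inFlight = new Map<string, Promise<unknown>>();
  return (url: string) => {
    const pending = inFlight.get(url);
    if (pending) return pending;       // reuse the request already on the wire
    const p = fetchJson(url).then(
      (v) => { inFlight.delete(url); return v; },
      (e) => { inFlight.delete(url); throw e; },
    );
    inFlight.set(url, p);
    return p;
  };
}

// Demo: a stub "network" call that counts invocations.
let calls = 0;
const stubFetch: Fetcher = async (url) => { calls++; return { url }; };
const deduped = makeDedupedFetch(stubFetch);

// Three concurrent requests for the same URL hit the stub only once.
Promise.all([deduped("/settings"), deduped("/settings"), deduped("/settings")]);
console.log(calls);                    // 1
```

The cache holds only *pending* promises and clears each entry on settle, so this changes no semantics for sequential requests; it only merges concurrent duplicates.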
====Template Toolkit Catalog====
  * Displaying records with many items that have monographic parts.
    * Sample record showing slow retrieval on a record with 1229 copies and 1229 parts - http://
    * Sample record
  * Downloading CSV of large checkout history, possibly due to a timeout - https://
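Rendering all 1229 copies at once is one plausible contributor to the slow record display above; a common mitigation is to fetch and render the copy list a page at a time. A minimal sketch, where `Copy` and `getCopyPage` are hypothetical stand-ins rather than Evergreen structures:

```typescript
// Sketch: page through a large copy list instead of rendering it all at once.
// Copy and getCopyPage are illustrative stand-ins, not Evergreen code.
interface Copy { barcode: string; location: string; }

function getCopyPage(all: Copy[], page: number, pageSize: number): Copy[] {
  const start = page * pageSize;
  return all.slice(start, start + pageSize);
}

// 1229 copies, as in the sample record above, served 50 at a time.
const copies: Copy[] = Array.from({ length: 1229 }, (_, i) => ({
  barcode: `B${i}`, location: "STACKS",
}));

const firstPage = getCopyPage(copies, 0, 50);
const lastPage = getCopyPage(copies, 24, 50);   // 1229 = 24 * 50 + 29
console.log(firstPage.length, lastPage.length); // 50 29
```

In practice the paging would happen server-side (fetch only one page per request), but the slicing logic is the same.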
====Messaging (OpenSRF)====
  * How does OpenSRF compare in performance (and features) to other modern messaging frameworks?
    * Perhaps an alternate question is: how does XMPP compare in performance to other messaging frameworks?
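Either comparison above needs a repeatable measurement before it can be answered. A minimal round-trip timing harness sketch; the in-process `echo` service is a stand-in, and a meaningful test would point `send()` at an OpenSRF service and at an alternative broker under identical load:

```typescript
// Sketch: measure mean round-trip time for strict call-and-response
// messaging. The echo "service" is in-process and illustrative only.
type Service = (msg: string) => Promise<string>;

const echo: Service = async (msg) => msg;    // stand-in for a remote service

async function measureRoundTrips(send: Service, n: number): Promise<number> {
  const start = Date.now();
  for (let i = 0; i < n; i++) {
    await send(`ping ${i}`);                 // one request, one response
  }
  return (Date.now() - start) / n;           // mean ms per round trip
}

measureRoundTrips(echo, 1000).then((ms) => {
  console.log(`mean round trip: ${ms.toFixed(4)} ms`);
});
```

Per-message overhead (serialization, routing, broker hops) dominates at small payloads, which is exactly where an XMPP-based stack and a lighter broker would differ most.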
====Database====
  * Catalog search - is there a way to optimize searching in the catalog so that users get faster results and are able to start re-implementing things like search.relevance_adjustment to provide boosts to relevance ranking?
  * Optimizing some queries to improve performance - do we have examples of specific queries that can be optimized?
  * Post-2.4 holds targeter slowness - https://
====Thoughts on General Improvement====
  * (I'll tag my name by what I post in case anything doesn't
  * combining API calls (as mentioned above) -- berick
    * this is huge, and it's the kind of thing that can only come after a UI has settled in.
  * use streaming API responses -- berick
    * we traditionally rely way too much on repetitive call-and-response when streaming would cut out much of the network back-and-forth
  * reduced logging on the server -- berick
  * ability to force DB calls to master, without a transaction, for retrieving authoritative data -- berick
    * as opposed to repetitively creating, using, then rolling back DB transactions,
    * I've had this idea kicking around for a while ... so, I've put up a [[https://
  * consider more aggressive caching (with cache invalidation) of things like org unit settings values, etc. within API calls -- berick
  * kill synchronous XMLHttpRequest calls with fire -- berick
    * synchronous requests cause the staff client to block pending the results of potentially long-running API calls
    * minor problems include briefly unresponsive UIs
    * in severe cases, this causes "
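The streaming suggestion in the list above can be sketched with an onresponse-style callback: the caller receives each record as soon as it is ready instead of waiting for the whole buffered result set. `fetchRecord` is an illustrative stand-in for one unit of server work, not an Evergreen API:

```typescript
// Sketch: buffered vs. streamed delivery of a multi-record response.
// fetchRecord is an illustrative stand-in, not Evergreen code.
async function fetchRecord(id: number): Promise<string> {
  return `record ${id}`;               // stand-in for one API response
}

// Buffered: the caller sees nothing until every record has arrived.
async function fetchAll(ids: number[]): Promise<string[]> {
  const out: string[] = [];
  for (const id of ids) out.push(await fetchRecord(id));
  return out;
}

// Streamed: a per-record callback fires as each record arrives, so the UI
// can render the first row while later rows are still in flight.
async function fetchStream(
  ids: number[],
  onRecord: (rec: string) => void,
): Promise<void> {
  for (const id of ids) onRecord(await fetchRecord(id));
}

const seen: string[] = [];
fetchStream([1, 2, 3], (rec) => seen.push(rec)).then(() => {
  console.log(seen.length);            // 3, delivered incrementally
});
```

Both variants transfer the same data; the streamed form cuts perceived latency (time to first row) rather than total time, which is usually what makes a large list feel slow.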
====Draft RFP====
Seeking further feedback on a [[dev:
dev/testing/performance_issues.1362000927.txt.gz · Last modified: 2022/02/10 13:34 (external edit)