Explorations and milestones for the browser staff client development project.
Author: Bill Erickson
After expressing my own concerns (annoyances?) about using iframes for HTML UI integration, Dan Wells replied to the list, "I don’t think we should rule out using iframes for catalog integration. Iframes are actually pretty close in many respects to the way the catalog 'integrates' into the current (XUL) staff client." Point taken and ran with…
I've integrated patron registration and the catalog into the browser client in a manner similar to the XUL client using iframes. In most cases, it's as simple as passing in "xulG" functions to mimic the ones on offer by the XUL client. In a few cases, we have to edit the original UIs (e.g. avoiding inline hrefs for "oils://" and the like), but thus far that has been the exception. I'm still working through the different behaviors, but I have no reason to think we can't port all of the actions from the XUL client over to the browser.
I have run into one snag with this approach, related to how WebKit-based browsers (Chrome/Safari) handle browsing history for iframes in conjunction with HTML5 pushState. After navigating forward in the iframe and then clicking the browser's Back button, instead of returning to the previous iframe page as one would expect (and as Firefox does), the browser navigates to the previous pushState URL. That will always be the URL of the containing page, which effectively does nothing, since that URL doesn't change.
If this sounds like gibberish, you can see it in effect on my dev site. Simply perform a record search, then try using the back button to return to the search page:
https://bill-dev2.esilibrary.com/eg/staff/cat/catalog/index (admin / demo123)
Compare that to the page below, which uses an iframe but no pushState routing. There, using the Back button after a search actually returns the user to the previous page:
https://bill-dev2.esilibrary.com/~berick/cat.html
I have yet to track down a bug entry for this (I did find a mention at https://github.com/jashkenas/backbone/issues/799), though I'm fairly sure this is in fact a browser bug. BTW, I did try wiring up handlers for myIframe.contentWindow.history.back(), etc., but they resulted in the same behavior. I'll keep an eye out for any developments on this.
Author: Bill Erickson
The past month I've been in the trenches, mainly working on circulation UIs. Evergreen is highly configurable, particularly around circulation, which means there's a lot of code to work through. I think I'm over the main hump, though, with checkout/checkin/renewal and their accompanying actions effectively done. There are still a few things to tend to (e.g. permission failure override dialogs), but I'm out of the woods.
Chrome supports (and Firefox eventually will) the HTML5 <input type='date'/>, which presents a date picker UI to the user without the need for external JavaScript. What neither yet supports, though, is the type='datetime' input, which offers both a date and a time selector. There are a few places in Evergreen where users need to select a time, like editing due dates. I don't personally see having to enter hh:mm:ss as burdensome until the selectors are implemented (assuming the necessary UI hints), particularly considering how rarely users need to enter a time, but I could be missing something. Input welcome.
I know people don't like entering yyyy-mm-dd for dates, though, so if Firefox is unable to get date inputs working in time, we may have to use an external solution, like the angular-ui-bootstrap datepicker. I've coded the date input as an Angular directive (<input eg-date-input ng-model=…/>), so in theory the main work of adding an external option should happen directly within the directive rather than being spread throughout the codebase. (Though, really, there are only a handful of date selectors…)
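To make the idea concrete, here is a hypothetical sketch (not the actual eg-date-input internals) of the kind of parse/format helpers such a directive could delegate to, so that swapping in an external datepicker later only touches the directive's internals:

```javascript
// Parse "YYYY-MM-DD" (optionally followed by "HH:MM:SS") into a Date,
// or return null when the input does not match. Illustrative only.
function parseDateInput(value) {
    var m = /^(\d{4})-(\d{2})-(\d{2})(?:[ T](\d{2}):(\d{2}):(\d{2}))?$/.exec(value);
    if (!m) return null;
    return new Date(
        Number(m[1]), Number(m[2]) - 1, Number(m[3]),
        Number(m[4] || 0), Number(m[5] || 0), Number(m[6] || 0)
    );
}

// Format a Date back into the form expected by <input type="date">.
function formatDateInput(d) {
    function pad(n) { return (n < 10 ? '0' : '') + n; }
    return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate());
}
```

The directive would then only need to wire these into ngModel's parsers/formatters, regardless of which widget renders the picker.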
PCRUD didn't exist for the first few years of the XUL client, so the client relies heavily on middle-layer API calls which simply collect data. In the browser client, I'm leaning heavily on PCRUD (either directly or via the flattener) for these data collection calls, since it's considerably faster. (Plus, PCRUD is C code, so it requires less RAM and CPU.) I mention this here because PCRUD uses its own set of permissions for controlling access to data. In every case where I added support to PCRUD for retrieval of a certain type of object (in the IDL), I made sure to match the permission from the corresponding API call. However, there will certainly be cases where the permissions don't quite match. In those cases, we may have to add or modify the permission on the relevant PCRUD entry in the IDL. Something to watch out for.
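For reference, PCRUD's permissions live in the IDL as permacrud entries attached to each class. A sketch of the shape (the permission and context field shown are illustrative, not copied from a real fm_IDL.xml entry):

```xml
<class id="circ">
  <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1">
    <actions>
      <!-- permission checked at the org unit found in context_field -->
      <retrieve permission="VIEW_CIRCULATIONS" context_field="circ_lib"/>
    </actions>
  </permacrud>
</class>
```

When a PCRUD retrieval fails where the equivalent API call succeeds, this is the spot to compare against the API's permission check.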
Chrome has a security feature / limitation which limits the number of tabs opened from a single user action to one. You can see how this would be ideal for general web browsing. It creates some barriers for us, though. For example, if you selected 5 patrons in a list and chose the "open these patrons in a new tab" action, only one tab would open with one patron. (Note that no such action exists in the current code for this very reason).
We have to approach this from a different angle. For example, providing links in list rows to the same resources allows users to directly ctrl-click the links and open the resources in a new tab. Instead of selecting 5 items then picking an action, click on the desired link within each row directly. However, this creates a requirement that certain list columns must be visible for accessing the action. For example, to access Item Details from a list of checkouts, the barcode (link) column would have to be visible, regardless of whether you wanted to see the barcode in the list. I don't know if that's a reasonable expectation.
We can use row-specific context menus, like how the XUL client duplicates the 'actions for selected items' menus as right-click context menus. Making the default row double-click action (configurably) open items in new tabs is another option. E.g. double-clicking each patron in a list of search results opens the patron in a new tab. I believe this is consistent with the XUL client. We'll get there. More to think about…
Author: Bill Erickson
The primary UI work for the last week and a half has been the patron billing interfaces. Most of the functionality is there, though a few odds and ends remain, like styling rows for items that are still accruing fines.
You can see it on my development server: https://bill-dev2.esilibrary.com/eg/staff/circ/patron/7/bills
You’ll notice that the summary billing data is laid out differently than the XUL client. I strive to avoid moving things around unless it really seems necessary. This is one case where the interface seems to have accreted information over the years. (E.g. at least three data points are repeated in the page). I personally find this confusing, so instead of mimicking the UI as it is in the XUL client, I’ve presented the information in a more tabular fashion, all in one place. I can accept if I’m the odd man out here, but I find this much easier to visually navigate. None of this is set in stone, of course, and comments are appreciated.
We now have a functional print templates structure in place. The templates are HTML chunks taken practically verbatim from the stock XUL print templates, modified to use Angular syntax and slightly different data structures. Each template lives in its own .tt2 file on the server (e.g. view source on https://bill-dev2.esilibrary.com/eg/staff/share/print_templates/t_bills_current) instead of being embedded within the application. One benefit of this approach is that administrators have the option of applying modifications to stock templates (per org, etc.) in the same manner as TPAC templates or other browser staff templates. It also means stock templates are locale-aware.
At print time, the application first looks for a locally saved version of the template and, if it does not find one, it fetches the stock template from the server via XMLHttpRequest (using Angular’s $http service). Once fetched, it’s inserted into a hidden DOM node and $compile’d using the print scope data provided by the caller. The end product is sent to the printer, either directly from the DOM for browser printing or as an HTML string when printing remotely.
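Sketched in plain JavaScript (names are illustrative; the real code uses Angular's $http and $compile), the lookup order amounts to:

```javascript
// Prefer a locally saved template; otherwise fall back to fetching the
// stock .tt2 template from the server. localStore stands in for
// localStorage/Hatch, fetchStock for an $http GET.
function getPrintTemplate(name, localStore, fetchStock) {
    var local = localStore['eg.print.template.' + name];
    if (local !== undefined) {
        return Promise.resolve(local);
    }
    return fetchStock('/eg/staff/share/print_templates/' + name);
}
```

The resolved HTML chunk is what then gets inserted into the hidden DOM node and compiled against the caller's print scope.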
We also have a way to modify and save templates locally via the new template editor.
E.g. https://bill-dev2.esilibrary.com/eg/staff/admin/workstation/print/templates
Like the XUL template editor, users can modify templates and preview the changes. We only have 2 templates for now, but more will follow as needed.
To see one in action, go to the Patron Bills page and choose Print Bills in the grid’s “Actions” drop-down menu.
The print template only has access to the slim set of data provided by the caller for template variables, plus the base Angular template control structures (for looping, etc.). However, it is currently also possible to insert a <script> chunk (or other JS) into the template to access window objects. This is a security issue which will require special care. At minimum we’ll need to scrub some things (<script> tags, certain attributes) from the templates before loading them into the DOM. Suggestions appreciated.
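As a starting point for discussion, a minimal sketch of the kind of scrubbing meant above. Note this is illustrative only: a real sanitizer should operate on a parsed DOM (or use a vetted library), since regex-based filtering is easy to bypass.

```javascript
// Strip the obvious script vectors from a template string before it is
// inserted into the DOM. Not exhaustive -- a sketch of the idea only.
function scrubTemplate(html) {
    return html
        // drop <script>...</script> blocks entirely
        .replace(/<script\b[^>]*>[\s\S]*?<\/script\s*>/gi, '')
        // drop inline event handler attributes like onclick="..."
        .replace(/\son\w+\s*=\s*(?:"[^"]*"|'[^']*'|[^\s>]+)/gi, '')
        // neutralize javascript: URLs in href/src
        .replace(/\b(href|src)\s*=\s*(["']?)\s*javascript:[^"'>\s]*\2/gi, '$1=$2#$2');
}
```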
Author: Bill Erickson
I've started working through some actual functional interfaces.
I've started fleshing out the dev targets for the other dev sprints
Is there any chance we can pick one patron interface layout instead of supporting both the vertical and horizontal displays? Maintaining both adds time costs for maintenance, testing, new features, documentation, etc. Maybe we can have the best of both without supporting both?
What I have now in the prototype is kind of a hybrid. The patron summary lives along the left and the patron search form lives along the top. Today I added an option to toggle open/close the patron summary view (stickiness pending), which allows the various grids to expand to the full page width. To me, the vertical display is nice because you get more vertical space (which is particularly nice on wide-screen displays) and the horizontal display is nice because you can easily hide the patron summary. The current prototype code is an attempt to have both.
For example: https://bill-dev2.esilibrary.com/eg/staff/circ/patron/786/items_out – See the arrows just to the right of the patron's name in the left summary section.
Are there other aspects of either layout that we need to address? Am I living in a fantasy?
There has been some sporadic discussion about how links (<a…> tags) should behave in the browser client. Sometimes a link should open a new tab or a new window; sometimes it should replace the content of the current page. Since users have different preferences for different contexts, sometimes for the same link, I propose that we let staff decide for themselves how each link should be opened, on a per-click basis.
To do this, we code the interface to present all links as standard replace-the-current-page links. If staff want different behavior for a link, they can simply change how they click on it.
These are the commands for controlling link behavior in Chrome. I'm fairly sure they're the same in Firefox, but I have not verified:

- Click: open in the current tab
- Ctrl+Click (Cmd+Click on Mac): open in a new background tab
- Ctrl+Shift+Click: open in a new foreground tab
- Shift+Click: open in a new window
- Middle-click: open in a new background tab
(Note that these controls also work on the navigation bar entries, which are also standard links).
Coding all the links the same creates consistent behavior and gives the user the power to decide for themselves when they want to open new tabs/windows or simply navigate to the next interface.
Concerns / Comments / Questions?
Author: Bill Erickson
I've created some "30,000 foot view" project tracking pages:
As development targets are completed and the time comes for additional eyes and user testing, they will be struck through (with a date). I expect to start working through more of these in earnest very soon.
In the early days of the prototype, I discussed unit tests with Jasmine.
I also briefly discussed JavaScript Minification.
Recently, I started experimenting with Angular Hotkeys (thanks to Dan Scott for the suggestion). This led me down the path of considering how we want to manage dependency retrieval for the browser client. We're using Angular, Angular-Route, Angular-Bootstrap, and now Hotkeys. What's more, we don't want to continue using remotely served content (e.g. from the Google CDN, bootstrapcdn, etc.). We need something to fetch the files and put them in the right place for us.
These three general areas of work are common to all modern JavaScript projects, and the Internet has lots of tools to help accomplish them. After some research, including discussions with other developers at the Code4Lib conference (go figure!), I've homed in on a nice, workable solution. Here are the basics:
These are all Node.js plugins, which means once Node.js is installed as a dependency, the rest are managed via the Node package manager. I'm still hammering out a few details, but here's the process as it stands today: http://yeti.esilibrary.com/dev/pub/README.browser-staff-build.html – this file is also in the repo. This describes installing the needed build tools, how to fetch dependencies, how to run unit tests, and how to create the final minified versions of files.
(To use the minified files from the browser, it currently requires a variable be set in the staff web template, but this will likely evolve into something more configurable / dynamic, like an environment variable, etc. It might be cool to support user-toggleable use of minified vs. non-minified files for standard production vs. debugging use).
These are certainly overkill for fetching some files and putting them in the correct places, but we need an external solution for headless unit test running and minification. Since we're already knee-deep in Node.js just for unit test running, why not let it solve all of our problems for us?
NOTE: This only affects packaging and installation; the browser client does NOT require Node.js, etc.
As part of this work, I also pushed some new unit tests for our Angular services.
Author: Bill Erickson
It's pretty much what you'd expect, with some additional features. It displays all workstations registered on the local machine and allows users (with the correct perms) to select an alternate workstation as the default, or as a temporary login alternative. Whereas in the XUL client you had to install the client multiple times, or use tricks with the domain name to make it appear like a different client, to get multiple workstation registrations, now you can register multiple workstations and simply select a different one from a menu.
The login page now limits workstations to those registered on the current machine.
This one is particularly useful for debugging. It lists local (localStorage) and remote (Hatch) stored preferences, displays their content, and, with the correct permissions, allows the user to remove preferences. In the early stages, when the content of different preferences may still be evolving, this power could be invaluable for locally repairing preferences without having to locate the preferences directory and manually edit or delete files.
Note: We may eventually have data we don't want to display here for security reasons.
All of the existing prototype UIs which presented tabular data now use the grid. I did this, even though a few UIs are out of scope for "Sprint 1", so that I could ensure that needed features are all there. As part of this, there were lots of additions and fixes. Also, because the grid encourages paged data displays, the patron search, item-out, and holds lists are now paged, so the UI will not attempt to render hundreds (thousands?) of items if a patron has huge numbers of items out / holds. Also, no 50-patron limit on searches.
Related, I discovered AngularJS Batarang this week. It's a Chrome Dev Tools plugin which profiles AngularJS applications. I've been profiling the grid and finding small changes to speed things up. I expect this tool to be useful in lots of contexts.
Firefox version 29 was released this week. This release officially brings support for SharedWorkers, which we use for sharing a global WebSocket connection. However, WebSockets are still not supported in Worker threads: https://bugzilla.mozilla.org/show_bug.cgi?id=504553. With the introduction of shared workers, I expect more heat to be applied to this bug. (I see a patch has appeared since I first discovered it.)
When this is resolved, FF and Chrome will both be good for global WebSocket connections in PC browsers. Mobile devices will probably always require a single connection per page, since there does not seem to be much push for SharedWorkers in mobile browsers. That makes sense, since mobile activity generally involves only one, or at most a few, tabs on a given domain.
The last big decision we have to make before we begin in earnest with porting interfaces regards the integration of the catalog into the browser client.
How will this work?
The catalog already knows when it's being accessed from a staff context by detecting a workstation. We could teach the catalog to add staff functionality directly when a staff login is detected. For example, on the record detail page, we could display a small summary bar along the top (similar to the XUL version) with an "Actions for this Record" menu. This seems straightforward enough.
Do we foresee any problems with this approach? Alternate suggestions? I'd like to avoid embedded iframes and the like if possible…
Of course, we still have to decide how to build it, e.g. do we integrate Angular/Bootstrap? Build something by hand?
When viewing the catalog, do staff need access to the main staff menu bar along the top of interface? If the answer is yes, then we're squarely in the camp of making the catalog an Angular app when accessed in a staff context.
To be clear, none of this should affect the regular patron view/behavior of the catalog.
Thoughts?
Author: Bill Erickson
For printing HTML, we are using the brand-new JDK version 8, which includes, as part of the standard edition, classes for displaying and printing web pages. The main classes in question are WebView and WebEngine.
http://docs.oracle.com/javase/8/javafx/api/javafx/scene/web/package-summary.html
The way we use them is pretty basic: pass in some HTML and tell it to print itself. It can resolve URLs for remote resources, like images, CSS, scripts, etc., which gives us a lot of flexibility.
To use these classes, we have to construct a JavaFX Application and allow it to run in its own thread, launched from the main thread. Because of this, I had to rearrange the existing Jetty application to run as an embedded server, whereas before it was a standalone server. (This is really what Jetty was designed for, so that's all gravy.) JavaFX also places restrictions on which threads can modify scene objects, so there's some message queuing and a few watcher threads sprinkled in for good measure.
Because of the overhaul, I started a new branch for Hatch at http://git.evergreen-ils.org/?p=working/random.git;a=shortlog;h=refs/heads/collab/berick/hatch2
To print JavaFX Nodes, I had to move over to the javafx.print library:
http://docs.oracle.com/javase/8/javafx/api/javafx/print/package-summary.html
To test all of this, I've added an initial printer configuration interface to the browser client, under a new Administration menu. In here, users configure printers for different print contexts (just like the XUL client), currently one of default, receipt, label, mail, and offline. Configuring a printer involves launching the native Java print dialog. After settings are applied, Hatch reads the settings from the dialog and returns them to the caller to be displayed and stored as a preference. When the time comes to print, we load the preference for the requested context and pass the config back to Hatch to use for configuring the printer just prior to performing the print action.
This structure seems to work fairly well, but, to protect you from yourself, Java will sometimes silently modify settings if the selected settings are invalid. This can cause some confusion. In particular, on my Mac I can't get any settings to stick, and I'm not sure whether they are all invalid from Java's perspective (based on my printer, etc.) or something is specifically wrong with the Java library on Macs. JDK8 is brand new, after all, and could have some rough edges on lesser-used OSes. Windows and Linux seem to fare much better, but they can still be finicky at times.
We're currently storing the following preferences for each print context:
What else do we need?
After some more testing, I'd like to put together a bundle that brave testers can download and run on their local machines so that we can start testing in different environments.
I have a mixed bag of updates this week:
I put together a functioning infinite scroll grid using the angular-ui toolkit's "ng-scroll" directive. Based on my findings there and subsequent discussion with the community about the benefits of scrolled vs. paged grids, I rearranged much of the grid code (a lot of which I needed to do anyway) to operate as a paged grid by default, but one that can be easily turned into a scrolled grid. The transition would require a tiny JS translator object and hiding some of the paging controls. It occurred to me this is easier than going in the opposite direction. This provides some flexibility and may give users a chance to try either approach.
I reached a milestone recently while printing CSV output from a grid through Hatch (the print/storage service) running on my desktop. It was the first practical application of printing in this manner. Yay! It was only text (CSV), but that's a start.
The Hatch-connecting code is designed to fail gracefully and use the browser's own printing if no connection can be made. Browser printing is done (for now, anyway) via print CSS media and a standard $window.print() on the current page, with no window.open() (and the various browser oddities that ensue) required.
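The fallback logic boils down to something like the following sketch (names are illustrative, not the actual service code):

```javascript
// Try the Hatch connection first; if it is absent or the call fails,
// fall back to the browser's own print path (CSS print media + print()).
function printHtml(html, hatch, browserPrint) {
    if (hatch && hatch.isConnected()) {
        return hatch.print(html).catch(function () {
            // Hatch died mid-call; don't lose the print job.
            return browserPrint(html);
        });
    }
    return browserPrint(html);
}
```

In the real application, browserPrint would render the HTML into the page and call $window.print().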
Based on discussions at #egils14, I put together a small status bar which lives at the bottom of the page. For now, it shows connectivity information for the server WebSockets connection and the Hatch WebSockets connection. It also supports application-generated messages. It's cute, but I think it could also be very useful, and it will hopefully provide a more consistent mechanism for alerting staff to important information. More on this to follow…
I've encountered a number of challenges migrating to SSL WebSockets with untrusted certificates.
The solution to all of these is to use trusted certificates. This can be done by using a certificate from a trusted authority on your server or by configuring your browser to trust a locally-generated certificate, the kind we typically generate by default for Apache in Evergreen. There are a variety of free and low-cost certificate providers. Two that came up in #evergreen recently were namecheap GeoTrust RapidSSL and startssl.
The bigger challenge here will be providing trusted certificates to the local print storage service. I'll go out on a limb and say that providing trusted certificates for every workstation running Hatch is not a realistic goal. The more likely solution will be to generate a local certificate during the installation process and configuring the browser to trust said certificate. Alternate suggestions appreciated.
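A minimal sketch of what such an installation step might do (file names, key size, and validity period are all illustrative): generate a self-signed certificate with openssl, after which the browser or OS trust store must be told to trust it.

```shell
# Generate a self-signed cert for the local Hatch service, valid for a year.
# The browser would then be configured to trust hatch-cert.pem.
openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout hatch-key.pem -out hatch-cert.pem \
    -days 365 -subj "/CN=localhost"
```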
There are two ways we can integrate the new browser interfaces into staff workflows. Staff using a certain module can either use the browser directly as their primary work environment, or we can (theoretically) integrate browser-based interfaces piecemeal into the existing XUL app. The purpose of the mixed (browser + XUL) approach is to ease staff into the new interfaces, to encourage earlier adoption by replacing functionality directly in the client, and to avoid the case where staff have to switch back and forth between two different environments.
The mixed approach sounds very appealing, but it does not come for free. To integrate browser apps into XUL, there are a number of technical issues we have to address. These are the ones I've encountered so far:
Individually, these are all relatively minor issues. Put them together, then toss in the undiscovered issues, and a complicated picture forms. If we go the mixed route, we would essentially be building the interfaces on two similar, but separate environments. Custom code (eventually discarded) would need to be written and each environment would require its own review and testing phases for each new interface. These add overhead and would extend the duration of the coding portion of the project.
Finally, I'm *mostly* convinced that the mixed approach is feasible. We won't know for certain until we dive in.
Here's where I need your input…
On the one hand, the development phase will be shorter; on the other, we have a longer project overall, but one where individual interfaces may be integrated earlier into the staff workflow. Both options are appealing.
What are the other aspects of this decision? What is your preference?
I found some code which demonstrates printing of web pages in Java:
http://www.javacodegeeks.com/2013/07/introduction-by-example-javafx-8-printing.html
It's using features not available until Java 8, due out this week. This will, in theory, allow us to print CSS-driven HTML, graphics, etc.
Found this simple chunk of code for automatically stretching Bootstrap grid columns to fill the available space. It would need to be Angular-ized.
This is similar to the "auto" width column settings in the Dojo grid column configuration UI, which cause a column to expand to fill the empty space. With this, a 12-column grid could have 8 columns of data without leaving 4 columns unused.
If we used something like this in combination with increasing the default number of grid columns (via Bootstrap Customize), we could have a nice high column count (e.g. 16 or 24) that flows nicely with only, for example, 8 columns of data.
This (again, tiny) chunk of code does not work with mixed widths, since all columns have to have -auto for correct layout, but that could probably be added without too much trouble.
This doesn't give us magically ideal widths, like a table would provide, but it's a step closer.
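For the curious, the usual trick behind this kind of auto-stretching is table-style layout, where cells share whatever width is left over. A sketch with illustrative class names (not the actual chunk of code referenced above):

```css
/* Table layout lets the auto columns split the leftover width, similar
   to the "auto" width setting in the Dojo grid column configuration. */
.eg-grid-row { display: table; width: 100%; table-layout: auto; }
.eg-grid-row > .col-md-auto { display: table-cell; float: none; }
```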
The final (known) big item which requires resolution before coding can begin in earnest is the display grid. This will be the workhorse UI component, used in any interface that provides a list of data (list of checkouts, list of holds, list of holdings, list of records, every conify interface, etc.), which is many if not most UIs.
Grid requirements:
I put together a proof of concept grid using Bootstrap Grids for the template.
It worked well, but has some obvious pluses and minuses, so I could really use some additional input on what we as a community see as the most important aspects of this oh-so-important bit of UI. What should we use as the underlying markup to drive the grid? I’ve listed 4 options below, there may be more:
1. Bootstrap Grids
2. Tables
3. We build our own non-Bootstrap grid-based tool.
4. 3rd-Party Grid, e.g. http://angular-ui.github.io/ng-grid/
The pros and cons here will change depending on the tool. Presently, this is my least favorite option. Since this UI will be heavily modified and customized to suit our needs, I think we need to use something we build ourselves. A good example of how 3rd-party code can create more work than it saves is the Dojo grids. In my estimation, we spent more time twiddling with the grids to get them to do what we wanted than we would have spent building a grid system that did exactly what we needed. Just my $0.02 on that. Other tools may be easier to work with.
Did I forget anything?
The great thing about my experiments so far is that the back-end code is markup-agnostic, since it’s all Angular. We could drop any markup (div, table, ol, etc.) into place and with the correct Angular loop / variable references, it will still work fine with the existing JS.
It occurred to me that the print server will need to operate over WebSockets instead of XMLHttpRequest, so that the browser can respond to asynchronous, external events. For example, integrating an RFID pad which sends checkout commands to the browser as items are laid on the pad. Since the conversation is instigated by the RFID pad and not the browser, a simple XMLHttpRequest call/response setup will not suffice.
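The shape of that conversation is push-based: the browser registers handlers for message types, and externally instigated events are dispatched as they arrive, without the browser having asked for anything. A minimal sketch (all names illustrative):

```javascript
// A tiny message dispatcher: a WebSocket's onmessage handler would simply
// call dispatcher.dispatch(event.data) for each incoming message.
function makeDispatcher() {
    var handlers = {};
    return {
        on: function (type, fn) { handlers[type] = fn; },
        dispatch: function (msgJson) {
            var msg = JSON.parse(msgJson);
            if (handlers[msg.type]) return handlers[msg.type](msg.payload);
        }
    };
}
```

An RFID pad integration would then register a 'checkout' handler once, and each item laid on the pad would trigger it.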
Thanks to Warren A. Layton for testing the WebSockets install / code!
I’ve made a number of improvements to the WebSockets gateway, including bug fixes and a new configuration option, OSRF_WEBSOCKET_MAX_REQUEST_WAIT_TIME. This setting tells the gateway to give up on a lingering outstanding request if it’s taking far too long and it’s the only thing preventing the connection from being marked as idle. It’s mostly a security improvement, preventing otherwise-idle gateway processes from staying alive after a proxied request died on the back end with no response.
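For example (the location and value shown are illustrative; the setting is an environment variable read by the gateway, and the exact place it belongs in your Apache config may differ):

```apache
# Give up on a lingering outstanding request after 10 minutes if it is
# the only thing keeping the connection from being treated as idle.
SetEnv OSRF_WEBSOCKET_MAX_REQUEST_WAIT_TIME 600
```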
Author: Bill Erickson
It appears the version of libapr1 on Debian squeeze (found while testing on old dev server) has a bug which causes a segfault in the websockets Apache gateway.
http://svn.apache.org/viewvc/apr/apr/branches/1.4.x/CHANGES?view=markup
I'm pretty sure it's the item listed under APR 1.4.2. Squeeze reports that it has 1.4.2 installed, but the bug described fits the scenario. Suffice it to say, it's not a problem on Debian Wheezy.
Author: Bill Erickson
Some of the stuff I've been working on lately…
Mentioned in IRC, I posted a proof-of-concept Java print/storage service at http://git.evergreen-ils.org/?p=working/random.git;a=shortlog;h=refs/heads/collab/berick/hatch
It's built as a Jetty module, which allows us to publish a small API over HTTP directly on the workstation. To allow access from the browser, the module just has to return an HTTP "Access-Control-Allow-Origin" header whose value matches the Evergreen server (or "*" for testing).
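Concretely, a response from the local service would carry a header along these lines (the origin shown is a placeholder for your Evergreen server's URL):

```http
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://evergreen.example.org
Content-Type: application/json
```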
The module is designed as a standard HTTP GET/POST handler, meaning the browser communicates via XMLHttpRequest. However, Jetty also supports WebSockets, so the module could also work as a WebSocket handler if the need arises. Since it's not talking over the network, the value of WebSockets is not as high, so I took the easier route for now.
My impression thus far is that Java has a powerful and fairly easy to use print API. I had no trouble finding / selecting printers, setting margin sizes, and flowing long paragraphs of text (instead of chopping them off). I'm not sure yet how to go about printing more complicated elements, like images, nor am I clear on how important that is, but presumably it's doable.
The file storage components were trivial.
Also, Java is portable. The service runs fine in Windows, Mac, and Linux.
So far, I have high hopes.
Testing, bug fixes, documentation, and cleanup continue on https://bugs.launchpad.net/opensrf/+bug/1268619
Beware, I may soon be turning WebSockets on by default in the web staff proto branch so I can test it more thoroughly.
As an experiment, I pushed an alternate version of the patron search API to the web-staff-proto branch to further test the value of WebSockets (and of general API improvements). The new API returns a stream of patron objects instead of patron IDs. (We don't do this with XMLHttpRequest, because it would require the caller to wait for all responses to arrive before any could be rendered, and the response messages could become very large.) This speeds things up and requires the client to make considerably fewer network calls.
A traditional patron search of 50 results requires 102 OpenSRF messages and 102 individual network messages to complete. With WebSockets it takes 51 OpenSRF messages and about 12 (configurable) individual network messages. (When streaming, responses are "bundled" into small collections of responses as a form of I/O buffering, hence the 12 instead of 51 network messages).
In this specific example, patron results render about twice as fast. With properly built APIs, we should see comparable speedups on similarly shaped data sets (e.g. long lists of results).
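The message counts above work out as back-of-the-envelope arithmetic, where each OpenSRF call is one request message plus its response messages. (The bundle size of 5 below is an assumption chosen to roughly reproduce the ~12 network messages mentioned; the actual buffering is configurable.)

```javascript
var results = 50;

// Traditional: 1 search call (request + ID-list response), then one
// retrieve call (request + response) per patron.
var traditionalOsrf = 2 + results * 2;   // 102 OpenSRF messages
// With XMLHttpRequest, each OpenSRF message is its own network message.

// Streaming over WebSockets: 1 request plus one streamed response per patron.
var streamingOsrf = 1 + results;         // 51 OpenSRF messages
var bundleSize = 5;                      // assumed responses per network send
var streamingNetwork = 1 + Math.ceil(results / bundleSize); // ~11-12
```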