Saturday, December 27, 2008

Window Sizing Bug Fixed in BIRT 2.3.1

An annoying bug, described here, that caused the ODA window size to behave unpredictably no longer appears now that I've tested with BIRT 2.3.1. The User and Developer Guide has been updated to require BIRT 2.3.1, which is the latest version of BIRT and the direction we want to take with the plugin anyway.

Sunday, August 17, 2008

Wrapping Things Up

Since the GSOC deadline is today, I spent the last week polishing things up, uploading/committing the latest versions of everything, and working on documentation.

The following OpenMRS wiki pages have all been updated to reflect the latest status and implementation of the OpenMRS ODA and Logic Web Service:


I recorded an instructional video that walks through the creation of a data source and data set using the OpenMRS ODA, showing the different wizard pages and options available. However, I'm having issues getting it published to Flash on blip.tv. I'll create another post once I get this squared away.


I also put together three simple reports that illustrate how the three different data styles can be used:
  • Most Recent - Patient summary pages with their most recent data
  • Stacked - Graphs for each patient that track weight, height, temperature, and CD4 over time
  • Flat - XLS fact sheet with many columns where there is one row per patient


Any and all feedback is appreciated!

Sunday, August 10, 2008

Fixing Bugs, Javadocs, JUnit Tests, User Conference, and Enhanced Modifier Interface

Wow, that's a long title :). Things have been pretty hectic, so I actually missed my blog update last week. Allow me to catch up!

Bug Fixes

Tammy has been great about helping get the Logic Service up to speed. I identified at least five bugs in how the Logic Service was returning data, and Tammy promptly fixed each one.

Javadocs

I went through all of the BIRT ODA and Logic Web Service classes and added Javadocs. I used the JAutodoc Eclipse plugin to help with adding all of the Javadocs and OpenMRS headings to each class. We plan on initially hosting them on Justin Miranda's development machine: http://www.justinmiranda.com/.

JUnit Tests

With the developers on the verge of a junit-test-a-thon, I went through the BIRT ODA code and added 33 JUnit 4 tests that use the "should" keyword at the beginning of each test name. The tests mainly cover the back-end functionality of the BIRT ODA, such as building up and breaking down the Logic Service query. The plan is to add more self-contained tests that cover the Logic Service and Logic Web Service (right now, the Logic Service and Logic Web Service tests I've created require a running OpenMRS instance with specific data).
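
The "should" convention can be sketched as follows. This is a hypothetical illustration, not one of the actual 33 tests: the real tests are JUnit 4 (@Test-annotated), while plain assertions are used here to keep the example self-contained, and QueryHelper is an illustrative stand-in rather than the actual ODA helper class.

```java
// Hypothetical stand-in for the ODA's query-building helper
class QueryHelper {
    // Builds a query in the SELECT {token} FROM cohort format the ODA uses
    static String buildQuery(String token, String cohortId) {
        return "SELECT {" + token + "} FROM " + cohortId;
    }
}

public class QueryHelperTests {
    // Each test name starts with "should" and states the expected behavior
    static void shouldWrapTokenInCurlyBraces() {
        if (!QueryHelper.buildQuery("WEIGHT (KG)", "12")
                .equals("SELECT {WEIGHT (KG)} FROM 12")) {
            throw new AssertionError("token was not wrapped in curly braces");
        }
    }

    public static void main(String[] args) {
        shouldWrapTokenInCurlyBraces();
        System.out.println("all tests passed");
    }
}
```

Naming each test after the behavior it verifies makes a failing test read like a broken requirement.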

Actuate International User Conference

Last Monday (8/4/08) through Wednesday (8/6/08), I was at the Actuate International User Conference in Las Vegas. My mentor, Justin Miranda, was invited to present as part of BIRT Live Day. It was great because I finally got to meet Justin and discuss things face-to-face. Although the presentation Justin gave was meant for those not familiar with OpenMRS, I learned a lot as well. We were also able to demo the current ODA during the presentation.

Scott Rosenbaum of Innovent Solutions was also there so we were able to chat with him. He has been a great resource for this ODA project.

Enhanced Modifier Interface

Although the modifier interface allows the user to add multiple modifiers to any token, one of the key components missing from it was the ability to specify an aggregate for a given token. I've been delaying this because the Logic Service Parser only supports AGGREGATE {TOKEN} and not AGGREGATE X {TOKEN} style queries right now. However, I decided to go ahead and build this into the modifier page, resulting in the enhanced interface:


The selected tokens are still listed at the top of the modifier page. You can still click an individual token name to see its current modifiers at the bottom of the page and add modifiers as desired. To the left of each token are two drop downs. The first drop down is the aggregate (FIRST, LAST, MAX, and MIN) and the second is the value (1-10) for the aggregate. For instance, one may wish to get the last 8 weights recorded for patients (LAST 8 {WEIGHT (KG)}). The default aggregate setting when a token is first selected is LAST 1 (just the single most recent value). If a user selects or removes tokens on the token selection page, the next time the modifier page is visited, the user will see more or fewer token rows based on that selection.

In order to change the aggregate and aggregate value, you must select a data style other than the default, most recent. The ODA builds the aggregates and values into the Logic Service query, but right now this doesn't change how the data is returned very much. Since the Logic Service does not yet support these aggregate queries, I'm handling things differently for each data style:

  • Most Recent - The aggregate and aggregate value drop down boxes are disabled (they are greyed out and cannot be selected). By definition, the most recent data style will just get the most recent data for a token so there is no point in applying an aggregate.
  • Stacked - Does absolutely nothing. All of the data will still be returned for the stacked data style. This will be changed when the Logic Service is ready.
  • Flat - The aggregate is not considered at all, but the aggregate value is. So, if a user constructs a query with FIRST 4 WEIGHT and LAST 3 HEIGHT, the FIRST and LAST aggregates won't affect how the data is returned, but there will be 4 expanded columns for WEIGHT and 3 expanded columns for HEIGHT. Again, this will be changed to present the data as expected when the Logic Service is ready.

When the Logic Service supports AGGREGATE X {TOKEN} queries, the Logic Web Service should need only minor modifications to start using them.
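
The AGGREGATE X {TOKEN} fragment the two drop downs produce can be sketched like this. This is illustrative only (not the ODA's actual code), and the class and method names are assumptions:

```java
// Illustrative sketch of assembling an AGGREGATE X {TOKEN} fragment
// from the aggregate and value drop-down selections.
public class AggregateFragment {
    static String build(String aggregate, int value, String token) {
        return aggregate + " " + value + " {" + token + "}";
    }

    public static void main(String[] args) {
        // The "last 8 weights" example from the post
        System.out.println(build("LAST", 8, "WEIGHT (KG)"));  // LAST 8 {WEIGHT (KG)}
    }
}
```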

Now I'm off to more ODA polishing for the GSOC deadline that is closing in on me :)

Sunday, July 27, 2008

Data Styles and Token Splitting

This week, my mentor Justin, Scott Rosenbaum (a member of the BIRT PMC), and I had a web meeting to review the current functionality of the ODA. We got some really good feedback. Some of the main points:
  • The default data set should just show the most recent values for a selected token. This simplifies how the data is first returned and the data style can be changed from this if desired.
  • The ODA should support parameters. For instance, the user should be able to provide a parameter as a value in the modifier page. This will most likely be a project for after GSOC.
  • The more data the better. The default behavior should be to split the tokens by all four of the split values we have chosen to initially support.
  • A tree view to select the tokens would be nice. The branches would be the token tags and the leaves under the branches would be the appropriate tokens. This will also more than likely be a task for after GSOC.

As far as coding, I've added quite a bit of new functionality. The two basic additions can be categorized under data styles and token splitting.

Data Styles

There has been a lot of discussion regarding how to display the data to the user. The first column will always be the patient ID, but how the other columns are organized can vary. Rather than try to come up with the perfect data set, I've allowed the user to toggle between three different styles:

  1. Most recent. This is the default selection. There is a column displayed for every token/split combination that is selected. There is one row per patient displaying the most recent value for each token selection.
  2. Stacked. This is the EAV style of data where there is a KEY, VALUE, and appropriate splitter columns. There is the potential for multiple rows per patient/token if more than one value exists for a patient/token combination.
  3. Flat. This style will have the most columns and one row per patient. This style provides more information than the "Most recent" style by getting more than just the most recent value. Right now, it's hard-coded to return 5 values per token. Each of these 5 values can be split, producing even more columns. Eventually, when the Logic Service supports FIRST x and LAST x, the user will be able to choose this value instead of the hard-coded 5.
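
The flat style's column expansion can be sketched as below. This is a hypothetical illustration of expanding one token into its hard-coded five value columns; the column-naming scheme is an assumption, not the ODA's actual labels:

```java
// Hypothetical sketch of the flat style's per-token column expansion
import java.util.ArrayList;
import java.util.List;

public class FlatExpansion {
    static List<String> expand(String token, int valueCount) {
        List<String> columns = new ArrayList<String>();
        for (int i = 1; i <= valueCount; i++) {
            columns.add(token + "_" + i);  // e.g. WEIGHT_1 .. WEIGHT_5
        }
        return columns;
    }
}
```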

Token Splitting

Token splitting allows the user to get more data from a selected token than just that token's value. The following are the four additional "splits" we are initially supporting:

  1. Observation Date
  2. Observation Location
  3. Encounter Date
  4. Encounter Type

I have added a new page to the ODA that allows the user to select which splitters to use for each token (the default is to include all of the splitters). The interface is basically a grid of check boxes where the splitters make up the columns and the selected tokens make up the rows. This page dynamically builds itself based on the tokens added or removed over time. Here's an example of what it looks like:


Splitting the tokens is supported for all three data styles mentioned above under "Data Styles".
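
Expanding a token into its value column plus its selected splitter columns can be sketched like this. The " - " label format and class name are assumptions for illustration, not the ODA's actual code:

```java
// Illustrative sketch: one value column per token, plus one column
// per selected splitter (Observation Date, Encounter Type, etc.)
import java.util.ArrayList;
import java.util.List;

public class SplitterColumns {
    static List<String> columnsFor(String token, List<String> splitters) {
        List<String> columns = new ArrayList<String>();
        columns.add(token);  // the token's own value column
        for (String splitter : splitters) {
            columns.add(token + " - " + splitter);  // e.g. "WEIGHT - Observation Date"
        }
        return columns;
    }
}
```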

Sunday, July 20, 2008

Lots of Changes to ODA and Logic Web Service

Wow, there was a lot going on this week with the project. A lot of the discussion revolved around the Logic Service (especially with Burke and Tammy). Tammy cleared up a lot of questions I had about the Logic Service, and Burke created the beginnings of a LogicCriteria parser, which I was able to integrate with the ODA. Allow me to summarize all the changes and enhancements made to both the ODA and the Logic Web Service:

BIRT ODA

  • Removed the filter page and reintegrated the filter drop down back into the first page, so the user first selects a filter and then chooses tokens using the tag and search features.
  • Changed the way the data set wizard pages are presented to the user. Now, instead of having to go through all of the pages when initially creating a data set, only the token selection (first) page is shown. The user can access the more advanced pages from the edit data set dialog or by later reopening and editing an existing data set.
  • Added a helper class for easily tearing down and building back up queries, extracting certain pieces of the query, etc.
  • Changed all data set pages to generate the query in the format SELECT {token} optionalModifier x{token2} optionalModifier y... FROM cohortID.
  • Added a new data set page that allows the user to see the actual query that will be sent to the Logic Web Service as they keep changing their queries using the various data set wizard pages. Here's a query I created by selecting one of my cohorts and then various tokens including indicating that I wanted WEIGHT values that were less than 50 and TEMPERATURE that was greater than 40:

Logic Web Service
  • Added latest jars from logic refactoring branch.
  • Changed the data resource to accept new query format in the format of SELECT {token} optionalModifier x{token2} optionalModifier y... FROM cohortID.
  • Changed the call that populates the filter list to a single Context.getCohortService().getAllCohortDefinitions() call.
  • Used the new LogicCriteria.parse() method that Burke put together this week so that tokens and their modifiers are passed to this parser. The appropriate LogicCriteria is created and passed on to the logic service for evaluation and the results are passed back to the ODA.
  • Added helper class to help with getting information out of the URL request.
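
Tearing down a query in the SELECT ... FROM format can be sketched as below. This is a minimal illustration of the idea, not the actual helper class mentioned above:

```java
// Minimal sketch of extracting the pieces of a
// "SELECT {token} modifier ... FROM cohortID" query
public class QueryParts {
    // Returns the cohort id after FROM
    static String cohortOf(String query) {
        int from = query.lastIndexOf(" FROM ");
        return query.substring(from + " FROM ".length()).trim();
    }

    // Returns everything between SELECT and FROM (the token clause)
    static String selectClauseOf(String query) {
        int from = query.lastIndexOf(" FROM ");
        return query.substring("SELECT ".length(), from).trim();
    }
}
```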

The next step is adding a page, and modifications to the query, to allow the user to split the tokens into more than just the value, such as date and location. I'm going to use colons in the query after the token and modifiers to specify how to split the individual tokens.

Sunday, July 13, 2008

Midterm Review and Update

Thursday (7/10/08), I joined the developers' call to review the current status of my project. I got a lot of really good feedback from everyone and a new sense of direction for the project. The following is a basic list of some of the major points from the call and follow-up mailing list discussions:
  • Keep it simple and introduce complexity later if time allows. It's better to have a simple solution that actually works than a really feature rich solution that doesn't do anything.
  • As we add more functionality to the ODA, just creating an initial data set is a lot to throw at the user. When a user initially creates the data set, we just want to provide the first page that allows for token selection and the rest of the pages should not be seen. Then, the user should be able to go to the other pages via the edit data set interface to further refine the query.
  • The Modifier page is going to be redone. The top piece will basically allow a user to add an aggregate to the beginning (just FIRST and LAST for now). LAST will be the default. Then, the user will use the bottom half of the page to choose conditions to add to the query if desired.
  • A new query format is needed to support all of these new additions. We're moving toward a more SQL-looking query where the SELECT chooses the tokens and the FROM is the cohort. For each token in the SELECT, there will first be an aggregate (LAST), then the token name in curly brackets, followed by an optional condition. After the "aggregate {token} condition", there are pipe delimiters to indicate how to split the token (date, location, etc.). Finally, the desired cohort is in the FROM clause. More will probably be required later, but this is the simple format for now.

A lot of other details behind adding aggregates and modifiers to tokens were discussed. Check out the latest mock-up of the interface here. The modifier page (2) still needs more work to provide a better way to indicate how the conditions are applied.

One of the great breakthroughs this week was moving away from the Mock Logic Web Service to the "real" Logic Web Service. The main problem was that some concepts in my sample data set were missing names (more details in the bug here). There is also another problem where dynamic cohorts don't have an identifiable name, so I've removed those cohorts from the Logic Web Service for now and only the static cohorts are available. Anyway, it's great to be using the ODA and getting back real lists of tokens and actual data instead of the hard-coded values I was working with.

As far as coding this week, I added support on both the Logic Web Service and BIRT ODA side to add four columns to every token that is chosen:
  1. Observation Time
  2. Observation Location
  3. Encounter Time
  4. Encounter Type
So far, the user has no choice: all four of these columns are added no matter what is chosen through the interface (this will be addressed when the split selection page is added to the data set wizard). There's a major problem here, though, so it doesn't work quite like it should. Casting the Result object from the data query to an Obs object does not work, so I can't actually get any of the times, the location, or the type. Right now, it just returns the Result's date and the rest of the values are null. There's also a problem getting the datetime data type to work (I had to use Text for now).
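
A defensive version of that cast can be sketched as below. Result and Obs here are minimal stand-ins stubbed for illustration, not the real OpenMRS classes, and the field names are assumptions:

```java
// Generic sketch of guarding the Result-to-Obs cast with instanceof
import java.util.Date;

public class ResultCastSketch {
    static class Obs {
        Date obsDatetime;
        Obs(Date d) { obsDatetime = d; }
    }

    static class Result {
        Date resultDate;
        Object wrapped;  // whatever object the result actually wraps
        Result(Date d, Object w) { resultDate = d; wrapped = w; }
    }

    // Only cast when the wrapped object really is an Obs; otherwise fall
    // back to the Result's own date, which is what the ODA returns today.
    static Date observationDate(Result r) {
        if (r.wrapped instanceof Obs) {
            return ((Obs) r.wrapped).obsDatetime;
        }
        return r.resultDate;
    }
}
```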

I also fixed a nasty bug where the underlying class that holds the information regarding the token, filter, etc. was never flushed. This problem isn't noticeable unless you create a brand new data set and find that all your selections from the previous data set are still selected. I added cleanup() methods to all the data set pages that destroy the shared InformationHolder and added logic to reload the InformationHolder when saving the page.
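
The shared-holder pattern can be sketched like this. This is a hypothetical illustration of the idea (all wizard pages read one holder, and a reset flushes stale selections); the class and field names are assumptions, not the actual ODA code:

```java
// Hypothetical sketch of a shared InformationHolder that the
// cleanup() methods reset so a new data set starts empty
import java.util.ArrayList;
import java.util.List;

public class SharedState {
    static class InformationHolder {
        final List<String> selectedTokens = new ArrayList<String>();
        String filter;
    }

    private static InformationHolder holder = new InformationHolder();

    static InformationHolder get() { return holder; }

    // Called from each page's cleanup() so stale selections are flushed
    static void reset() { holder = new InformationHolder(); }
}
```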

In the immediate future, I'll be working on all the refactoring required to use the new query format. Hopefully soon, I can also create a page for choosing how and which columns to split.

Sunday, July 6, 2008

Persisting the Token Modifiers

This week I worked on persisting the user's selection of token modifiers on the back end so that they can be passed as part of the data query and reloaded later for further modification. The ODA interface hasn't changed from a graphical point of view, but all changes made to the modifier table are now recorded, saved, and used to make the data query.

Right now, the new URL request format for data looks something like this but will most likely change after further discussion:

(URL removed -> didn't show up correctly in HTML)

Here's one of the examples that I was testing with:

(URL removed -> see example in JUnit test at http://svn.openmrs.org/openmrs-modules/odamocklogicws/test/org/openmrs/module/odamocklogicws/TestMockWebService.java)

I added some support to the mock logic web service so it would know how to handle such requests. Since the actual logic service API isn't ready, I just tacked on the user's modifier requests to the beginning of the hard coded data. Here's a sample data preview after selecting some modifiers for a few given tokens:


Sunday, June 29, 2008

Breaking up the Data Set UI Across Multiple Pages

We decided that there were just too many functions to have them all on one page of the data set wizard. The UI is being broken up into three stages where the user 1) selects the tokens, 2) applies modifiers, and 3) applies filters/cohorts. So, most of the work I've done this week was refactoring all the code into three separate modules and getting all the framework in place to handle this new paradigm.

Here's the first page which provides a way to select tags for narrowing down the tokens, search the tokens, and move the desired tokens to the selection pane:

Here's the second page that lists the selected tokens and provides the interface for applying modifiers to the tokens:


Here's the final page that is really simple right now and just provides the drop down list of available filters/cohorts:


One of the main pain points has been how to keep all of the pages aware of the data input from the other pages as the user toggles between the pages. For instance, the modifier page needs to know which tokens have been selected on the token selection page so that the appropriate token list is displayed on the modifier page. I built a helper class for this so that these variables can be shared in real time and later written to the design as public properties so they will be loaded appropriately if the user wants to later edit the data set. I'm still ironing all of this out as there are a lot of subtle use cases to handle based on the context in which the data set is being modified, but the basic functionality is there.

Thursday, June 19, 2008

Developers Guide and Modifier Interface

One of the deliverables I put together this week was the BIRT ODA Plugin Developers Guide for OpenMRS. It contains basic guidelines and resources for those interested in changing or enhancing the OpenMRS BIRT ODA. Any feedback to make things clearer, add more steps, etc. is appreciated.

The most significant thing I did this week was delve even deeper into SWT and create the UI for adding modifications to individual tokens:


Basically, a user selects a token that has been moved to the "selected" list, and a list of modifiers available for that specific token is shown. There is a checkbox to the right of each modifier to indicate whether or not the modifier should be used for the given token. The right-most column of the modifier table is where the argument can be typed in as appropriate. The two up and down arrows to the left of the table allow the user to change the order of the modifiers, since the order may affect how the data is returned. The table and arrows also dynamically change their height and width to fit the ODA screen if the window size is changed.

There were several resources I used to help understand how to implement some of the different functionality:
The next step for the modifier piece is to setup an underlying data structure that keeps track of the modifiers for the different tokens and translate this into a URL request sent to the Logic Web Service (URL format TBD).

Thursday, June 12, 2008

Mocking Up the ODA GUI and Adding Token Search

I've been spending time getting more familiar with various components like the Logic Service API (still not working for me) and playing with the data export tool to get a feel for the different types of data that one would want to search for.

I spent some time creating an initial mock-up of the ODA GUI based on the functionality we're wanting to add:


One of the new features is that of adding modifiers to individual tokens. The user can select individual "Selected Tokens" and add and play with Modifiers for each token in the bottom window. Perhaps the "Last" modifier should be selected by default so that the data returned is current patient data by default. Maybe the Modifier section could even be hidden by default and the user can activate it by clicking "Advanced" mode or something similar.

The other new functionality shown in the mock-up, on the third "row", is the ability for a user to search amongst the available tokens. In an effort to learn SWT, which is the graphics technology used to create the ODA GUI, I implemented a search within the existing BIRT ODA:


You can see at the bottom of the GUI that the search for "DATE" populated the tokens list with all tokens containing the string "DATE". I didn't spend much time on how and where the search was actually laid out because this will most likely change, but I learned a lot about how to work with SWT. It also took a lot more time than I thought to implement something that seems so simple :) I also reorganized the GUI code some so it's easier to visualize the flow of how things are set up.

Thursday, June 5, 2008

Mock Logic Web Service for BIRT ODA is Ready

Since the logicws module is not working with the existing Logic Service API, we wanted to create a mock logicws module that would respond with dummy data. This way, we can continue development of the ODA while the Logic Service API is being worked on. The solution is a REST web service module that responds to the exact same URL queries from the existing BIRT ODA.

The initial odamocklogicws module is ready and can be checked out from SVN at http://svn.openmrs.org/openmrs-modules/odamocklogicws (I do not yet have access to add the module to the module repository). The project includes simple JUnit tests that use HttpUnit to make the URL requests to the servlet resources and print out the resulting fake XML data returned from the odamocklogicws. The different servlet resources include token, tokenTag, filter, and data. These tests can also be used when the real logicws module is working to easily see how it responds to URL requests.
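
The kind of canned XML the mock service hands back can be sketched as below. The element names here are assumptions for illustration, not the module's actual response schema:

```java
// Illustrative sketch of building a canned XML response for the
// token resource of a mock web service
public class MockTokenXml {
    static String tokensXml(String... tokens) {
        StringBuilder xml = new StringBuilder("<tokens>");
        for (String token : tokens) {
            xml.append("<token>").append(token).append("</token>");
        }
        return xml.append("</tokens>").toString();
    }
}
```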

The same steps in the BIRT ODA Plugin User Guide can be followed except one should use the new odamocklogicws module instead of the logicws for the BIRT ODA to work (with mock data).

Thursday, May 29, 2008

Logic Web Service API Details

Below are the details I interpreted from tracing through the various Logic Service methods, like getting filters and tags as well as constructing a query to fine-tune a request. I'd appreciate any feedback or comments on further details or any misinterpretations I made.

Also, the project requirements are still a work in progress but are outlined here.

Details on the API

The org.openmrs.logic.LogicService class is an interface. The org.openmrs.logic.impl.LogicServiceImpl class is the default implementation for this interface.

The get and add methods use the org.openmrs.logic.RuleFactory to return or add their data. As a temporary hack, when initialized, the RuleFactory adds the following hard coded rules: [AGE, BIRTHDATE, DEATH DATE, GENDER, HIV POSITIVE, CAUSE OF DEATH, DEAD, BIRTHDATE ESTIMATED].

The following are details on all the get methods:

  • getDefaultDataType(String token)
    • The data type for a token is returned via RuleFactory.
    • The getDefaultDatatype method of the org.openmrs.logic.rule.Rule interface is called on the token. The AgeRule, EnrolledBeforeDateRule, HIVPositiveRule, and ReferenceRule classes all implement this interface and return different data types as appropriate.
  • getLogicDataSource(String name)
  • getLogicDataSources(Map<String, LogicDataSource> logicDataSources)
    • Gets all the registered logic data sources via RuleFactory.
  • getParameterList(String token)
    • Returns the expected parameters for a rule under which the passed token is registered via RuleFactory.
    • The getParameterList method of the Rule interface is called on the token (similar to getDefaultDataType). The AgeRule, EnrolledBeforeDateRule, HIVPositiveRule, and ReferenceRule classes all implement this interface. All of these implementing classes return null when queried for the parameter list.
  • getRule(String token)
    • Pass the token under which the rule was registered.
    • Uses the RuleFactory class and returns an org.openmrs.logic.rule.ReferenceRule based on that token if the token starts with "%%".
  • getTagsByToken(String token)
    • Uses the RuleFactory class and returns all of the tags that are attached to a given token.
  • getTokens()
    • Returns the Set of all hardcoded rules via the RuleFactory.
  • getTokensByTag(String tag)
    • Returns the Set of all hardcoded rules that match the tag argument via the RuleFactory.

The following are the details on all the add methods:

  • addRule(String token, Rule rule)
    • Uses RuleFactory to first make sure that rule doesn't already exist.
    • If rule does not already exist, the rule is registered under the given token.
  • addRule(String token, String[] tags, Rule rule)
    • Same as the addRule method described above, except that it also assigns the given tag(s) to the rule at the same time.
  • addTokenTag(String token, String tag)
    • Uses RuleFactory to add a tag to a previously registered token.

There are many eval methods that are used to filter the token/tag data to return specific information. The org.openmrs.logic.LogicCriteria class is used to setup the criteria for the filtering. The eval methods are basically a series of methods that massage the query to eventually use org.openmrs.logic.LogicContext to evaluate a rule with LogicCriteria for a single patient. The evaluations can be done for a single patient or a list of patients (cohort). The following are more details on the eval methods:

  • Single patient
    • eval(Patient who, String token)
      • Gets information for a given token for a given patient.
      • A LogicCriteria object is created based on the token.
      • Uses eval(Patient who, LogicCriteria lc) discussed below.
    • eval(Patient who, String token, Map<String, Object> parameters)
      • Gets information for a given token and parameters for a given patient.
      • A LogicCriteria object is created based on the token and parameters.
      • Uses eval(Patient who, LogicCriteria lc) discussed below.
    • eval(Patient who, LogicCriteria criteria)
      • Evaluates a query for a given patient.
      • Uses eval(Patient who, LogicCriteria criteria, Map<String, Object> parameters) discussed below.
    • eval(Patient who, LogicCriteria criteria, Map<String, Object> parameters)
      • Creates a LogicContext object with the given patient.
      • The query is evaluated via the LogicContext object and an org.openmrs.logic.result.Result is returned.
  • List of patients
    • eval(Cohort who, String token)
      • Gets information for a given token for a list of patients.
      • A LogicCriteria object is created based on the token.
      • Uses eval(Cohort who, LogicCriteria criteria) discussed below.
    • eval(Cohort who, String token, Map<String, Object> parameters)
      • Gets information for a given token and parameters for a list of patients.
      • A LogicCriteria object is created based on the token and parameters.
      • Uses eval(Cohort who, LogicCriteria criteria) discussed below.
    • eval(Cohort who, LogicCriteria criteria)
      • Evaluates a query for a list of patients.
      • Uses eval(Cohort who, LogicCriteria criteria, Map<String, Object> parameters) discussed below.
    • eval(Cohort who, LogicCriteria criteria, Map<String, Object> parameters)
      • Creates a LogicContext object with the list of patients.
      • The query is evaluated via the LogicContext object for each patient id, and a Map of patient id/Result pairs is returned.
    • eval(Cohort who, List<LogicCriteria> criterias)
      • This is similar to the previous eval discussed above, except that it evaluates a collection of queries instead of a single query for a set of patients.
      • Each query is evaluated via the LogicContext object for each patient id, and each LogicCriteria is paired with a Map of patient id/Result pairs.

The LogicContext is what does the "work" for all the eval methods of the LogicService implementation discussed above. It evaluates a rule with criteria and parameters for a single patient. Each rule is evaluated as appropriate depending on the implementing class (AgeRule, EnrolledBeforeDateRule, HIVPositiveRule, and ReferenceRule). The Result from each class is put in a patient id/Result map. The LogicCriteria is then applied to these results via the private applyCriteria() method of the LogicContext class. For now, this method looks like it doesn't do anything and just returns the results without applying the criteria.
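
The cohort eval shape described above (one Result per patient id, returned as a map) can be sketched as follows. The evaluation itself is stubbed for illustration; the real code goes through LogicContext and returns Result objects rather than strings:

```java
// Minimal sketch of the cohort eval shape: the rule is evaluated per
// patient id and results come back as an id-to-result map
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CohortEvalSketch {
    static Map<Integer, String> eval(List<Integer> patientIds, String token) {
        Map<Integer, String> results = new LinkedHashMap<Integer, String>();
        for (Integer id : patientIds) {
            results.put(id, evaluateForPatient(id, token));  // one result per patient
        }
        return results;
    }

    // Placeholder for LogicContext evaluating the rule for one patient
    private static String evaluateForPatient(int id, String token) {
        return token + " result for patient " + id;
    }
}
```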

The following are some other miscellaneous methods:

  • findToken(String token)
    • Returns all known tokens based on the lookup string passed.
    • Uses the RuleFactory class to look through the registered tokens and return the Set that matches the lookup string.
  • findTags(String partialTag)
    • Returns a Set of tags based on the lookup string passed.
    • Uses the RuleFactory class to look through the tags and find tags that match the lookup string.
  • updateRule(String token, Rule rule)
    • Looks up the rule based on the token passed and replaces it with the rule passed.
    • The RuleFactory class makes sure the rule exists and updates it with the new rule if it does.
  • removeRule(String token)
    • Removes a rule for the token passed.
    • The RuleFactory class makes sure that the rule to be removed exists and removes it from the rule map if it does.
  • removeTokenTag(String token, String tag)
    • Removes a token's previously assigned tag.
    • The RuleFactory class makes sure that the token had the given tag and then removes it.
  • registerLogicDataSource(String name, LogicDataSource logicDataSource)
    • Adds a LogicDataSource to the logic service with the given name.
  • setLogicDataSources()
    • Adds multiple LogicDataSource objects to the current data sources in the logic service.
    • Uses registerLogicDataSource method discussed above.
  • removeLogicDataSource()
    • Removes a logic data source from the list.

Monday, May 19, 2008

Trying Out Last Year's GSOC OpenMRS ODA

I started out by following the instructions at http://openmrs.org/wiki/BIRT_ODA_Plugin_User_Guide in an attempt to setup and test the ODA with the Logic Web Service module.

In the OpenMRS Administration interface, it indicates that you should change module.allow_web_admin to true in the runtime properties in order to upload modules. After a rebuild, it still wouldn't let me upload the module. I went to the "Module Properties" page, which indicated that I needed module.allow_upload set to true. I set this in the runtime properties and did another rebuild. This still didn't work for me, so I just dropped the module in C:\Application Data\OpenMRS\modules and restarted. The module showed up in the "Manage Module" section, so it looks like that worked.

I dropped the ODA jars into my BIRT installation and set up a new OpenMRS data source. As my mentor, Justin, suspected, the plugin did not work with the current version of the Logic Web Service; it threw an "Expected API functions do not respond correctly" exception.

I checked out the OpenMRS ODA plugin code and saw that the failure was happening in the canAccessAPI() method of the Connection class. I added some exception handling to log info if the problem was an IOException, built and deployed the modified OpenMRS ODA, and restarted Eclipse. This new exception handling showed me that the failure was happening when trying to access http://localhost:8080/openmrs/moduleServlet/logicws/api/getFilters. Accessing this in my browser gave me a NoSuchMethodError for "org.openmrs.api.context.Context.getReportService()Lorg/openmrs/reporting/ReportService". Tracking this down in the LWS code seems to point to the getAllPatientFilters method. I'm wondering whether I need to install the ReportService with my OpenMRS deployment.
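The kind of check-plus-logging I added can be sketched as follows. This is my own illustrative version, not the ODA's actual Connection code — the class and method names are assumptions.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of a canAccessAPI()-style reachability check with
// the extra IOException logging described above. Not the actual ODA code.
public class ApiCheck {
    private static final Logger log = Logger.getLogger(ApiCheck.class.getName());

    public static boolean canAccessApi(String apiUrl) {
        try (InputStream in = new URL(apiUrl).openStream()) {
            return in.read() != -1; // got some response back
        } catch (IOException e) {
            // Logging the failing URL (not just a generic failure) is what
            // exposed the getFilters call as the culprit in my case.
            log.log(Level.WARNING, "API check failed for " + apiUrl, e);
            return false;
        }
    }
}
```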

Based on my conversations with Justin, he was expecting problems like this. So it looks like one of the first steps for this project will be to get a mock web service working so that ODA development can commence. I was pointed to the following resources for information on modifying/creating a new LWS module:

We also plan on nailing down more of the requirements for this project this week.

Sunday, May 11, 2008

Getting Familiar with BIRT ODA

To really get familiar with an Eclipse ODA, I decided to go through the three-part "ODA Extensions and BIRT" series by Scott Rosenbaum and Jason Weathersby. It is not publicly linked but can be accessed by registering. The tutorial is spread across volumes 8, 9, and 14 of the Eclipse magazine.

The end result of the ODA is the ability to use Google Spreadsheets as a data source. This is particularly appropriate for me because I am wrapping up a big networking project where we have been storing the simulation results in a Google Spreadsheet. The project involves the effects of node density on packet delivery ratio for three different wireless routing protocols.

Going through this ODA guide exposed me to Eclipse plugin and ODA concepts and terminology that will be helpful for this project. With respect to ODA, a simple but important point seems to be the separation of design time and runtime in the ODA. The first part of the article focuses mainly on the runtime, the second on the design time (GUI), and the third on adding logging, optimization, data types, and parameters.

I think the most valuable lesson I learned was about troubleshooting an ODA. Setting up the Logger for the ODA was crucial to solving a snag I ran into. I had the piece of the ODA working that identifies all of the user's spreadsheets and lets you drill down and select an individual sheet for the query, but the data preview kept failing with a CannotExecuteStatement error. Needless to say, this generic error didn't help identify the root cause. The ODA's log showed it was throwing a com.google.gdata.util.ServiceException. A few alterations to the ODA later, I could see the entire stack trace, which indicated a com.google.gdata.util.InvalidEntryException (extends ServiceException) — typically a sign of a bad or malformed request. The final step was to log the actual query being made, which revealed that the filterClause data set parameter was defaulting to 'dummy default value'. Changing this to null allowed the data preview to work. See the image above that shows the node density and delivery ratio data from my project spreadsheet.
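The general debugging pattern here — digging past a generic top-level error to the exception that actually caused it — can be captured in a small helper. This is an illustrative sketch in plain Java, not anything from the ODA or gdata libraries:

```java
// Illustrative helper: walk an exception's cause chain to find the root
// cause, instead of reporting only the generic top-level error.
public class RootCause {
    public static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        // Guard against self-referential cause chains.
        while (cur.getCause() != null && cur.getCause() != cur)
            cur = cur.getCause();
        return cur;
    }
}
```

Logging the result of something like this (plus the full stack trace) is what turned "CannotExecuteStatement" into the much more actionable InvalidEntryException.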

The next thing I am going to do is get, build, and become familiar with the OpenMRS ODA. I've also started adding links that are useful to this project to the right navigation area of this blog.

Monday, May 5, 2008

OpenMRS Fired Up and Running on the Laptop

I finally have a little breathing room from my huge school project and wanted to set up OpenMRS on my laptop. I basically followed the steps from http://openmrs.org/wiki/Step-by-Step_Installation_for_Developers and got things up and running. Some basic notes on the versions of the software I set up:

  • Fresh install of Eclipse 3.3.2 with Subclipse 1.2.4. Running Java 1.6.0_05-b13 but set the compliance level to 1.5.
  • Installed MySQL 5.0.51b and configured as a multifunctional db with 20 concurrent connections. Decided to make UTF8 the default character set.
  • Installed the MySQL GUI tools so I could easily look at the db model if required.
  • ant 1.7.0
  • Tomcat 6.0.16

To make sure things were all working correctly, I made myself a patient in the system:

Alright, back to running network simulations for school. Only 10 more days!

Thursday, April 24, 2008

Accepted to GSOC 2008 with OpenMRS!

I just found out this week that I was accepted into this summer's GSOC and will be working with OpenMRS on extending their BIRT ODA from last year's GSOC. My mentor is Justin Miranda and I'm really looking forward to coding this summer under his guidance. I will be blogging my progress for this project here. The following is the abstract for this project that was accepted:

A high-level description for this project is available at http://openmrs.org/wiki/Projects#Extend_OpenMRS_ODA_Plugin. Basically, the goal of this project is to further improve upon the ODA that was created during last year's GSOC (http://openmrs.org/wiki/BIRT_ODA_Plugin_User_Guide). The ODA is a BIRT plugin that uses OpenMRS's Logic Web Service as a data source.

Additional optimizations need to be made in how tokens are handled. Currently, tokens can be filtered by tag, but it would be nice to let users search for a particular token name using a regular expression. The list of tokens and their tag relationships should be cached on the client side so that the web service does not have to be hit every time a tag is selected or a search is run. The CD4 Count token is a multi-value token; the interface should "explode" the multiple values to show the date as well as the actual value of each CD4 count. The user should also be able to change the default datatype of a token to another datatype.
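The client-side cache plus regex search could look roughly like this. It is a sketch under my own assumptions: the fetch supplier stands in for the real Logic Web Service call, and all names here are illustrative.

```java
import java.util.*;
import java.util.function.Supplier;
import java.util.regex.Pattern;

// Sketch of the proposed client-side token cache with regex search.
// The Supplier stands in for the Logic Web Service token/tag call.
public class TokenCache {
    private final Supplier<Map<String, Set<String>>> fetch; // token -> tags
    private Map<String, Set<String>> cached;

    public TokenCache(Supplier<Map<String, Set<String>>> fetch) { this.fetch = fetch; }

    private Map<String, Set<String>> tokens() {
        if (cached == null) cached = fetch.get(); // hit the service only once
        return cached;
    }

    // Regex search over token names, entirely client-side.
    public List<String> search(String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> hits = new ArrayList<>();
        for (String token : new TreeSet<>(tokens().keySet()))
            if (p.matcher(token).find()) hits.add(token);
        return hits;
    }

    // Tag filtering also served from the cache, not the web service.
    public List<String> byTag(String tag) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : tokens().entrySet())
            if (e.getValue().contains(tag)) hits.add(e.getKey());
        Collections.sort(hits);
        return hits;
    }
}
```

Both searching and tag filtering go against the cached map, so after the first lookup the web service is never hit again for the token list.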

The Logic Web Service will also need to be enhanced to support more tokens as required. It should handle all of the methods available in the Logic Web Service's existing interface so that the ODA can leverage any of those APIs that could be useful. A more complex query builder is desired on the ODA side that would take advantage of all of the LWS's operator/modifier APIs that are appropriate for a given token/datatype. Also, the web service should offer an option to omit tokens from the results when they contain no data.
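One way the query builder could restrict operators/modifiers to those valid for a token's datatype is a simple lookup table. The datatype and operator names below are assumptions for illustration, not the LWS API:

```java
import java.util.*;

// Sketch of datatype-aware operator selection for the query builder.
// Datatype and operator names are illustrative, not the LWS API.
public class OperatorRegistry {
    private static final Map<String, List<String>> BY_DATATYPE = Map.of(
        "numeric", List.of("EQUALS", "GT", "LT", "GTE", "LTE", "LAST", "FIRST"),
        "coded",   List.of("EQUALS", "IN", "LAST", "FIRST"),
        "text",    List.of("EQUALS", "CONTAINS", "LAST", "FIRST"),
        "date",    List.of("BEFORE", "AFTER", "LAST", "FIRST"));

    // The wizard would call this to decide which operators to offer
    // for the selected token's datatype.
    public static List<String> operatorsFor(String datatype) {
        return BY_DATATYPE.getOrDefault(datatype.toLowerCase(Locale.ROOT), List.of());
    }
}
```

The point of the design is that the GUI never offers an operator the given token's datatype can't support, so malformed queries are caught before they reach the web service.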

This project would also include developing a simple mechanism/framework to automatically upload BIRT report designs to the appropriate directory on the OpenMRS server. This will allow for quick and easy deployment. Another usability improvement would be easier integration of the BIRT report itself. The data source can be configured such that the connection details to the OpenMRS server are set at runtime of the report. This would involve both ODA and report design improvements/changes. Common data sets could also be configured in the BIRT report design via BIRT templates. Using templates would streamline and simplify report design for creating new reports: a report designer could just drag and drop the individual data items within the pre-created data sets in the BIRT template.
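At its simplest, the report-design upload step amounts to copying a .rptdesign file into the server's report directory. A minimal sketch, where the target directory and class name are my assumptions:

```java
import java.io.IOException;
import java.nio.file.*;

// Minimal sketch of the proposed "deploy report design" step: copy a
// .rptdesign file into the server's report directory. The directory
// layout and class name here are assumptions for illustration.
public class ReportDeployer {
    public static Path deploy(Path design, Path reportDir) throws IOException {
        if (!design.getFileName().toString().endsWith(".rptdesign"))
            throw new IllegalArgumentException("Not a BIRT report design: " + design);
        Files.createDirectories(reportDir);
        // Overwrite any older copy of the same report design.
        return Files.copy(design, reportDir.resolve(design.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);
    }
}
```

A real mechanism would push the file over HTTP or the module API rather than assume filesystem access to the server, but the copy-into-directory shape is the core of it.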

When these ODA / Logic Web Service changes have been implemented, I would like to develop some useful reports to show off the new functionality. We can create reports that were previously too troublesome to design, or whose runtime parameters were hard to implement, when only the JDBC option was available. We could also recreate the data access portion of existing BIRT reports with the new ODA and make them better.