Friday, October 31, 2008

Architecture Diagrams for AMDSS

As part of the project planning that John Stinn is doing, I'm starting to draft some architecture diagrams for how the AMDS services (AMDSS) will be built, accessed, and deployed. The first in this series is a deployment diagram for CDC-facing and Partner-facing components. This is all fairly vanilla UML 2.x diagram notation (see the spec or Wikipedia in case anyone is interested).

Please let me know your comments.

Thursday, October 30, 2008

We have polygons

We have polygons, and they are colored according to the number of calls within each one.

Tomorrow, I work on making popups depending on which polygon was clicked.

Wednesday, October 29, 2008

Oh those wacky geolocations.

So, yesterday I spent some time working up a nifty little display to make sure all the counties were showing up the way I needed them to.

I found out that most (about 45) of the counties in Colorado actually showed up in Colorado... and some of them were showing up in other states.

That's because I made the very naive assumption that county names do not repeat in the US. And, like the assumption that birthdays don't cluster, it came as a nasty surprise. Thankfully, because I made sure to maintain the poicondai-loader project the way I did, it only took me the better part of an evening to change the loading scheme so that it supports county, state, and geolocation data.
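Since the loader now has to key on more than the county name, a composite key is the natural fix. Below is a hedged sketch of the idea; the CountyKey class and its fields are illustrative, not the actual poicondai-loader code:

```java
// Illustrative sketch: disambiguating counties by (state, county) rather than
// county name alone. Class and field names are assumptions for this example.
import java.util.HashMap;
import java.util.Map;

public class CountyKey {
    final String state;   // e.g. "CO"
    final String county;  // e.g. "Jefferson"

    CountyKey(String state, String county) {
        this.state = state;
        this.county = county;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof CountyKey)) return false;
        CountyKey k = (CountyKey) o;
        return state.equals(k.state) && county.equals(k.county);
    }

    @Override public int hashCode() {
        return 31 * state.hashCode() + county.hashCode();
    }

    public static void main(String[] args) {
        // "Jefferson" alone is ambiguous: it exists in CO, AL, and many others.
        Map<CountyKey, Integer> counts = new HashMap<>();
        counts.put(new CountyKey("CO", "Jefferson"), 12);
        counts.put(new CountyKey("AL", "Jefferson"), 7);
        System.out.println(counts.size()); // two distinct counties, not one
    }
}
```

Keying every lookup on the pair keeps same-named counties from colliding, which is exactly the surprise described above.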

The rest of today has been spent setting up some utility classes for pulling what I am calling PoiConCounts. I spent a lot of thought-chewing time figuring out the granularity of the objects for optimal caching and pulling (and many thanks to Brian for busting me out of the loop I was finding myself in) and just settled on "the thing that is going to be displayed". I have made this flexible, in case the way things need to be displayed or stored ever changes.

Tomorrow I will be trying to get that first county yank into a map with appropriate shading. If that goes well I will probably be ambivalent about whether to try and get the histogram-popup working or move on to zipcodes. But let's not fillet those fish yet...

Monday, October 27, 2008

Maps and Polygons and JDBC, oh my.

Today has been an odd day of sorts, but I got a lot done.

First, the big thing... I got the connectivity to the database working and drawing polygons. Now the next big thing is going to be having a poicondai-web page that sorts everything out by counties and tags the resulting polygons with pop-up counts.

Then, the next big thing is going to be implementing modular pop-ups that, instead of just showing counts, show a Google chart for the selected area with a histogram and what-not. But that might get overridden by the zip-code encoding work.

Otherwise, I am in danger of hitting that ambivalence trap. I spent some time sprucing up the old poicondai chart-based application, and then found out it would have been ostensibly better to get the new map application working... but at the same time, the sprucing-up I did will probably be pulled into the new poicondai-maps application. There are also lots of better ways I could have set up the database pulling (I might just replace the whole JDBC framework with Hibernate), and among all the different things I could do, I seem to be focusing on the most visually nifty and the most annoying to code.

Also, I have lots of little doubts. Despite many days of concept proofing I keep worrying about some functionality not being there and throwing me back to the drawing board (what if I cannot assign click handlers to polygons? Stuff like that).

But like most things... I find that the harder I work, the more luck I seem to have... and it's not like the loops and caches I would build aren't going to be useful if I have to move from a polygon-based map to a tile-based map.

So wish me luck... now I am just trying to get the shapes on the map attached to the numbers coming back from NPDS.

Friday, October 24, 2008

VMWare Appliance Now available online

Dan made a VMWare Appliance of the PHGrid Globus node and it's available at

If you download both of the files here, you can run a SUSE installation of the Globus Toolkit. This will make configuration easier, as all you have to do is make a few configuration changes (set the IP, hostname, and user accounts; request a user and host cert; etc.).

This will reduce the install time even further (to around 5 minutes) and allow PHGrid nodes to run on Windows (through the free VMware Player).

This first release is still pretty large (4.5GB download, 8GB necessary to run) so it may take you a while to download (took me over 30 minutes to upload).

The data, it is in the database.

So today, I managed to get the polygons for the ~3500 counties into a database.

I used maven (and found some places where maven was being very odd). I used SAX (and found some places where SAX was being very odd). I used eclipse (and.. well, you get the idea).

It was one of those weird coding days... where I was pressed on only by the faith that I could get it working and that my brain had it all set up in my mind. Thankfully, that seems to have been the case. I was going to try to brute-force regex/parse the XML string into the objects I needed without using any sort of XML parser (because I wasn't that familiar with SAX/DOM/etc.), but I found I couldn't really think of a way to do it... and that is a good thing, because I'm sure whatever I would have turned out would have been a candidate for one of those "LOL at this horrible code" websites.

Instead, I have a little SAX parser that is elegant, fast, and coupled to a little prepared-statement inserter that is also elegant and fast. And I should be able to use it relatively quickly for when I need to load zipcodes.
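As a rough illustration of that pipeline, here is a minimal SAX handler in the same spirit; the element name ("coordinates") and the in-memory list standing in for the prepared-statement inserter are assumptions, not the real schema or code:

```java
// Hedged sketch of the load pipeline: a SAX handler that pulls polygon
// coordinate strings out of a KML-style document. In the real loader each
// polygon would go to a JDBC PreparedStatement batch instead of a list.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class PolygonLoader extends DefaultHandler {
    private final StringBuilder text = new StringBuilder();
    private boolean inCoords = false;
    final List<String> polygons = new ArrayList<>();

    @Override public void startElement(String uri, String local, String qName, Attributes a) {
        if (qName.equals("coordinates")) { inCoords = true; text.setLength(0); }
    }
    @Override public void characters(char[] ch, int start, int len) {
        if (inCoords) text.append(ch, start, len);
    }
    @Override public void endElement(String uri, String local, String qName) {
        if (qName.equals("coordinates")) {
            inCoords = false;
            polygons.add(text.toString().trim()); // hand off one polygon
        }
    }

    public static List<String> parse(String xml) {
        try {
            PolygonLoader h = new PolygonLoader();
            SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), h);
            return h.polygons;
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        String xml = "<kml><Placemark><coordinates>-105.1,39.5 -105.2,39.6"
                   + "</coordinates></Placemark></kml>";
        System.out.println(parse(xml)); // [-105.1,39.5 -105.2,39.6]
    }
}
```

The streaming style is what makes SAX fast here: nothing but the current coordinate string is held in memory, which matters when loading ~3500 county polygons.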

Now, the next big step ahead of me is to finalize the "pull these counties and then draw these counties" map for poicondai. Then I also have to do the same thing for zip codes. And then make everything look better.

Here's to demonstrations.

Aggregate MDS Service planning

Tom and John presented the FY09 plan to the NCPHI governance council last week. A rather decent size block of work that the NCPHI R&D lab is planning is the development of a set of summary data services and associated coordination services.

These services will be an extension of the RODSA-DAI work and will provide access to various biosurveillance systems hosted by partners out in the states. Each service will return a set of aggregate syndrome counts that map to a new common data structure that, for now, we are calling the "Aggregate Minimum Data Set". The AMDS will use the AHIC/HITSP MDS and select the minimum useful fields for aggregate reporting. The work on developing a scientifically vetted AMDS is in progress and so far involves the Centers of Excellence and CDC BioSense personnel.

So the idea is that there will be multiple implementations of the AMDS services, one for each participating biosurveillance system, and then a set of coordination services that know how to run a federated query across the participating nodes and combine the results. This is basically what the PHDGInet and RODSA-DAIWeb demos showed as a proof of principle, but 2009 will bring this to a proper pilot by developing services for actual installed biosurveillance systems running at partner sites.

Here's a list of the initial set of services that will support the pilot:

  • AMDSX-DAI (where X is one service for each participating system; in 2009, probably 5 different implementations are planned, for RODSA, ESSENCE, BioSense, and specific state systems)

  • AMDSCoordinator.RunFederatedQuery

  • AMDSCoordinator.QueryAvailableCoverageArea

Thursday, October 23, 2008

poicondai, and ESRI GIS presentation.

Poicondai is coming along nicely. It's not as far along as I would like, but then again there were a lot of things I discovered that needed to be done that I did not anticipate (building a mini-custom GIS database for the sake of drawing polygons with simple Google maps). Today I am building a little builder that will build the database I need, and I have proven that I can build the overlays I need in Google maps and they look sorta cool.

Otherwise, I have also been attending security meetings and I attended an ESRI-hosted GIS presentation where I learned the differences between Web Mapping Services (WMS) with support for Style Layer Descriptors (SLD), Web Coverages Services (WCS), and WFS (Web Feature Service) and the uber nifty WFS transaction abilities.

In short, all three sets of acronyms are standards approved by the Open Geospatial Consortium (i.e., they are accepted standards) that let a client seeking geospatial data better define what it needs.

WMS pretty much allows for creating overlays and polygons over maps, and the added SLDs allow a client asking for data to say "I want these lines to be blue" or "I want these measurements made in metric and English and I want both returned".

WFS allows for more discrete geospatial objects, like "this is a trail" or "this is a path in the woods" or "these are all the rest areas in the park". They are vector-based, and when transactional, WFS allows the client to actually change the data in the hosting database (hence, if a trail is off, it can be corrected... after being locked, adjusted, and committed).

WCS is a way of asking for area coverages using raster projections: hence shaded areas and change diagrams (e.g., how Mount St. Helens looked before and after the eruption... with pretty colors!)
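To make the request style concrete, here is a hypothetical sketch of assembling a WMS GetMap URL in Java. The parameter names follow the OGC WMS 1.1.1 convention; the host, layer, and bounding box values are placeholders:

```java
// Illustrative builder for a WMS 1.1.1 GetMap request URL. A WMS server
// answering this request returns a rendered map image for the given layer
// and bounding box. Host and layer names here are made up.
public class WmsGetMap {
    public static String getMapUrl(String host, String layer, String bbox) {
        return host + "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
             + "&LAYERS=" + layer
             + "&SRS=EPSG:4326&BBOX=" + bbox          // lon/lat bounding box
             + "&WIDTH=512&HEIGHT=512&FORMAT=image/png";
    }

    public static void main(String[] args) {
        // A hypothetical request for a "counties" layer over Colorado.
        System.out.println(getMapUrl("http://example.org/wms",
                                     "counties", "-109,37,-102,41"));
    }
}
```

An SLD would ride along as an extra parameter pointing at a style document, which is how the "I want these lines to be blue" requests get expressed.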

ESRI makes server and client products that deal with all the different flavors and visualizations, and even support lots of open source and/or free visualization and server products.

The other cool thing is that KML pretty much supports data from all the methods and encompasses all the cool ways of serializing the GIS data. KML sort of sits between server strength and client strength values and serves as a major transport language.

So yeah, tomorrow I am hoping for polygon databases and displays.

Wednesday, October 22, 2008

Globus service testing using SoapUI

One of the tools I like to use to test web services is SoapUI (because it is free, open source, 100% Java, and other superlatives). So far it is useful for calling public (anonymous access) services, but for secure services it didn't work out.

From the gt-users mailing list, Joel Scheider emailed me to let me know:

Using soapUI, it is possible to pass a client-side SSL credential to a web service, e.g., for GSI Transport (TLS) authentication, but it's necessary to first convert the public/private key into Java keystore format, as described in Appendix A of this document:

Instead of creating a proxy certificate, this method uses the client certificate directly, so delegation is not supported, but TLS authentication still works.

soapUI also claims to support WS-Security, but I haven't personally tried using that feature yet.

Thought this may be helpful for anyone else who needs a quick way to call Globus services.
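For anyone automating that keystore step, the conversion can also be sketched in plain Java with the standard KeyStore API. This is an illustrative outline, not the procedure from the document Joel references; the password and the empty-store round trip in main exist only to show the mechanics:

```java
// Hedged sketch: copy the entries of a PKCS12 keystore into a JKS keystore,
// which is the "Java keystore format" soapUI expects.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.util.Collections;

public class KeystoreConvert {
    // Load a PKCS12 stream and re-home its key and certificate entries in JKS.
    public static KeyStore toJks(InputStream p12, char[] pass) throws Exception {
        KeyStore src = KeyStore.getInstance("PKCS12");
        src.load(p12, pass);
        KeyStore dst = KeyStore.getInstance("JKS");
        dst.load(null, null); // start an empty JKS store
        for (String alias : Collections.list(src.aliases())) {
            if (src.isKeyEntry(alias)) {
                dst.setKeyEntry(alias, src.getKey(alias, pass), pass,
                                src.getCertificateChain(alias));
            } else {
                dst.setCertificateEntry(alias, src.getCertificate(alias));
            }
        }
        return dst;
    }

    // Round-trip an empty PKCS12 store just to demonstrate the call sequence.
    public static int demo() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            KeyStore empty = KeyStore.getInstance("PKCS12");
            empty.load(null, null);
            empty.store(buf, "changeit".toCharArray());
            return toJks(new ByteArrayInputStream(buf.toByteArray()),
                         "changeit".toCharArray()).size();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 0 entries in, 0 entries out
    }
}
```

In practice you would feed `toJks` the exported .p12 file and then `store(...)` the result to disk for soapUI to pick up.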

Tuesday, October 21, 2008

new techs on old demo

Today I managed to finish sprucing up the poicondai demo with better logging and better handling of the new zipcode lists. I also started researching the jQuery show/hide toggle so I can show/hide the raw zip-code list unless data is to be specified.

Tomorrow I hope to complete the show/hide and get a preliminary set of geographical overlays into poicondai with count bubbles. It will look a lot like rodsadai.

Otherwise, today we had a cool meeting on security. We are going to be doing a lot of data and application classification, laying out all of the different standards and policies... and just looking upon our data with more of a "security" eye. Thus, I am meeting a lot of new security-focused folks who ask the important, retrospectively obvious questions about what services are present, what sorts of fields are used, and whether they might be dangerous when boxed up and shipped somewhere to someone who hasn't seen them before.

DiSTRIBuTE's Aggregate Data Model

John Stinn reminded me of DiSTRIBuTE's Aggregate Data Model. This summer, the PHGrid team met with Ross Lazarus and the BioSense epidemiologists about the potential structure of an Aggregate Minimum Data Set (written about earlier on this blog).

The International Society for Disease Surveillance has a project called DiSTRIBuTE (aside to future project namers- name your project something that is easily googlable) that seeks to collect aggregate data on ILI. DiSTRIBuTE uses a minimal aggregate data structure of:
| date | zip3 | age group | fever count | denominator |

This is similar to that proposed by Dr. Lazarus for use at Harvard ESP.
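For illustration, the row structure above could be modeled in Java along these lines; the class and field names are my own, not from DiSTRIBuTE or any project code:

```java
// Illustrative model of one DiSTRIBuTE-style aggregate row:
// | date | zip3 | age group | fever count | denominator |
public class AggregateRow {
    final String date;      // report date, e.g. "2008-10-20"
    final String zip3;      // first three digits of the ZIP code
    final String ageGroup;  // e.g. "18-44"
    final int feverCount;   // numerator: fever (ILI) visits in the stratum
    final int denominator;  // total visits in the same stratum

    AggregateRow(String date, String zip3, String ageGroup,
                 int feverCount, int denominator) {
        this.date = date; this.zip3 = zip3; this.ageGroup = ageGroup;
        this.feverCount = feverCount; this.denominator = denominator;
    }

    // Carrying the denominator is what makes rates comparable across sites.
    double rate() { return (double) feverCount / denominator; }

    public static void main(String[] args) {
        AggregateRow r = new AggregateRow("2008-10-20", "303", "18-44", 5, 200);
        System.out.println(r.rate()); // 0.025
    }
}
```

Note how little the row carries: no patient identifiers at all, which is the "minimum" in Aggregate Minimum Data Set.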

Tracking Changes

The phgrid sourceforge project is now configured to use sourceforge tracking to track changes to the PHGrid projects. So far, there are two active trackers: Feature Requests and Bugs.

The group has been very active in developing services and demo apps; now that activity will be even more transparent, with changes tracked as they are submitted, prioritized, assigned, developed, and tested.

The sourceforge tracker has rather obvious limitations (fixed workflow, no assigned tester field, on and on) but it's free. Any suggestions are appreciated.

This can also be used by the community to suggest new features, services or bugfixes.

To support this tracking, I'm recapping our development workflow below (which is available as a pretty graphic on the wiki):

  1. User enters a Feature/Bug with a description (with or without a use case)

  2. Admin prioritizes and assigns the change

  3. Developer creates (or updates, if a use case exists) and posts a use case; the use case is reviewed

  4. Developer codes the change in a new branch with automated unit testing (JUnit, etc.)

  5. Developer assigns the change to a tester (other than the developer)

  6. Tester reviews the code and either approves (go to #7) or notes required changes (go to #3)

  7. Developer commits the change to trunk/release

  8. Admin closes the Feature/Bug

Friday, October 17, 2008

Change, Test, Repeat

Today I have a wee bit more functionality in the advanced search, and I am getting some good leads on how to get and modify GIS polygonal data. It's neat stuff!

But, something else was brought to my attention today: The plans to get a tracker system behind our changes. This is a very good and cool thing because it means that the people interested in our projects will have a well documented and leading practice way to suggest features and report bugs. Also, it means we'll start being a bit more formal about what we are working on, and invite other developers and users to test our stuff, making it that much more robust and usable.

This means it's becoming more real.

Thursday, October 16, 2008

Troubleshooting Northrop Grumman Node

Troubleshooting the Northrop Grumman Node. I will be on site at 10am working with Marcelo on solving this issue.

530-Authentication Error
530-GSS Major Status: Authentication Failed
530-SSLv3 handshake problems
530-Unable to verify remote side's credentials
530-SSLv3 handshake problems: Couldn't do ssl handshake
530-OpenSSL Error: s3_srvr.c:2010: in library: SSL routines, function SSL3_GET_CLIENT_CERTIFICATE: no certificate returned
530-Could not verify credential
530-Can't get the local trusted CA certificate: Cannot find issuer certificate for local credential with subject: /O=HealthGrid/OU=Globus Toolkit/OUxxxx/OU=xxxxx/CN=xxxx
530 End.

Taking shapes.

Things are starting to precipitate here in and around the NCPHI Grid Lab. People are meeting and starting to talk about moving from a research stance to a production stance and actually making things to be used and seen by the world. I think it is awesome, and I also think it will be a lot of work and will be a bit tricky.

When I think of what I want when I think of public facing production GRID products... I think of what I would like to turn out, and I keep thinking of my dream/killer application: Something with the availability of OpenOffice and the simple functionality of Google Apps.

OpenOffice is pretty much available for everything. It is also deliciously packaged for everything: you can get it through any given *NIX package manager as an RPM or Debian package, it can be had for Solaris and Windows and Mac, and it just seems to be there after the double-click.

Google Apps are simple and ridiculously functional. They also get to every computing platform out there. You can get Google Earth for *nix, Windows, and Mac, and it has just the same sort of "drop and unbox" functionality. The only drawback is that they are closed source and don't like redistribution... but you wouldn't know it from all the APIs they post. There is little question about how to use their apps, and if you want to get into some of the really complex functionality, you can easily Google (tee hee) how to do it.

I want our apps to be like that.

I want an NCPHI node to be one of those very simple and concise packages that only asks for what it needs (maybe the appropriate X.509 certificate files) and then just installs, whether it be a package (with a lot of attached packages) in the package manager of your given *NIX, or an installer on Windows or Mac. Right now it is a multi-step download-n-build-n-massage process that takes a new person multiple days, and a seasoned veteran the better part of 3 hours, depending on how the downloads go.

In the meantime, I want the apps that can come with a node to be much like Google apps. Easy to download and include, easy to find, perhaps even a few checkboxes from an admin portal that just lets you include and configure the pieces from the get-go, but otherwise just something you can nab and drop-in and have it Just Work.

Doing this is going to be a bit difficult; a lot of the limitations we are facing have to do with Other People's Code. But most of that code is open source and can be modified, we are having an okay time talking with other people (they have been gracious and enthusiastic about working with us), and although it might not get to that point due to a bunch of constraints that are unforeseen or just unknown at the moment... that is my target. It's what I want to work for.

Wednesday, October 15, 2008


  • Updated the MonaLisa configuration on Dallas, Tarrant, and NCPHI
  • Contacted the MonaLisa support about adding PHgrid to the Global configuration. This will show connectivity between the grid nodes.
  • Creating an expect script that will support OGSADAI proxy.
  • Removed all unessential software from the sandbox node in preparation for the VM appliance.

RODS, poicondai-web, and GIS.

Yesterday I got the filtering completed... today I started answering the question "so how do you get Google Maps to show polygons for zip codes?" and I learned quite a few things.

  1. Lots of other people have done this before me, as there are all sorts of cool websites (with code access for pay) that have the overlays for zip codes and phone area codes on top of a google map.
  2. Lots of these sites seem to work by building a KML that google can read.
  3. RODS is one of these sites, and at least that code is free.

What I am trying to do will use KML only as a last resort. KML generation means you have to stick a KML document at some public URL and tell Google Maps to find it, and the CDC end of the grid is all sorts of locked down... and as far as I can tell, Google Maps regrettably does not have a "send me a string of KML" function.

But there are ways to generate polygons and overlays with just the javascript commands, just like there are ways to drop points on a map with javascript commands; it's just a matter of teasing out coordinates for the borders of the polygons. Jeremy helped me discover the appropriate PostGIS and postgres tools that are supposed to let me get a zip-code/border database... and he has shown me the way to the RODS code that makes the appropriate queries to get the coordinates that are usually sent to KML. I'll just have to make it so they are sent to javascript arrays instead.
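The "send them to javascript arrays instead" step could be sketched like this on the server side; the class and method names are hypothetical, but the idea is a JSP inlining the emitted literal when it builds the map page:

```java
// Hedged sketch: turn border points fetched from the GIS database into a
// JavaScript array literal that client-side map code can consume directly,
// sidestepping the need to host a KML document at a public URL.
import java.util.Arrays;
import java.util.List;

public class JsArrayEmitter {
    // Each point is {lat, lng}; emit e.g. [[39.5,-105.1],[39.6,-105.2]]
    public static String toJsArray(List<double[]> border) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < border.size(); i++) {
            double[] p = border.get(i);
            if (i > 0) sb.append(',');
            sb.append('[').append(p[0]).append(',').append(p[1]).append(']');
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        List<double[]> border = Arrays.asList(
            new double[]{39.5, -105.1}, new double[]{39.6, -105.2});
        System.out.println(toJsArray(border)); // [[39.5,-105.1],[39.6,-105.2]]
    }
}
```

Caching the emitted strings per county or zip would keep the database out of the hot path, which fits the "will probably need some caching" caveat below.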

It'll be a bit kludgey and will probably need some caching, but it should work; then the only things you need to deploy the POIConDai web visualization are access to the service and the appropriate GIS database.

Also, tomorrow will probably be spent focusing on just getting the zip code centroids working and teasing the data out of the NPDS service appropriately. But it's nice to have some part of my mind working on how to turn those dots into polygons for the time being.

Otherwise, we had a meeting with the ESRI/DGI-net guys. Their browser is mega-posh.

International Science Grid This Week

Issue 96: iSGTW 15 October 2008
Opportunistic storage increases grid job success rate

The DZero high-energy physics experiment at Fermilab, an Open Science Grid user, typically submits 60,000-100,000 jobs per week at 23 sites. The experiment’s application executables make many requests for input data in quick succession. Due to the lack of storage local to the processing sites, up until recently much of DZero’s data had to transfer in real-time over the wide area network, leading to high latencies, job timeouts and job failures. OSG worked with member institutions to allow DZero to use opportunistic storage, that is, idle storage on shared machines, at several sites. This represents the first successful deployment of opportunistic storage on OSG, and opens the door for other OSG Virtual Organizations. With allocations of up to 1 TB at sites where it processes jobs, DZero has increased its job success rate from roughly 30% to upwards of 85%.
Read more

Tuesday, October 14, 2008

poicondai moves forward

Yesterday and today, I managed to do a few updates to poicondai, and basically have it to the point where it has a drop-down for ClinicalEffect to filter as needed.

Tomorrow, I will be building up the GIS databases to support making polygons, and hopefully I will be able to get polygonal data on a google map sometime tomorrow or Thursday.

Otherwise, we had a meeting with someone who is a bit better at web design than me, so hopefully the demos will be a lot prettier.

And finally, we got the Dallas problem solved, looked like there was a box in need of a restart and some bottlenecks that needed to be sorted out.

Friday, October 10, 2008

CERN grid may boost drug and climate research

Good article about European grid being used for more than collider data

Click here

Thursday, October 9, 2008

Poicondai is getting some more love

So, after I finally got to the base of how to get Introduce to do things (with many, many thanks to Felicia), despite a whole lot of demos going on... I am now given charge to give the Poicondai-web app some more nift (as it can stand to be more nifty).

Thus, I have written down the next steps of RODS-GDBC in the RODS-GDBC wiki entry. And now I shall go into the next set of requirements for Poicondai-web.

  1. Use the basic test of NPDS-Web to make sure the condition list operates properly
  2. If so, code a drop-down of the possible choices, and discover which choices return higher results and indicate them in some way (star, different color)
  3. Get a spatial series and use a pinpointed Google map.
  4. Make that a shaded polygonal Google map
  5. Make histograms when you click on the zip
  6. Work out one histogram for multiple zips.

Numbers 4, 5, and 6 get more optional as time becomes an issue. Also, someone with better web design skills than me is going to be looking at using some pretty CSS and making things a bit shinier and more appealing than my usual non-centered engineer-interface JSP.

I am actually sort of excited, sprucing up a demo means that people liked what we did and are anxious to see it do more. I'll try my best to not keep them waiting.

Creating Globus VM Appliance

Currently reducing the size of the SUSE sandbox system in order to create a clone for a VM Appliance.

Worked on security requirements template.

Automated deployment of a virtual machine

At Utah...

Dr. Julio Facelli and Ron Price chatted about the need to bring the grid service to the data.
So, they pinged Argonne about an automated deployment of a virtual machine that when stood up is the grid service that the data needs.

Ravi Madduri <> responded:

Ron, We have been playing around with a similar idea in gRAVI development, where we create a grid service wrapper around an executable and also contextualize a VM that can host the application and the service on demand. There is a set of ant tasks that can be used to deploy gRAVI services (or any gar) to the nimbus cloud. Here is the README for this tool. It is a functional prototype, but you need an account on the nimbus cloud.

See previous post (<-click here) for further details about the nimbus cloud.

Ron will update as he progresses along this line.

Nimbus - Interesting model for deploying in the cloud

Nimbus provides a free, open source infrastructure for remote deployment and management of virtual machines, allowing you to:

· Create compute clouds (make your own EC2 style service). For examples, see the science clouds page.
· Deploy "one-click" auto-configuring virtual clusters (see the cloud clusters page). They adapt on the fly into new network and security contexts so you can set them up once and run them over and over again, even across different clouds.
· Serve clients that are compatible with the Amazon EC2 service, see What is the EC2 frontend?
· Integrate VMs on a set of resources already configured to manage jobs (i.e., already using a batch scheduler like PBS). See What is the Workspace Pilot?
· Interface to Amazon EC2 resources, see What is the EC2 backend?
· Easily experiment with new remote protocols and backends, see What is the RM API?

Thanks, Ron

Tuesday, October 7, 2008

Setting up RODS-GDBC

I took a hint from RODSAdai and decided to spec out the application I am planning to build, for all to view, before I get too deep into building the thing. The idea is that if anyone has questions, I can point them to this "what I am trying to do" document with a brief skeleton of the application.

This way, when people ask "what are you trying to do" I can get them to look here.

I have also installed Introduce... and I am now going through the tutorial project. Tomorrow I plan to get a simple skeleton drafted and contact Dr. Jeremy Espino about interface changes and the link. I am also not sure how easily Introduce projects can be incorporated into Maven, and I'm pretty sure the same nasty Axis problem will arise, since the Globus Axis will probably need to be used with the client.

World's largest computing grid lives to go live

Contrary to popular belief, the world as we know it didn't implode after the Large Hadron Collider was flipped on. Sure -- someone, somewhere is growing a ninth arm and trying desperately to land a cameo on Fringe, but the planet at large is still humming along just fine. Now, the world's most ginormous computing grid (the Worldwide LHC Computing Grid, or WLCG) has gone live, and the gurus behind it are celebrating the beginning of its momentous data challenge: to analyze and manage over 15 million gigabytes of data each year. The Grid combines the IT power of over 140 computer centers, 100,000 processors and the collaborative efforts of 33 countries. Unfortunately, there's no word on when the official WLCG-based Call of Duty 4 server will be green-lit for action, but we hear it's pretty high on the priorities list.

Read all about it here

Monday, October 6, 2008

Planning and such.

So, RODSAdai and Poicondai are documented and fleshed out, and they're ready for people to consume and ask questions about if they want to recreate them or install them in new places.

The next step is to essentially re-create the OGSA-DAI end of RODSAdai using CaBIG's Introduce framework. Introduce, for all intents and purposes, seems to be really really good at generating a full-out service skeleton and having the tools to package up everything into a .gar file (which is the deployable for a globus toolkit server).

Thus, instead of OGSA-DAI at a given location accepting a query, this will be parameters passed into a webservice. Whether those parameters will be a query or just some filter values will be a discussion for Jeremy (because if they are just filter values, it becomes that much easier to prevent SQL injection attacks by using a prepared statement). Also, it will be interesting to see how well parameterized the connection logic can be and whether it will be as universally installable as OGSA-DAI (it probably will not be as versatile).
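The prepared-statement point can be sketched as follows; the effect codes, table name, and column names are made up for illustration, not from any NPDS or RODS schema:

```java
// Hedged sketch of why filter values are easier to secure than raw queries:
// validate against a whitelist, then bind the value with a placeholder so it
// travels as a parameter, never as statement text.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FilterGuard {
    // Hypothetical clinical-effect codes; a real list would come from config.
    private static final Set<String> ALLOWED =
        new HashSet<>(Arrays.asList("GASTRO", "RESPIRATORY", "NEURO"));

    // Returns parameterized SQL for a known filter, or rejects the input.
    public static String sqlFor(String clinicalEffect) {
        if (!ALLOWED.contains(clinicalEffect)) {
            throw new IllegalArgumentException("unknown filter: " + clinicalEffect);
        }
        return "SELECT zip, COUNT(*) FROM calls WHERE effect = ? GROUP BY zip";
    }

    public static void main(String[] args) {
        System.out.println(sqlFor("GASTRO"));
        try {
            sqlFor("x'; DROP TABLE calls; --"); // injection attempt is rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

The SQL text never changes per request; a JDBC PreparedStatement would then bind the validated value to the `?` placeholder, which is the whole defense.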

Otherwise, we haven't really investigated the Dallas slowness at this point, I will also have to ask Jeremy if that Pitt node has been set up or not. I have a few ideas for what to try, but we will see.

Friday, October 3, 2008

3rd Party Certificates within Globus

Verisign Certificates within Globus

grid-proxy-init ERROR: Couldn't verify the authenticity of the user's credential to generate a proxy from.

Error verifying credential: Failed to verify credential
Could not verify credential
Can't get the local trusted CA certificate: Cannot find issuer certificate for local credential with subject: DATA-REMOVED-BY-DAN

GridFTP Error:

error: globus_ftp_client_state.c:globus_l_ftp_client_connection_error:4217:
the server responded with an error
530 530-globus_xio: Authentication Error
530-globus_gsi_callback_module: Could not verify credential
530-globus_gsi_callback_module: Error with signing policy
530-globus_gsi_callback_module: Error in OLD GAA code: CA policy violation:

Solution Summary:

The certificates were extracted using the Portecle application in PEM format. The entire certificate chain should be used in each hash file. Remove the private key data from the file.

The commands:

openssl x509 -issuer_hash -in [file_name].pem was used to determine the hash name the file should be named after.

openssl x509 -issuer -in [file_name].pem was used to determine the access_id_CA that should be used in the signing policy.

openssl x509 -subject -in [file_name].pem was used to determine the cond_subjects that should be used in the signing policy.

The step by step solution:

1. Export your certificate with Internet Explorer using the Personal Information Exchange PKCS12 option.

2. Check the, “Include all certificates and certificate paths” box. NOTE: This should be the only option checked.

3. Upload the exported certificates to the Globus node. (Root, Intermediate, and Private)

4. Use Portecle to view the exported certificates. Portecle is started using the following command: java -jar portecle.jar

5. Right click on the certificate, then use the PEM Encoded option to export private key and public key certificate within Portecle.

6. Remove the private key data from the PEM file that was created.

7. Create a hash name for the PEM file that was created using the following command:

openssl x509 -in yourfile.pem -noout -hash

8. Rename the file to the hash number displayed in the following format: hash.0

9. Manually create a signing policy named hash.signing_policy

10. Copy the new files to /etc/grid-security/certificates

11. Create a duplicate copy of the hash.0 file for the Issuer_Hash and the hash. Example: awd2dq.0 7847a3s.0 (There should be two hash files that contain the same certificate chain; only the names are different.)

12. Create signing policy files for each hash file based on Intermediate and Root certificates.

Thursday, October 2, 2008

PHGrid Service Registry 0.1 (beta)

When the PHGrid collaborators met after the PHIN Conference, one of the ideas was to set up a place where all the PHGrid collaborators can share descriptions of services they developed, services they are planning or even services they want to develop.

I just created a very basic service registry on the wiki that contains the services that have been developed or that I know of as of now.

This registry initially serves as a human-readable repository of service information, but eventually we will extend it with a UDDI registry and Index services that provide a standards-based search for services.

This is of course a tiny step, but it is a start. Now we just have to get each service provider to add details and links for their services. Peter took the first step and created a page for the RODSA-DAI services and client. This can be used as an example of how other services can be documented.


So there is a big demo tomorrow, and I have been preparing for that by making sure that the RODSA-DAI and Poicondai demos have been polished and a few of their quirks have been ironed out. I also was working on making all the items a lot more portable.

Furthermore, I have been ironing out some issues with a prototype public demo, and doing a lot of documentation. And that documentation is linked-to from this shiny new wiki page that Brian set up and I have filled out: here.

Meanwhile, I have started to look into the caGrid CQL and DCQL, and that stuff looks interesting.

Globus grid services on Windows nodes

I started the installation of the COG 4.1.5 on Windows Vista. According to documentation located in's FAQ section, Windows nodes should be able to access grid services using the COG Kit from

The installation has been stalled due to insufficient access rights of my user account on the local Windows machine. The Windows admin password needs to be entered in order to complete the install. I will talk to the admin in the morning so I can continue the install.

News: cancer Biomedical Informatics Grid® (caBIG®)

The cancer Biomedical Informatics Grid® (caBIG®) initiative announces a milestone in its expansion of support alternatives for end-users, IT staff and senior decision makers implementing caBIG® tools and infrastructure. Three companies, 5AM Solutions, Inc., Ekagra Software Technologies and SRA Corporation, are the first to be licensed by the National Cancer Institute (NCI) to market their services as part of the caBIG® Support Service Providers Program. Other companies in addition to 5AM, Ekagra and SRA are currently engaged in the licensing process, and will be announced in the coming weeks.

caBIG® Support Service Providers are independent entities that are approved by NCI as meeting specific criteria for performance of support services related to caBIG® needs. There are four categories of support: Help Desk Support; Adaptation and Enhancement of caBIG® -Compatible Software Applications; Deployment Support for caBIG® Software Applications Deployment; and Documentation and Training Materials and Services. Services rendered by caBIG® Support Service Providers to their clients are established under separate business arrangements organized by and between the service provider and its clients.

Visit to learn more about caBIG® Support Service Providers or access those Support Service Providers that are currently licensed. If you are interested in applying to become a Service Provider, you can access past and future announcements for caBIG® Support Service Provider at this site as well.