So, I fixed the Quicksilver bug where state-level data was duplicating on dev. After the next deploy, you'll be able to load the states as many times as you like for a given date range and the data will stay the same.
Otherwise, I spent the rest of the day trying to fix the "if I try to load California zip3's, the service crashes" problem... the root cause seems to be that I am generating a list of all the zip codes in California and sending it to the service, and that list is rather long, longer than the comfy limits of the service.
Luckily, I found a way around it: I can just send a request for the state rather than for all the zips in that state, and the service will still aggregate the results by zip code.
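To make the workaround concrete, here's a minimal sketch of the two request shapes. Everything here is hypothetical (the function names, parameter names, and payload format are my own illustration, not the real service's API); the point is just that the per-zip request grows with the state while the state-scoped request stays a constant size.

```python
def zip_request(zip3s):
    """Hypothetical per-zip request: one entry per zip3 prefix.

    This is the shape that blew past the service's size limits for
    California, since the list grows with the number of prefixes.
    """
    return {"aggregate_by": "zip", "zip3": list(zip3s)}

def state_request(state):
    """Hypothetical state-scoped request: a single state parameter.

    The service can still aggregate the results by zip code on its end,
    so the payload stays tiny no matter how big the state is.
    """
    return {"aggregate_by": "zip", "state": state}

# Illustrative only: stand in for California's zip3 prefixes with the
# 900s block. The real five-digit zip list would be far longer still.
california_zip3s = [str(z) for z in range(900, 962)]

big = zip_request(california_zip3s)     # grows with the state
small = state_request("CA")             # constant size
```

The design point is just moving the enumeration from the client to the service: instead of the client spelling out every zip, the service expands the state itself.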
The only problem is that I can't specify a state with the "main" aggregated national service. So I either have to pull back ALL the zip codes for the entire country (which would take a minute or two) or settle for limited data from only the Colorado poison center. I have sent an email to the lovely poison people to see if there is something I am missing that will work for my needs.
But otherwise, yay. Fewer horrible crashes and less strange behavior while we get the data sorted out.
The other cool thing is I had a chat with William Duck, and he is pretty much a statistics guru. He is helping me look for statistical and web-charting libraries that can present better, niftier information than the nifty-but-not-as-functional Google Charts (or maybe we'll even find that Google Charts can do more complicated stuff). That will be a massive help.
Monday, February 23, 2009