Archive for the ‘Research’ Category
It’s been more than two years since we reported on Seattle as the new Geneva, that is, as the new epicenter of global health activity. An article in this morning’s Journal-Sentinel (Water-engineering firms see potential, challenge in developing countries) – which includes an exclusive interview with the Acumen Fund’s chief executive, Jacqueline Novogratz – suggests that Milwaukee is angling to do the same for water technology:
It’s an issue that almost certainly will preoccupy business leaders in metro Milwaukee in their strategy to brand the region as an international hub of water technology. The metro area is home to scores of water-engineering companies. Gov. Jim Doyle and the University of Wisconsin-Milwaukee this month announced plans to invest millions of dollars for UWM to become a center of freshwater research.
A 2008 article from the same newspaper (Area’s tide could turn on water technology) provides more evidence:
[F]our of the world’s 11 largest water-technology companies have a significant presence in southeastern Wisconsin, according to an analysis of data from a new Goldman Sachs report.
Wall Street has tracked automakers, railroads and retailers almost since there were stocks and bonds. But water remains a novelty. Goldman Sachs Group Inc. didn’t begin to research water treatment as a stand-alone industrial sector until late 2005.
While several large MNCs have shown an active interest in clean water in developing countries (e.g., Procter and Gamble, Vestergaard Frandsen, Dow) open questions remain as to what role large MNCs will play in providing access to safe water for the one billion people who don’t have it.
(Thanks to Dr. Jessica Granderson for sending the link)
Cross-posted from Design Research for Global Health.
Giving talks is not one of my strong suits, but it seems to be part of the job requirement. Earlier this month, I had the opportunity (even though I’m no good, I do consider it an opportunity) to give a couple of talks, one to the Interdisciplinary MPH Program at Berkeley and one to a group of undergraduate design students, also at Berkeley. Despite the difference in focus, age, and experience of the two groups, the topic was roughly the same: How do we effectively use design thinking as an approach in public health?
The first session was so-so, and I suspect that the few people who were excited about it were probably excited in spite of the talk. It started well, but about halfway through, something began to feel very wrong and that feeling didn’t go away until some time later that evening. Afterwards, I received direct feedback from the instructor and from the students in the form of an evaluation. I recommend this if it is ever presented as an option. Like any “accident”, this one was a “confluence of factors”: lack of clarity and specificity, allowing the discussion to get sidetracked, poor posture, and a tone that conveyed a lack of excitement for the topic.
It’s one thing to get feedback like this, another to act on it.
The second session went much better, gauging by the student feedback, the comments from the instructor, and my own observations. This in spite of a larger group (60 vs. 20) that would be harder to motivate (undergraduates with midterms vs. professionals working on applied problems in public health). I chalk it all up to preparation and planning. Certainly there are people who are capable of doing a great job without preparation – I just don’t think I’m one of those people.
Most of that preparation, by the way, was not spent on slides. I did use slides, but only had five for an hour-long session, and that still proved to be too many. Most of the time that I spent on slides, I spent developing a single custom visual to convey precisely the information that was relevant to the students during this session (see image). The rest of the preparation was spent understanding the audience’s needs by speaking to those running the class; developing a detailed plan for the hour, focusing on how to make the session a highly interactive learning experience; designing quality handouts to support the interactive exercise; and doing my necessary homework. For this last one, I spent 20 minutes on the phone with a surgeon friend, since the session was built around a case study discussing surgical complications and design.
Three resources I found really useful:
- Why Bad Presentations Happen to Good Causes, Andy Goodman, 2006. This commissioned report was developed to help NGOs with their presentations, but I think there is value here for anyone whose work involves presentations. It is evidence-based and provides practical guidance on session design, delivery, slides (PowerPoint), and logistics. Most importantly, it is available as a free download. I was fortunate enough to pick up a used copy of the print edition for US$9 at my local bookstore, which was worth the investment for me because of the design of the physical book. It’s out-of-print now and it looks like the online used copies are quite expensive – at least 3x what I paid – so I recommend the PDF.
- Envisioning Information, Edward Tufte, 1990. I read this when I was writing my dissertation. Folks in design all know about Tufte, but I still recommend a periodic refresher. This is the sort of book that will stay on my shelf. Also potentially useful is The Visual Display of Quantitative Information. For those working in global health, don’t forget how important the display of information can be: (a) Bill Gates and the NYTimes, (b) Hans Rosling at TED.
- Software for creating quality graphics. The drawing tools built into typical office applications, though they have improved in recent years, are still limited in their capability and flexibility, especially if you’re looking at #2 above. In the past 10 days, three people in my socio-professional network – a graphic designer in New York, an energy research scientist in California, and a healthcare researcher in DC – have solicited advice on standalone tools such as OmniGraffle (for Mac) and Visio (for Windows). Both are great options. I use OmniGraffle these days, though I used Visio a few years back. If cost is an issue, there are open-source alternatives available, though I’m not at all familiar with them (e.g., the Pencil plug-in for Firefox).
Last Thursday, I had the opportunity to view a PhotoVoice exhibition at the University of California, Berkeley organized by Haath Mein Sehat (HMS), a group working to improve access to clean water and sanitation in six slums of Hubballi and Mumbai, including Dharavi.
It was exciting to see a group effectively blend the advocacy elements of PhotoVoice with the design elements of cultural probes. The difference between the two approaches is less in the methods and more in the use of the outputs. In this case, they organized the exhibition to raise awareness and break down stereotypes of slum life, and they are using the photographic corpus to guide the design of both programs and technologies related to their core mission.
What I was most interested in from a design perspective were the instructions given to community photographers and how this tied back to the mission of HMS. The results below followed from the simple prompt: “Represent your daily experience with water”.
A few days back, Aman wrote a post about Google Flu Trends. Thought I’d add a few notes here after reading the draft manuscript that the Google-CDC team posted in advance of its publication in Nature.
By the way, here’s what Nature says: “Because of the immediate public-health implications of this paper, Nature supports the Google and the CDC decision to release this information to the public in advance of a formal publication date for the research. The paper has been subjected to the usual rigor of peer review and is accepted in principle. Nature feels the public-health consideration here makes it appropriate to relax our embargo rule.”
Ginsberg J, Mohebbi MH, Patel RS, Brammer L, Smolinski MS, Brilliant L. Detecting influenza epidemics using search engine query data. Draft manuscript for Nature. Retrieved 14 Nov 2008.
Assuming that few folks will read the manuscript or the article, here are some highlights. I should say I appreciated that the article was clearly written. If you need more context, check out Google Flu Trends’ “How does this work?” page…
- Targets health-seeking behavior of Internet users, particularly Google users [not sure those are different anymore], in the United States for ILI (influenza-like illness)
- Compared to previous work attempting to link online activity to disease prevalence, benefits from volume: hundreds of billions of searches over 5 years
- Key result – reduced the reporting lag to one day, compared to the 1-2 week lag of CDC’s surveillance system
- Spatial resolution based on IP address goes to the nearest big city [for example, my current IP maps to Oakland, California], but the system currently only reports at the state level – still more detailed than CDC’s reporting, which is based on 9 U.S. regions
- CDC data was used for model-building (linear logistic regression) as well as comparison [for stats nerds - the comparison was made with held-out data]
- Not all states publish ILI data, but they were still able to achieve a correlation of 0.85 in Utah without training the model on that state’s data
- They have attempted to detect outbreaks of enteric and arboviral diseases, but without success.
- For those familiar with GPHIN and Healthmap, two other online surveillance systems, the major difference is in the data being examined – Flu Trends looks at search terms while the other systems rely on news sources, websites, official alerts, and the like
- There is a possibility that this will not model a flu pandemic well, since the search behavior used for modeling is based on non-pandemic varieties of flu
- The modeling effort was immense – “450 million different models to test each of the candidate queries”
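To make the model-building bullet above concrete, here is a minimal sketch in Python of the kind of fit the paper describes: a simple linear model relating the log-odds of an ILI-related query fraction to the log-odds of CDC’s ILI physician-visit percentage. The data and function names are hypothetical, and this one-variable closed-form OLS fit is only an illustration of the idea, not the authors’ actual pipeline (which searched hundreds of millions of candidate query models):

```python
import math

def logit(p):
    """Log-odds transform applied to both query fractions and ILI percentages."""
    return math.log(p / (1.0 - p))

def fit_simple_ols(xs, ys):
    """Closed-form ordinary least squares for y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Hypothetical training data: weekly fraction of searches that are
# ILI-related (q) and the corresponding CDC ILI physician-visit
# percentage (p), expressed as proportions.
q = [0.010, 0.015, 0.022, 0.030, 0.025, 0.012]
p = [0.011, 0.017, 0.024, 0.033, 0.027, 0.013]

b0, b1 = fit_simple_ols([logit(x) for x in q], [logit(y) for y in p])

def predict_ili(query_fraction):
    """Invert the logit to recover an estimated ILI visit proportion."""
    z = b0 + b1 * logit(query_fraction)
    return 1.0 / (1.0 + math.exp(-z))

# A higher query fraction should map to a higher ILI estimate.
print(predict_ili(0.020) > predict_ili(0.012))
```

In practice, validation against held-out CDC data (as the paper does) is what distinguishes a usable surveillance signal from an overfit one.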
So what does this mean for developing world applications?
Here’s what the authors say: “Though it may be possible for this approach to be applied to any country with a large population of web search users, we cannot currently provide accurate estimates for large parts of the developing world. Even within the developed world, small countries and less common languages may be challenging to accurately survey.”
The key is whether there are detectable changes in search behavior in response to disease outbreaks. This depends on Internet search volume, health-seeking search behavior, and language. And if there is no baseline data analogous to CDC surveillance data, then what is the best strategy for model-building? How valid will models be from one country to another? That probably depends on the countries. Is it perhaps possible to have a less refined output, something like a multi-level warning system for decision makers to follow up with on-the-ground resources? Or should we be focusing on news-based approaches like GPHIN and Healthmap?
Another thought is that we could mine SMS traffic to detect disease outbreaks. The problem becomes more complicated, since we’re now looking at data that is much more complex than search queries. And there is often segmentation due to the presence of multiple phone providers in one area. Even if the data were anonymized, this raises huge privacy concerns. Still, it could be a way to tap into areas with low Internet penetration and to provide detection based on very real-time data.
More than 12 years (let that time horizon sink in) after the first indications of success, there will be a large-scale trial for a new malaria vaccine. The potential global health implications are obvious. Read the full news article; it has some good tidbits in it:
“With the exception of Mosquirix, there’s no possibility of one coming on the market within five or six years…It took eight more years of development and testing before scientists were ready to conduct a large-scale trial of the vaccine. London-based Glaxo and its partners will begin a $100 million study of Mosquirix later this year, vaccinating 16,000 children in seven African countries. If the results are positive, the drug could be on the market as soon as 2011, making it the first vaccine against the deadly disease.”