When I arrived at the AHM Tuesday morning, Mine Altunay was in the cafeteria, so I took the opportunity to interview her about Monday's security round table (for her comments, see my Day 1 post).
So, unfortunately, I missed the morning's keynote. But I'm told it was very interesting. It was presented by Gael McGill, president and CEO of Digizyme, and director of molecular visualization at Harvard Medical School. Although he had not posted slides from his talk on the AHM Indico site, you can watch some of the animations produced by his company here.
The balance of the morning was devoted to talks on High Throughput Parallel Computing (HTPC). These were by far the highlight of the day as far as I'm concerned - and I'm sure I'm not the only one who thought so.
During the afternoon coffee break, I asked the morning session moderator, John McGee, how he felt the morning had gone. McGee, who is the director of the Renaissance Computing Institute in North Carolina, was more than happy to share his impressions.
"The morning session saw some lively discussion on some important features for the Open Science Grid as it branches out to campus environments and to more different types of computing architectures and types of usages, as opposed to being more exclusively focused on a handful of very large science projects," he told me, adding, "I think there are a lot of implications and the session this morning demonstrates that there's a lot of interest and a lot of activity and a lot of forward motion on that path."
Sadly, my recorder's microphone appears to be broken, so despite interviewing Steven Newhouse twice, I don't have any precise quotes from him to share with you. But when I asked him what stood out for him on Tuesday, he remarked upon a talk by Lance Stout from Clemson University.
Stout's talk explored methods of combining virtualization with existing grid infrastructures. Normally, the jobs researchers submit to a grid must run against applications pre-installed at each site. During his talk, Stout reported on several methods he and his colleagues have tested by which researchers could instead submit virtual machine images as jobs. For example, STAR, a particle physics experiment based at Brookhaven National Laboratory, successfully ran 80,000 tasks - 400,000 CPU hours - and generated a whopping 7 TB of data on the Kestrel architecture Stout and his colleagues were testing.
Stout finished by proposing a workflow for using virtualization technology on OSG, in which virtual organizations would submit VM images to a catalogue for approval by sites before running them on OSG.
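The talk didn't spell out the submission mechanics, but to give a concrete sense of what "a VM image as a job" can look like: HTCondor, the batch system widely used across OSG sites, has long offered a `vm` universe in which the "job" is a virtual machine image booted on the worker node rather than an ordinary executable. A hypothetical submit description might look like the sketch below (the image name, resource sizes, and file names are all made up for illustration, and this is not necessarily how Kestrel itself works):

```
# Hypothetical HTCondor submit description for running a VM image as a job.
# "universe = vm" tells HTCondor to boot a virtual machine instead of
# executing a program; "executable" here is just a label for the job.
universe        = vm
executable      = star-analysis-vm

# Hypervisor type and the disk image to boot (illustrative file name).
vm_type         = kvm
vm_disk         = star-image.qcow2:vda:w

# Resources requested for the VM (illustrative values).
vm_memory       = 1024
request_cpus    = 1

log             = star-vm.log
queue
```

Under a scheme like the one Stout proposed, the image referenced by `vm_disk` would come from the approved catalogue rather than from the researcher's own disk, giving sites a chance to vet what actually boots on their hardware.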
McGee closed the morning's proceedings with a panel discussion of HTPC, which left us with as many enticing questions as answers. There are many avenues OSG is on the brink of exploring, which makes this the perfect time to step back and ask, "Is this what we really want to do?"
Stay tuned for Day 3!