
Content about Middleware

May 27, 2009

Feature - ETICS 2 offers help to software professionals

Image courtesy of ETICS 2

Software professionals have been known to describe the task of building, configuring and integrating new software in just two words: “nightmare activity.” But with the E-infrastructure for Testing, Integration and Configuration of Software Phase 2, or ETICS 2, they have an all-in-one solution that helps configure and build software while checking its quality. The result of three years of project activities, the system provides tools and resources for build and test runs, simplifying complex and often repetitive activities.
“By automating many day-to-day tasks, ETICS 2 helps software professionals obtain higher-quality software, a shorter time-to-market, lower schedule risk and reduced project costs,” says Alberto Di Meglio, ETICS 2 project manager at CERN.
The ETICS 2 system exploits grid software and distributed computing infrastructures.
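
To make that concrete, here is a minimal Python sketch of the staged build-and-test cycle a system like ETICS 2 automates. The stage names and echo commands are placeholders for illustration, not the ETICS configuration format.

    import subprocess

    # Hypothetical build-and-test pipeline: each stage must succeed before
    # the next runs, mirroring the checkout/configure/build/test/report
    # cycle that systems like ETICS 2 automate. Commands are placeholders.
    PIPELINE = [
        ("checkout",  ["echo", "checking out sources"]),
        ("configure", ["echo", "generating build configuration"]),
        ("build",     ["echo", "compiling"]),
        ("test",      ["echo", "running unit tests"]),
        ("report",    ["echo", "publishing quality metrics"]),
    ]

    def run_pipeline(stages):
        """Run each stage in order, stopping at the first failure."""
        for name, cmd in stages:
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(f"[{name}] exit={result.returncode}: {result.stdout.strip()}")
            if result.returncode != 0:
                return False  # a failed stage fails the whole run
        return True

    if __name__ == "__main__":
        print("SUCCESS" if run_pipeline(PIPELINE) else "FAILED")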

April 15, 2009

Feature - Grid sails to the aid of shipbuilding

New FSG ships anchored next to the factory building where they were constructed. Image courtesy of FSG

Shipyards in Europe cannot compete on price alone against overseas competitors, especially those based in the Far East. Consequently, European shipbuilders must concentrate on high-quality construction projects specially tailored to their customers’ requirements. Each ship is a unique product, built only once or in a very small run. To improve their competitive position, modern European shipyards must harness the most advanced simulation and design tools to produce complex structures cost-effectively, reducing the technical and economic risks at a time when orders for new ships are down worldwide. The goal? The complete, virtual design of a ship. The solution: the grid. At the Flensburger Schiffbau Gesellschaft (FSG) shipyard, managers wanted to find ways to design their ships to use less of costly materials such as steel.

April 15, 2009

Feature - PanDA makes huge job sets more bearable

Simplified view of core PanDA architecture. Image courtesy of BNL.

Keeping track of huge job sets processed on hundreds of compute clusters around the world through the LHC Computing Grid might send the most organized of logical thinkers into a tizzy. The PanDA (Production and Distributed Analysis) system, developed for the ATLAS collaboration at the Large Hadron Collider, lets scientists stay cool while it takes charge of distributing jobs, collecting results and managing workflow. An important feature of PanDA is that it allows the user to submit a single job, called a pilot job, which coordinates a series of jobs that the user has put together and configured. When launched, the pilot job contacts the PanDA server, which in turn locates available resources and sends the collected jobs to run based on their relative priorities. The pilot system manages the workflow efficiently.
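
The pilot pattern is easy to sketch. In the Python toy below, the class and method names are invented for illustration (they are not the PanDA API): a pilot lands on a worker node and pulls jobs from a server stub that always hands out the highest-priority job first.

    import heapq

    class PandaServerStub:
        """Stand-in for the PanDA server: queues jobs, highest priority first."""
        def __init__(self, jobs):
            # heapq is a min-heap, so priorities are negated to pop the highest
            self._queue = [(-priority, name) for priority, name in jobs]
            heapq.heapify(self._queue)

        def get_job(self):
            """Return the next job to run, or None when the queue is empty."""
            return heapq.heappop(self._queue)[1] if self._queue else None

    def pilot(server):
        """Pull and run jobs until none remain, reporting each result back."""
        while (job := server.get_job()) is not None:
            print(f"running {job}")          # a real pilot stages in data and runs the payload
            print(f"reporting {job} done")   # a real pilot uploads results to the server

    pilot(PandaServerStub([(10, "event-reconstruction"),
                           (50, "urgent-calibration"),
                           (20, "user-analysis")]))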

February 25, 2009

Announcement - UNICORE Summit: call for papers

Image courtesy of unicore.eu

Event: UNICORE Summit 2009, to be held in conjunction with Euro-Par 2009
Venue: Delft, The Netherlands
Summit date: 25 August 2009
Submissions due: 9 June 2009
Notifications of acceptance: 21 July 2009
Camera-ready versions due: 4 August 2009
Submission information: LNCS format; submit as an email attachment to unicore-summit@fz-juelich.de
Registration (handled by Euro-Par 2009) and hotel information

The UNICORE grid technology provides seamless, secure and intuitive access to distributed grid resources. UNICORE is a mature, well-tested grid middleware system that is used in daily production worldwide. Beyond this production usage, the UNICORE technology serves as a solid basis in many European and international projects. To foster these ongoing developments, UNICORE is available as open source under the BSD licence. The UNICORE Summit is a unique opportunity for grid users, developers, administrators and researchers.

February 4, 2009

Feature - g-Eclipse: easier interface to both grids and clouds

The g-Eclipse interface in action. Image courtesy of the g-Eclipse Consortium

The g-Eclipse Consortium has released the g-Eclipse framework, which its developers say provides an easy-to-use workbench for accessing both grid and cloud infrastructures. The software provides a graphical workbench that enables seamless access with the same simplicity as browsing the Internet. “This enables interoperability between different grid and cloud infrastructures on the client side,” said Harald Kornmayer, a researcher at NEC Laboratories Europe, who led the project. It currently supports EGEE’s gLite grid middleware (aimed at scientific domains) and the GRIA middleware (used by industry and commerce), as well as Amazon Web Services’ cloud computing and storage offerings. A key feature of the g-Eclipse workbench is its independence from the underlying grid and cloud technology: “We started with the goal of accessing available scientific grid infrastructures.”

December 17, 2008

Feature - Truth serum for researchers on the grid

William S. Vickrey (1914-1996). Image courtesy of nobelprize.org.

When you request time on a shared computing resource, are you always scrupulously honest about your job’s urgency and requirements? Andrew Mutz and his colleagues at the University of California, Santa Barbara have developed a scheduler that discourages you from fibbing: it maximizes “rewards” and minimizes “costs” to users when their stated scheduling preferences are true. Not quite ready for prime time, it applies some time-tested concepts in a new way. The team’s reservation-based version of the Portable Batch System (PBS) scheduler uses techniques derived from William Vickrey’s Nobel prize-winning work on the economic theory of incentives. Several variations of a scheme called the Vickrey auction exist, their common feature being an incentive for bidders to bid their true value.
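
The incentive at the heart of the design is the second-price rule. The minimal sketch below shows the classic Vickrey auction only (the team’s scheduler is more elaborate): because the winner pays the second-highest bid, overbidding can only make you overpay and underbidding can only lose you a slot you valued, so stating your true value is the safe strategy.

    def vickrey_auction(bids):
        """Sealed-bid auction in which the winner pays the SECOND-highest bid.

        bids: dict mapping bidder -> bid. Returns (winner, price)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else 0
        return winner, price

    winner, price = vickrey_auction({"alice": 8, "bob": 5, "carol": 3})
    print(f"{winner} wins the time slot and pays {price}")  # alice wins, pays 5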

August 27, 2008

Feature - OSG 1.0: Stable, Secure, Reliable

Open Science Grid collaborators, project members, partners and users at the All Hands meeting in March 2008, hosted by the Renaissance Computing Institute in North Carolina. Image courtesy of OSG.

Open Science Grid 1.0 is here. Although scientists have been generating results on OSG resources for a couple of years using earlier versions, the VDT-based software has now reached a confident “1.0” level of stability, security and reliability. A team at the University of Wisconsin, an institution with a long tradition of distributed computing research, leads the OSG software integration, packaging and deployment efforts, and has built all the OSG releases, including 1.0. After multiple rigorous validation cycles in which stakeholders participated, more than a dozen VOs gave the official “green flag” to OSG 1.0. “OSG 1.0 is an evolutionary, as opposed to revolutionary, improvement over the previous software release,” says Alain Roy, software coordinator for OSG.

August 6, 2008

Opinion - What clouds and grids can learn from each other

Cloud computing adds an extra dimension of flexibility. Image courtesy of Ben Rhydding, sxc.hu

During the past 10 years, hundreds of grid projects have come and gone, passing away after funding ran dry. Most didn’t have a realistic strategy for sustainability, let alone a viable business model for their infrastructures, tools, applications or services. Often, the only asset left after a project’s end was the hands-on expertise gained by those involved, which is certainly valuable in the long run but doesn’t justify the effort and funding. So far, in my opinion, grids haven’t lived up to their full promise. What went wrong? Sure, grids, by their very nature, are complex to design, build and maintain, and applications are cumbersome to run. It might take another 10 years of trial and error (and rewriting grid middleware?) to navigate the labyrinth of new technologies and paradigms, such as utility computing and autonomic computing.

July 30, 2008

Feature - The future of public health: the grid gains traction

Dr. Ida A. Bengston (1881-1952) was one of the first women employed on the scientific staff of the Hygienic Laboratory of the Public Health Service, the predecessor to the National Institutes of Health. Bengston was particularly noted for her studies of bacterial toxins. This photo is symbolic of the importance of laboratory equipment to the CDC’s progress in improving worldwide public health standards. Image courtesy of CDC, Betty Partin.

Drawing on data located in Washington, Oregon and Idaho, a University of Washington researcher simulates a tuberculosis outbreak across the three states and shares her work with researchers throughout the country. Well, not so fast: seamlessly gathering and distributing information in this way is still a vision of the future, one being brought to life by Tom Savel and his colleagues at the National Center for Public Health Informatics (NCPHI), part of the Centers for Disease Control and Prevention (CDC).

July 2, 2008

Feature - GridFTP moves your data and your news

Cluster-to-cluster data movement with GridFTP using up to 64 nodes on each end of TeraGrid’s 30 Gbit/s wide-area network between Urbana, IL and San Diego, CA. Images courtesy of Raj Kettimuthu (plot) and Corky and Holly Siegel (duck).

GridFTP, a data transfer protocol optimized for high-bandwidth wide-area networks, handles an average of more than 2.5 million data transfers a day. The Large Hadron Collider, the Southern California Earthquake Center, the Relativistic Heavy Ion Collider, the Laser Interferometer Gravitational Wave Observatory, the European Space Agency, the Disaster Recovery Center in Japan and even the British Broadcasting Corporation use it. Based on the old workhorse FTP, GridFTP supports reliable, restartable data transfers and provides extensions for high-performance operation and security. It is a specification (meaning anyone can write code to implement it) whose standards are defined within the Open Grid Forum.
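
The “restartable” property deserves a small illustration. The sketch below is not GridFTP code; it just shows the underlying idea of a restart marker in plain Python: record how much of the file has already arrived and resume from that offset instead of starting over. GridFTP’s high-performance extensions add further machinery, such as parallel data streams across the wide-area network.

    import os

    def restartable_copy(src, dst, chunk=1 << 20):
        """Resume an interrupted copy: bytes already in dst are not re-sent."""
        done = os.path.getsize(dst) if os.path.exists(dst) else 0  # restart marker
        with open(src, "rb") as fin, open(dst, "ab") as fout:
            fin.seek(done)                    # skip what has already transferred
            while block := fin.read(chunk):   # copy the remainder in chunks
                fout.write(block)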

July 2, 2008

Feature - Multiple middleware

The National Institute of Nuclear Physics, or INFN, is dedicated to the study of the fundamental constituents of matter, and conducts theoretical and experimental research in the fields of subnuclear, nuclear and astroparticle physics. Image courtesy of INFN.

Coexistence

Since the birth of computational and data grids, a variety of middleware has been developed, deployed and used in a multitude of isolated e-infrastructures, and concern over middleware interoperability is growing. Recently, three researchers in Italy (Roberto Barbera of the University of Catania, Marco Fargetta of Consorzio COMETA, and Emidio Giorgio of INFN Catania) proposed a new approach to grid interoperation based on “middleware co-existence.” In this approach, different middleware is deployed on the same infrastructure, allowing users to access and share resources under a single, well-defined policy, regardless of the middleware used, and with a common authentication and authorization scheme.
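
A toy sketch of the idea, illustrative only and not the researchers’ implementation: one shared authorization decision sits in front of several middleware stacks deployed on the same infrastructure, so the same identity and policy apply no matter which stack runs the job.

    class Middleware:
        """Stand-in for one deployed middleware stack."""
        def __init__(self, name):
            self.name = name
        def submit(self, user, job):
            print(f"[{self.name}] running '{job}' for {user}")

    AUTHORIZED = {"alice"}  # one well-defined policy shared by every stack
    STACKS = [Middleware("gLite"), Middleware("UNICORE"), Middleware("Globus")]

    def submit(user, job, stack_index=0):
        """Authenticate and authorize once, then use any deployed stack."""
        if user not in AUTHORIZED:
            raise PermissionError(f"{user} is not authorized")
        STACKS[stack_index].submit(user, job)

    submit("alice", "hello-grid", stack_index=1)  # same identity, any middleware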

June 25, 2008

Grid computing walks the standard line: thinking inside the box

With many projects involved, truly seamless interoperability can be a challenge. Image courtesy of NorduGrid and Vicky White

“Standard” is often equated with “average” or “boring.” How can you innovate or invent when you’re bound by standards and regulations? How can you push the boundaries when you’re stuck inside a box? Yet how can you create something on a grand scale, something that can slot into place with other grand things, unless you create something interoperable? Something... standard. In this special feature, former iSGTW editor (and now GridTalk editor) Cristy Burne reports on this easily overlooked aspect of grid computing.

Why should we care?
Standardizing grids: the current landscape
Challenges for the future
The way forward
A standard in action: GridFTP
A “de facto” standard: VOMS
BONUS FEATURE: What does the grid community have to say about standards?

June 18, 2008

Image of the week - Matchmaking on the grid

Jobs begin in the MATCHING site at the far left of the image. The color-coded jobs are then sent to OSG compute sites, where they are stacked: yellow chips on a stack are queued jobs and green ones are running. Completed jobs are sent to the DONE site at the far right. The color bar for ranking OSG sites is visible in the upper right of the image. Image courtesy of David Borland, Mats Rynge, John McGee and Ray Idaszak, RENCI

From the user’s perspective, the distributed grid computing framework known as the Open Science Grid is a seamless interface to computing systems across the U.S. that allows jobs to be completed more quickly and efficiently than on any single computing system. Under the virtual “hood,” however, a lot happens to make the grid environment flow smoothly. This visualization, created at the Renaissance Computing Institute (RENCI), an OSG partner, helps grid engineers and programmers understand how jobs of varying types and sizes flow through the grid.
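
The life cycle the animation traces can be reduced to a tiny state machine, a simplification for illustration rather than RENCI’s code:

    from enum import Enum

    # The states a job passes through in the visualization described above.
    State = Enum("State", ["MATCHING", "QUEUED", "RUNNING", "DONE"])
    ORDER = list(State)

    def advance(state):
        """Move a job one step along MATCHING -> QUEUED -> RUNNING -> DONE."""
        return ORDER[min(ORDER.index(state) + 1, len(ORDER) - 1)]

    job = State.MATCHING
    while job is not State.DONE:
        job = advance(job)
        print(job.name)  # QUEUED, RUNNING, DONE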

June 11, 2008

Feature - Placing Kepler at the center of your computing system

A scientific workflow describes a series of structured computations that arise in scientific problem-solving. Typically, a sequence of analysis tools is invoked in a routine manner. Workflows often include sequences of format translations that ensure the tools can process each other’s outputs, and they perform routine verification and validation of the data and the outputs to ensure that the computation as a whole remains on track. (Adapted from Munindar P. Singh and Mladen A. Vouk.)

The LiDAR workflow communicates with both the portal and the Grid layers. This central workflow layer, controlled by the Kepler workflow manager, coordinates the multiple distributed Grid components in a single environment as a data analysis pipeline. It submits and monitors jobs on the Grid, and handles third-party transfer of derived intermediate products among consecutive compute clusters, as defined by the workflow description.
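
The pattern described above (tools invoked in sequence, with format translation and validation between them) reduces to something like the following Python sketch; the stage functions are invented stand-ins, not Kepler actors.

    def analyze(data):      # first analysis tool
        return {"points": data}

    def translate(output):  # format translation so the next tool can read it
        return list(output["points"])

    def validate(data):     # routine check that the pipeline is still on track
        assert data, "empty intermediate product"
        return data

    def render(data):       # final tool in the pipeline
        return f"rendered {len(data)} points"

    result = [1, 2, 3]
    # A workflow engine would also submit these stages to grid resources
    # and monitor them; here we simply chain them locally.
    for stage in [analyze, translate, validate, render]:
        result = stage(result)
    print(result)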

May 28, 2008

Feature – AmieGold, “ton amie” (“your friend”) in TeraGrid

Image courtesy of LONI

It is now easier to add a computing resource to TeraGrid. Steven Brandt, a researcher with the Louisiana State University Center for Computation and Technology, has developed a tool to help bridge local and central accounting data. End users want easy access to multiple sets of resources; this is only possible when account and allocation data from the sites are synchronized and uploaded to a central manager. AMIE, a tool that provides software and protocols for distributing account and allocation data, serves this function for TeraGrid. Resource providers must install accounting systems locally and integrate them to work with AMIE, which is not always a straightforward operation. The lucky TeraGrid resource providers that use the Gold Allocation Manager just got a break: “AmieGold provides a bridge between AMIE and Gold. TeraGrid sites can install both programs simultaneously and add their systems to the TeraGrid quickly and easily.”
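
The bridging job itself is simple to picture: translate allocation records from AMIE’s form into Gold’s and replay them against the local accounting system. The field names below are invented for illustration; they are not the real AMIE packet schema or Gold command set.

    # One hypothetical AMIE-style allocation packet.
    amie_packets = [
        {"project": "TG-ABC123", "user": "alice", "su_allocated": 50000},
    ]

    def to_gold_deposit(packet):
        """Translate an AMIE-style packet into a Gold-style deposit tuple."""
        return ("deposit", packet["project"], packet["su_allocated"])

    for packet in amie_packets:
        action, account, amount = to_gold_deposit(packet)
        print(f"gold {action}: account={account} amount={amount}")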

May 21, 2008

Opinion - Securing the multi-platform grid

There’s secure, and then there’s making things really secure. OMII project member and photographer Sergio Andreozzi shot this image in Florence. He said: “Each lock is attached by a couple who just got married, as a symbol of a strong union.” Image courtesy of Sergio Andreozzi, Istituto Nazionale di Fisica Nucleare (INFN), CNAF

All locked up tight

One of the biggest challenges facing scientists who wish to make use of multi-platform grid infrastructures today is reconciling the different security systems inherent in the various platforms. For the last two years, the Open Middleware Infrastructure Institute for Europe (OMII-Europe) has been developing a flexible framework for integrating the three dominant platforms in use in Europe: UNICORE, gLite and Globus. (OMII-Europe is a separate initiative from OMII-UK, which is often referred to as OMII for historical reasons.) A major part of this work has been “unpicking” the differences between the platforms’ security systems.
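
One flavor of that unpicking is credential normalization: each platform carries identity in its own shape, and a common layer reduces them all to one record before a shared authorization decision is made. The attribute names below are invented for the sketch; the real platforms use formats such as X.509 proxy certificates.

    def normalize(platform, credential):
        """Reduce each platform's native credential to a common identity record."""
        extractors = {
            "glite":   lambda c: c["proxy_dn"],
            "unicore": lambda c: c["ucc_user"],
            "globus":  lambda c: c["subject"],
        }
        return {"identity": extractors[platform](credential)}

    print(normalize("glite", {"proxy_dn": "/DC=org/CN=alice"}))
    print(normalize("globus", {"subject": "/DC=org/CN=alice"}))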

April 30, 2008

Announcement - Call for participation: eScience 2008, December, U.S.

The eScience 2008 conference will bring together leading international and interdisciplinary research communities, developers and users of eScience applications and enabling information technologies. Image courtesy of eScience 2008

Submissions are invited for the IEEE Computer Society Technical Committee on Scalable Computing 2008 eScience Conference, to be held 7-12 December 2008 in Indiana, U.S. Papers and proposals for tutorials, posters, exhibits, demos, workshops and special sessions are welcome. Registration will open 1 July 2008.

Topics of interest: topics for proposed papers and sessions should be related to eScience, grid and cloud computing.

Submission deadlines: papers and proposals for tutorials are due 20 July 2008.

April 30, 2008

Announcement - Registration open: Open Science Grid Users’ Meeting, June, U.S.

The Open Science Grid is a national, distributed computing grid for data-intensive research. This image was taken at the OSG All-Hands meeting, held 3-6 March 2008. The next Open Science Grid Users’ Meeting will be held 16-17 June 2008. Image courtesy of OSG

Registration is open for the Open Science Grid Users’ Meeting, to be held at Brookhaven National Laboratory in New York, U.S., 16-17 June 2008. The registration deadline is 9 May 2008. A tentative conference program is available and will continue to be updated in the coming weeks. This meeting is aimed mainly at users or prospective users of the Open Science Grid who:

are responsible for development, maintenance or operation of applications that run on the OSG infrastructure, or
are involved in any aspect of VO-specific infrastructure for OSG use (for example, workflow, meta-scheduling or portal management), or
are users or prospective users of the OSG in any capacity.

April 30, 2008

Feature - The Sicilian grid tames an “unbeatable” monster

Born from the god Typhon and the dragon Echidna, the Chimera was a terrible and unbeatable monster with a lion’s head, goat’s body and snake’s tail. The CHIMERA particle multidetector takes its name from this monster. Images courtesy of the CHIMERA Collaboration

Not a fire-breathing monster, although just as extraordinary, CHIMERA (Charged Heavy Ion Mass and Energy Resolving Array) is one of the few tools that allow study of the fascinating process of nuclear fragmentation, in which high-speed particles crumble into various pieces of exotic nuclear matter. A particle multidetector installed at INFN’s Laboratori Nazionali del Sud in Catania, Italy, CHIMERA measures and observes this process, producing conspicuous amounts of valuable data that researchers use to further technical innovations, understand the nature and behavior of matter, and much more.

April 30, 2008

Feature - The new Nimbus: first steps in the clouds

Cloud computing services provide users with flexible compute capacity, allowing each user to lease a portion of the greater “cloud” of resources. Image courtesy of exper

While grids are composed of many diverse resources, their applications usually require a very specific, validated environment. As a result, applications that work on a developer’s desktop may function “out of the box” on only a small fraction of the total number of compute resources potentially available to scientists on the grid. This is one of the primary obstacles users face in grid computing.

A grid of your virtual machines

One solution is to take the developer’s desktop and scale it to hundreds of nodes by mapping the desktop onto hundreds of virtual machines and deploying them onto grid resources. To facilitate this mode of using the grid, researchers at the Computation Institute at the University of Chicago recently announced the availability of Nimbus.
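
The “scale the desktop” idea is easy to sketch: clone one validated virtual machine image onto many nodes, so the application sees an identical environment everywhere. The client class below is a hypothetical stand-in, not the Nimbus API.

    class CloudClientStub:
        """Stand-in for a cloud client that can boot a VM image on a node."""
        def deploy(self, image, node):
            print(f"booting {image} on {node}")

    def scale_out(client, image, nodes):
        """Deploy the same validated environment to every node."""
        for node in nodes:
            client.deploy(image, node)

    scale_out(CloudClientStub(), "developer-desktop.img",
              [f"node{i:03d}" for i in range(3)])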

April 30, 2008

Opinion - The role of open source in grid computing: past, present and future

Open source software allows individual users to work with initial shared material, adapt and tailor it to suit their own specific needs, and pass their improvements back to the rest of the community. Image courtesy of Eric Gjerde

It is not long now until the first Open Source Grid and Cluster Conference, to be held in Oakland, California from 13-15 May 2008. This upcoming event got me thinking about the role of open source in grid and cluster computing: past, present and future. My involvement with open source dates to the early days of Globus, in the late 1990s. At that time, I (and my colleagues Carl Kesselman and Steve Tuecke) resolved that, in order to reduce barriers to grid technology adoption, Globus software should be freely available to anyone. To this end, we chose to release Globus software under a variant of the BSD Unix license. (Later we moved to the more modern Apache License 2.0.)

February 20, 2008

Feature - New bridge over the muddled waters of grid storage

Interoperability between the two islands of SRM and SRB opens up opportunities for different grids to share data; for example, scientists using the Worldwide LHC Computing Grid could now process data from the U.S. TeraGrid. Image courtesy of The Gentle

Two islands of storage technology have been joined, according to a presentation at the EGEE User Forum in Clermont-Ferrand, France, last week. Jens Jensen, of Rutherford Appleton Laboratory in the UK, told attendees how his team has successfully bridged two key grid technologies: the Storage Resource Broker (SRB) and Storage Resource Management (SRM). A demonstration of the bridge was first run at SC07 in Reno, Nevada, on behalf of the GridPP-led team.

Two storage islands

SRM and SRB are both ways of accessing the storage available on a grid, whether on tapes, disks or disk arrays. Usually, storage elements running SRB can’t be seen by a grid running SRM, and vice versa. Most grids run one or the other.
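
Conceptually the bridge is an adapter: it presents an SRM-style interface to the grid on one side and forwards each operation to an SRB back end on the other. The sketch below shows only that concept in miniature; it is not the GridPP team’s implementation.

    class SRBBackend:
        """Stand-in for an SRB storage element."""
        def fetch(self, obj):
            return f"<bytes of {obj} from SRB>"

    class SRMToSRBBridge:
        """Looks like an SRM storage element to callers, speaks SRB behind it."""
        def __init__(self, backend):
            self.backend = backend
        def get(self, surl):
            # translate the SRM-style name into the SRB object it maps to
            return self.backend.fetch(surl.replace("srm://", ""))

    bridge = SRMToSRBBridge(SRBBackend())
    print(bridge.get("srm://example.org/lhc/dataset001"))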

January 30, 2008

Technology - Sustainable multi-scale simulations using Grid Remote Procedure Call

The potential of GridRPC was most recently demonstrated at SC07 in November 2007, where it was used in a 60-hour simulation distributed across 1,129 TeraGrid and AIST processors on the Trans-Pacific Grid infrastructure. Image courtesy of AIST

Scientific grid environments often rely on compute resources of varying capacity, scattered across many locations, to process problems of widely varying size. So how can you write an application program that makes effective use of such distributed computing resources, especially over months or even years? One way is to ensure your programming model is flexible, scalable and fault-tolerant: your application must be able to request additional computing resources on the fly and according to availability, effectively manage a large number of parallel activities, and automatically recover from cluster-node or interconnection failures. One solution that meets these criteria is GridRPC, the Grid Remote Procedure Call.
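
In miniature, that programming model reads like the Python paraphrase below. The real GridRPC API is a C specification; this sketch mirrors only its pattern of many asynchronous remote calls with recovery by resubmission.

    from concurrent.futures import ThreadPoolExecutor
    import random

    def remote_call(task):
        """Stand-in for a remote procedure call that sometimes fails."""
        if random.random() < 0.2:
            raise RuntimeError(f"{task}: node failure")
        return f"{task}: ok"

    def run_with_recovery(tasks, workers=4, rounds=3):
        """Launch calls in parallel; resubmit failures, up to `rounds` times."""
        results, pending = {}, list(tasks)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for _ in range(rounds):
                futures = {t: pool.submit(remote_call, t) for t in pending}
                pending = []
                for task, future in futures.items():
                    try:
                        results[task] = future.result()
                    except RuntimeError:
                        pending.append(task)  # resubmit on another resource
                if not pending:
                    break
        return results

    print(run_with_recovery([f"sim-{i}" for i in range(6)]))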

November 21, 2007

Feature - Grid Technology Cookbook provides a recipe for success

The Grid Technology Cookbook has something for everyone, offering a comprehensive look at grid technologies developing around the world. Image courtesy of SURA

Just as an ordinary cookbook provides everything you need to succeed in the kitchen, SURA’s recently released Grid Technology Cookbook brings together everything you need to succeed in grid computing. Introducing basic grid concepts and case studies, then moving on to more in-depth topics such as grid programming and standards, the cookbook is intended to help researchers and technical professionals understand and implement grid technology. “The Grid Technology Cookbook can bring people up to speed quickly,” says Paul Avery, physics professor at the University of Florida and member of the Open Science Grid Executive Board. “This includes administrators, program officers, scientific project leaders and scientists who need to expand their computational scale.”

November 21, 2007

Link of the week - Online learning: the International Winter School on Grid Computing

The online International Winter School on Grid Computing requires 80 hours of commitment and is open to up to 30 participants. Applications are open now. This image comes from its summer school equivalent. Image courtesy of ICEAGE

Can’t make it to the 2008 ICEAGE International Summer School on Grid Computing in Hungary in July? Why not study online instead, by enrolling in the summer school’s winter incarnation: the online International Winter School on Grid Computing, which kicks off 6 February 2008 and runs until 5 March? The International Summer Schools on Grid Computing, run by the ICEAGE project, were established in 2003 and have proven a great success for both teaching staff and students. Introducing numerous grid technologies through lectures and practical exercises, the summer schools are a unique gathering place for globally recognised grid computing figures from all over the world.