The rise of autonomic computing
How do you maximize the amount of oil you can extract from an oil field?
One way is to use the Instrumented Oil Field, or IOF, an application coordinated by the Center for Subsurface Modeling at the University of Texas and used by a consortium of academic and industry researchers. It uses a network of sensors embedded underground to monitor the state of the reservoir while the oil is being extracted.
The IOF identifies deposits of oil that can be extracted safely and economically, classifying those that cannot be reached as “bypassed oil.” However, if the model relies purely on fixed initial conditions, up to 60% of the oil can be inaccurately deemed unreachable.
One way around this problem is autonomic computing, whose goal is to build systems and applications that manage themselves by responding to data. Such systems configure and adapt themselves in real time, much like a self-regulating biological ecosystem.
IOF is just one example of autonomics applied successfully. In a lecture given during the EGEE 5th User Forum last week, Manish Parashar, founding director of the Center for Autonomic Computing and The Applied Software Systems Laboratory at Rutgers University, said that this approach can also be applied to grid infrastructures, which are becoming so intricate that they are not achieving their full potential.
When cloud and multicore are added to the mix, the resulting complexity can even hamper rather than help scientists’ efforts to build their experiments.
And as science becomes more and more data-driven, the computer systems and infrastructures that support it grow increasingly complex. Just as distributed computer networks are being used to investigate biology, treating grids as cyber-ecosystems in their own right, at the level of both software and hardware, could change how scientific applications are developed and even how science itself is done.
Parashar stresses that autonomics is more than just adaptive coding. Rather than simply optimizing code and dealing with failures as they occur, systems built on an autonomic framework can react to failures and adapt to changing requirements. For example, you may normally want your grid to optimize performance, but at other times you may want to prioritize reliability or security.
By automating this process according to human-directed policies, the cyber-ecosystem can manage its own infrastructure, making it far more flexible.
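The idea of human-directed policies driving self-management can be sketched in a few lines of code. This is a hypothetical illustration, not Parashar's actual framework: the policy names, configuration knobs, and failure-rate threshold are all assumptions chosen for clarity.

```python
# Hypothetical sketch of policy-driven self-management. The policy
# names, tuning parameters, and the 5% failure threshold are
# illustrative assumptions, not taken from any real autonomic system.

POLICIES = {
    # Each human-directed policy maps an objective to tuning choices.
    "performance": {"replication": 1, "checkpoint_interval_s": 600},
    "reliability": {"replication": 3, "checkpoint_interval_s": 60},
}

class AutonomicManager:
    """Reconfigures the system in response to monitored conditions."""

    def __init__(self, policy="performance"):
        self.policy = policy
        self.config = dict(POLICIES[policy])

    def observe(self, metrics):
        # Self-healing rule: if node failures spike, switch to the
        # reliability policy regardless of the current objective.
        if metrics.get("node_failure_rate", 0.0) > 0.05:
            self.policy = "reliability"
        self.config = dict(POLICIES[self.policy])
        return self.config

mgr = AutonomicManager("performance")
mgr.observe({"node_failure_rate": 0.01})        # healthy: stays performance-tuned
cfg = mgr.observe({"node_failure_rate": 0.10})  # failures spike: self-heals
```

The point of the sketch is that the administrator states the objective once, as a policy, and the system translates observations into configuration changes on its own.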
Parashar suggests three practical motivations for autonomics. First, the volume and complexity of data produced within grid structures can be time-consuming for administrators to absorb and react to. Second, the costs of hardware and power are growing. Third, as we increasingly rely on e-infrastructures, even small failures can have drastic effects.
Autonomics also offers a way to combine grids and clouds effectively, because it can help decide what to run where, when, and how. For example, clouds can serve as an accelerator within the computing infrastructure, reducing application runtime within the user's budget requirements. Alternatively, clouds can act as an automated failsafe in the event of a failure within the grid infrastructure.
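The two roles for the cloud described above, accelerator and failsafe, amount to a placement decision. A minimal sketch of that decision logic, assuming a made-up cost model and function name purely for illustration:

```python
# Illustrative sketch of an autonomic grid/cloud placement decision.
# The function name, cost model, and inputs are assumptions for
# illustration; they do not come from Parashar's actual framework.

def place_job(grid_eta_s, deadline_s, cloud_cost, budget, grid_up=True):
    """Decide where a job runs: grid, cloud burst, or cloud failover."""
    if not grid_up:
        # Failsafe role: the cloud absorbs work when the grid is down.
        return "cloud-failover"
    if grid_eta_s > deadline_s and cloud_cost <= budget:
        # Accelerator role: burst to the cloud to meet the deadline,
        # but only if doing so fits the user's budget.
        return "cloud-burst"
    return "grid"

# Grid would finish too late, and the cloud run is affordable:
decision = place_job(grid_eta_s=7200, deadline_s=3600,
                     cloud_cost=5.0, budget=10.0)
print(decision)  # prints "cloud-burst"
```

In a real autonomic system this decision would be re-evaluated continuously as conditions change, rather than made once at submission time.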
For Parashar, e-infrastructures like EGI and TeraGrid provide an ecosystem that offers scientists new mechanisms for building experiments. “It’s about decreasing the gap between the user and the tools we build,” says Parashar.
“To me it just seems pragmatic and obvious.”
—Seth Bell, for iSGTW