
Introduction to Autonomic Computing

A problem exists in the world of distributed computing management, and that problem is complexity. Ever since the first computing device was created there has been demand, and as demand and usage continue to rise, so do the complications of management. Enter autonomic computing. With the main goal of combating this complexity, autonomic computing has been touted as a way to keep up with the constant barrage of use. The problem is, there isn't yet a known working model.

Autonomic computing?

In a world where more services and resources are quickly taking on automated features, it was only a matter of time before computing itself became the next thing to pursue. Simply put, the intention of autonomic computing is to enable computing resources to manage the system automatically. In addition, it will sense unforeseen issues and adapt to changes as needed. And updating? It takes care of that too.
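
To make "sense and adapt" concrete, here is a minimal self-healing sketch in Python. The names check_health() and restart_service() are invented stand-ins for real probes and effectors (health endpoints, process supervisors, and so on), not an established API.

import random
import time

# Hypothetical self-healing loop: check_health() and restart_service()
# are illustrative stand-ins, not a real autonomic computing API.

def check_health(service: str) -> bool:
    # Stand-in sensor: a real manager might hit a /health endpoint.
    return random.random() > 0.2  # simulate occasional failures

def restart_service(service: str) -> None:
    # Stand-in effector: a real manager might call systemd or Kubernetes.
    print(f"restarting {service}")

def watchdog(service: str, checks: int = 5, interval: float = 1.0) -> None:
    # Sense failures and recover without operator intervention.
    for _ in range(checks):
        if not check_health(service):  # sensor: detect the fault
            restart_service(service)   # effector: self-heal
        time.sleep(interval)

watchdog("billing-api")

The point isn't the dozen lines themselves but who does the work: the fault is detected and handled by the system, not by a person watching a dashboard.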

What's more, the concept includes the ideas of self-configuration, self-healing, self-optimization, and self-protection. Those capabilities come from sensors, effectors, and programmed knowledge, along with a planner/adapter that applies programmed policies based on perceptions of the system itself and its environment.
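
That sensor/planner/effector arrangement is usually described in terms of IBM's MAPE-K reference loop: monitor, analyze, plan, and execute over a shared knowledge base. Below is a one-pass sketch of such a loop in Python; the Policy class, the metric names, and the thresholds are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Policy:
    metric: str       # what the sensors report
    threshold: float  # when to act
    action: str       # what the effectors should do

# Programmed knowledge: the policies the planner consults.
KNOWLEDGE = [
    Policy("cpu_load", 0.85, "scale_up"),           # self-optimization
    Policy("error_rate", 0.05, "restart"),          # self-healing
    Policy("intrusion_score", 0.50, "quarantine"),  # self-protection
]

def monitor() -> dict[str, float]:
    # Sensors: in practice, metrics agents feed in live readings.
    return {"cpu_load": 0.91, "error_rate": 0.01, "intrusion_score": 0.10}

def analyze_and_plan(readings: dict[str, float]) -> list[str]:
    # Planner/adapter: compare perceptions against programmed policies.
    return [p.action for p in KNOWLEDGE if readings.get(p.metric, 0.0) > p.threshold]

def execute(actions: list[str]) -> None:
    # Effectors: stand-ins for real reconfiguration hooks.
    for action in actions:
        print(f"executing {action}")

execute(analyze_and_plan(monitor()))  # one pass of the control loop

Self-configuration would show up as policies that rewrite KNOWLEDGE itself, and that's also where the hard part lives: the policies have to anticipate situations nobody wrote down.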

Sound like HAL 9000 much?

What does this do?

One of the objectives behind autonomic computing is to free administrators from routine tasks, allowing them to focus on higher-level work. At the same time, it's assumed that autonomic computing will make up for the severe shortage of highly trained administrators. It's another case of demand outrunning the number of administrators available to maintain computing systems.

Moreover, autonomic computing also addresses cost, time consumption, and the frequency of errors. Obviously, it takes time and effort to manually control distributed resources. Turning to an automated system means lower costs and more time for other tasks. A reduced tendency toward errors can also be added to the mix. This isn't to say that errors won't occur, just that they become less likely. Either way, the benefits have been stressed time and time again.

So where is it?

The idea of autonomic computing was first introduced by IBM in 2001, but so far there haven't been any real-world examples of such a setup. The obstacle may sit at the core of the whole effort. How so? Complexity in distributed systems is the problem autonomic computing is meant to solve, and it's also the factor getting in the way of building it. Computing systems are in a constant state of change. As a result, their complexity changes too and can become even more daunting. Each distributed system is made up of diverse and varying pieces, and on top of that are all the tasks each system is programmed to govern.

Furthermore, technology involves so many different elements and factors that it takes time to make everything work in a particular way, and that's before taking every possible component into consideration.

In the future...

Seamless and invisible. That's what autonomic computing is supposed to be in its end stages. Nevertheless, past and current endeavors haven't quite reached that point. Still, that's not to say it isn't possible for future undertakings. The necessity of such a system has been stressed in multiple situations. Who knows, it might just come to fruition later on; if not for our generation, then maybe for future generations.

