Autonomic Computing: Learning to Repair Systems Effectively
The goal of this research is to integrate established machine learning methods into an autonomic computing system. The current RAINBOW architecture models a system in which an adaptation engine examines the system state to diagnose problems and selects a course of action based on pre-programmed expertise. In many real-world settings, however, expert knowledge is not readily available, and the system must instead learn proper actions without a priori knowledge. This research seeks to develop a learning engine that learns the adaptations needed to repair problems in such systems.
Specifically, reinforcement learning methods make it possible for a system to learn proper actions by trying them out and observing how well they perform. I will compare Q-learning, SARSA, and actor-critic learning on our RAINBOW simulated system. Since these algorithms approach reinforcement learning in different ways, comparing their success rates and learning speeds across various scenarios will lead to useful conclusions about their efficacy. My research aims to clarify the benefits and costs associated with each algorithm.
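To make the comparison concrete, the following is a minimal sketch of the tabular update rules behind two of the algorithms being compared. The states, actions, and rewards are illustrative placeholders (not taken from the RAINBOW system), and the hyperparameter values are assumptions; the point is the off-policy versus on-policy distinction between Q-learning and SARSA.

```python
import random

ALPHA = 0.1    # learning rate (assumed value)
GAMMA = 0.9    # discount factor (assumed value)
EPSILON = 0.1  # exploration rate (assumed value)

# Hypothetical repair actions an adaptation engine might choose among.
ACTIONS = ["restart_server", "add_replica", "reroute_traffic"]


def epsilon_greedy(Q, state, rng):
    """With probability EPSILON explore a random action; otherwise exploit."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))


def q_learning_update(Q, s, a, r, s_next):
    """Off-policy: bootstrap from the best next action, whatever the policy does."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        r + GAMMA * best_next - Q.get((s, a), 0.0))


def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy: bootstrap from the action the policy actually takes next."""
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        r + GAMMA * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0))
```

Because SARSA's target depends on the action actually selected (including exploratory ones) while Q-learning's does not, the two can learn measurably different policies in risky scenarios, which is one source of the success-rate differences this research measures.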