Motivation: Currently, the most popular approach in the race towards teraflops is the purchase of gigantic computer farms at outrageous cost. Most of these supercomputers are financed by public funds, since grant organizations strive to encourage the sharing of computing resources among different research entities. Unfortunately, sharing resources implies managing user accounts, permissions and security, which makes such sharing rare in practice. In parallel, numerous office computers could be linked to an external supercomputer if one could guarantee the owner both the security of their environment and the validity of the computed tasks. These two seemingly different contexts raise the same confidence issues. A confidence-management mechanism is therefore required, one that operates at a level above the user accounts of the individual computers and computer farms. We thus aim to take the available authentication and encryption mechanisms (SSL, RSA, AES, etc.) and apply them to a meta-scheduler for distributed computing, in an effort to give today's disconnected research centres a supercomputer on a national scale.
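To make the confidence issue concrete, the following minimal Python sketch shows one way a task description could be signed before submission and verified on the worker side, so that a worker only runs tasks from a trusted submitter. It is an illustration only, not the proposed design: it uses a shared-secret HMAC standing in for the RSA/SSL mechanisms mentioned above, and all names (`sign_task`, `verify_task`, the key handling) are hypothetical.

```python
import hashlib
import hmac
import json

def sign_task(task: dict, key: bytes) -> str:
    """Sign a serialized task description.

    Hypothetical scheme: HMAC-SHA256 with a shared secret, standing in
    for the public-key signatures (RSA) a real deployment would use.
    """
    payload = json.dumps(task, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_task(task: dict, signature: str, key: bytes) -> bool:
    """Worker-side check: accept the task only if its signature matches."""
    expected = sign_task(task, key)
    return hmac.compare_digest(expected, signature)

key = b"shared-secret"  # in practice: per-site credentials, not a hard-coded key
task = {"job": "simulate", "nodes": 8}
sig = sign_task(task, key)

print(verify_task(task, sig, key))             # an untampered task is accepted
print(verify_task({"job": "evil"}, sig, key))  # a modified task is rejected
```

The same structure extends to the reverse direction: the submitter can check that a returned result is bound to the task it signed, which addresses the validity-of-computed-tasks concern raised above.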
Process coordination and scheduling is not a new science. Numerous research projects have modelled the behaviour of a computer with respect to one or more processors. In the area of distributed computing, the behaviour of tightly coupled networks, with low latencies and predictable bandwidths, has been modelled in detail. But what should be expected when several computer farms located one hundred kilometres apart are linked together? What would be the ideal arrangement for such a heterogeneous system? We aim to parametrize the influential factors so as to develop a suitable operating system.