
The conflicting, dual definitions of the purpose of an Operating System

I was posed this question in my studies. An operating system is said to have two conflicting definitions of purpose:

  1. Presenting the user with a virtual machine and a user-friendly GUI that isolates them from the low-level hardware
  2. Managing the limited resources of the hardware system as efficiently as possible

Firstly, I tend to agree that the two conflict in nature. If we think of the operating system as a whole and then think of the goal of efficiently managing resources, it does not take long to realise that an operating system is not entirely geared towards efficiency. On the surface, the limited resources of the computer are often spent on graphical effects and enhancements such as transparency and animation, and the resources required to perform these seemingly meaningless tasks can be very taxing on the system hardware.

Looking at the question again, particularly the phrase “manage in the most efficient way the (always) limited resource of the computer system” (University of Liverpool, 2010), I would say that the operating system does manage the resources of the computer quite efficiently. Regardless of the task at hand, the machine multi-tasks quite smoothly; a glance at Task Manager in Windows 7 shows just how many tasks are running at any one time, yet the computer still operates responsively and seemingly effortlessly. This does, of course, depend on the hardware specification (RAM, CPU speed and so on), but if you meet the minimum requirements, performance is generally as expected.

My conclusion is that the two goals work together quite effectively: Windows 7, to take my specific example, is very user friendly while still managing the computer's resources efficiently and quickly (unlike its predecessor, Vista, which did not manage resources as well). The concept of a process is vital to the success of both managing the hardware efficiently and providing a user-friendly environment. A process is defined as a dynamic activity whose properties change as time progresses (Brookshear, 2009, p. 134); coupled with multiprogramming, it is the means by which different activities and resources are managed and organised. Without it there would be chaos, and I believe we would be sent back to the days of batch processing single tasks.
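To make the idea of a process more concrete, here is a minimal Python sketch (my own illustration, not something from Brookshear or the course material) in which three processes run side by side while the operating system's scheduler decides when each one gets the CPU:

    import multiprocessing
    import os
    import time

    def worker(name, iterations):
        # Each process is a dynamic activity: its state (here, the loop counter)
        # changes as time progresses, while the OS scheduler decides when it runs.
        for i in range(iterations):
            print(f"{name} (pid {os.getpid()}): step {i}")
            time.sleep(0.1)

    if __name__ == "__main__":
        # Three independent processes share the CPU under the OS's control,
        # much like the many tasks visible in Task Manager.
        procs = [multiprocessing.Process(target=worker, args=(f"task-{n}", 3))
                 for n in range(3)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Each process keeps its own state and advances independently, which is exactly the “dynamic activity” the definition describes.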

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.


Would it be a good idea to carry type designations in data cells of hardware?

I would like to answer this question without concentrating too much on polymorphism itself. Looking at this suggestion, I immediately think back to the RISC vs. CISC argument. By adding the type designation to the data held by the hardware device, we lean towards the RISC attitude to computing instructions: supplying the CPU with the type designation directly from the hardware device reduces the computation the CPU has to perform, because that work has already been done on the hardware side.
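To make the suggestion concrete, here is a small Python sketch of one hypothetical encoding (the field widths and tag values are my own assumptions, not part of the question): each “data cell” carries a few tag bits identifying the type alongside the value itself, so the CPU could read the type directly rather than having to infer it.

    # Hypothetical layout: 4 tag bits identifying the type, 28 bits for the value.
    TAG_INT, TAG_CHAR, TAG_FLOAT = 0x1, 0x2, 0x3

    def pack_cell(tag, value):
        # Store the tag in the top 4 bits and the value in the low 28 bits.
        return (tag << 28) | (value & 0x0FFFFFFF)

    def unpack_cell(cell):
        # The type comes straight out of the cell, no inference needed.
        return cell >> 28, cell & 0x0FFFFFFF

    cell = pack_cell(TAG_CHAR, ord("A"))
    tag, value = unpack_cell(cell)
    print(tag == TAG_CHAR, chr(value))   # True A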

While one could argue that this would reduce the number of errors, I also see room for errors to increase. My reason comes from the chapters in Brookshear's book (referenced below), which state, in very abstract summary, that electrical interference may alter the state of a bit and thereby corrupt the data being stored or transmitted (Brookshear, 2009). The more we rely on outside components to hold imperative data and to transfer it between components, the more we need to cater for errors. If the work remained in the CPU, it would only need to receive the “dumb” data from the devices, leaving less scope for errors originating in the hardware component.
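A simple defence against exactly this kind of single-bit corruption is the parity bit. The sketch below (my own illustration in Python, not taken from the book) appends an even-parity bit to a group of bits and shows how one flipped bit is detected:

    def add_even_parity(bits):
        # Append a parity bit so the total number of 1s is even.
        parity = sum(bits) % 2
        return bits + [parity]

    def check_even_parity(bits):
        # A single flipped bit makes the count of 1s odd, exposing the error.
        return sum(bits) % 2 == 0

    word = add_even_parity([1, 0, 1, 1, 0, 0, 1])
    print(check_even_parity(word))        # True: no corruption
    word[2] ^= 1                          # simulate interference flipping a bit
    print(check_even_parity(word))        # False: error detected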

Another issue I see is that more data would have to flow along the computer's controllers and buses. The rate of transfer along the bus is more than likely far slower than the time the CPU would need to compute the instruction together with its types.
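As a rough back-of-the-envelope figure (the word and tag sizes here are assumptions of mine, purely for illustration), tagging every word noticeably increases the traffic on the bus:

    # Assume a 32-bit data word and a hypothetical 8-bit type tag per word.
    word_bits = 32
    tag_bits = 8

    overhead = tag_bits / word_bits
    print(f"Extra bus traffic per word: {overhead:.0%}")   # 25%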

My final issue with this notion is financial feasibility. As more programming goes into the hardware devices, more complexity goes into the software developed to control them, and the costs would no doubt increase as well.

With all that said, I think that in an ideal world this idea would be a good way to share the CPU's load.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.