Categories
Computing Development Research Software

What are good problem-solving techniques for developing algorithms?

From my personal experience, I often draw on past experiences to aid me in the problem-solving process; I find that experience is one of the most valuable tools available. At the same time, one should not be confined to past experience, as new methods of problem solving are always being discovered. Discovering and developing new ways of doing things is, in fact, one of the main points of the IT industry.

Another method of problem solving I use is researching similar problems on the Internet. The Internet is a fantastic resource for finding users with similar problems, or who have already solved algorithms similar to the ones I am trying to develop. If the exact problem is proving difficult to find then, depending on the size of the algorithm, it is often still possible to research, or have experience with, subsets of the algorithm, which is a good way to get your ‘foot in the door’, as described by Brookshear (2009, p.218).

Perhaps too obvious to mention, but I feel it is worth saying: all of the above contributes to the method of trial and error. It is common to try a possible solution that does not work and then move on through a few more before finding the most suitable, or correct, algorithm.
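
To make the trial-and-error idea concrete, here is a minimal sketch in Python: a handful of candidate solutions are run against a small set of test cases and the first one that passes is kept. The candidate functions and tests are hypothetical examples of mine, not anything prescribed by Brookshear.

```python
# A minimal sketch of trial and error: try candidate algorithms against a
# small set of test cases and keep the first one that passes.
# The candidates and tests here are hypothetical examples.

def candidate_reverse_slice(items):
    return items[::-1]

def candidate_reverse_wrong(items):
    return sorted(items)          # a "solution" that turns out not to work

def passes_tests(candidate, tests):
    return all(candidate(given) == expected for given, expected in tests)

tests = [([1, 2, 3], [3, 2, 1]), (["a", "b"], ["b", "a"])]

for candidate in (candidate_reverse_wrong, candidate_reverse_slice):
    if passes_tests(candidate, tests):
        print("Keeping", candidate.__name__)
        break
    print("Discarding", candidate.__name__)
```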

As mentioned by Brookshear (2009, p.216), the mathematician G. Polya developed an outline of the problem-solving process, which consists of:

  1. Understand the problem.
  2. Devise a plan for solving the problem.
  3. Carry out the plan.
  4. Evaluate the solution for accuracy and for its potential as a tool for solving other problems.

It is important to remember, as Brookshear (2009, p.216) also describes: “we should emphasize that these phases are not steps to be followed when trying to solve a problem but rather phases that will be completed sometime during the process”.

Two other methods mentioned by Brookshear (2009, p.220) are the ‘top-down methodology’, or ‘stepwise refinement’, which is the process of “first breaking the original problem at hand in terms of several subproblems”, and the opposite, ‘bottom-up methodology’, in which we approach the problem starting with the specifics and working up to the general.
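
To illustrate the top-down methodology, here is a small sketch of stepwise refinement in Python. The problem (producing a simple summary report) is a hypothetical example of mine: the original problem is first stated in terms of subproblems, and each subproblem is then refined into its own small function. A bottom-up approach would instead write and test the small functions first and only later compose them into the overall solution.

```python
# A sketch of top-down design ("stepwise refinement"): the original problem is
# first expressed in terms of subproblems, and each subproblem is then refined
# into its own small function. The report/statistics problem here is hypothetical.

def load_numbers(text):
    """Subproblem 1: turn raw input into data we can work with."""
    return [float(token) for token in text.split()]

def summarise(numbers):
    """Subproblem 2: compute the statistics we need."""
    return {"count": len(numbers), "total": sum(numbers),
            "mean": sum(numbers) / len(numbers)}

def format_report(stats):
    """Subproblem 3: present the result."""
    return ", ".join(f"{key}={value}" for key, value in stats.items())

def report(text):
    """The original problem, stated in terms of its subproblems."""
    return format_report(summarise(load_numbers(text)))

print(report("3 5 7 9"))   # count=4, total=24.0, mean=6.0
```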

Another method I sometimes use is to verbalise the problem at hand. Ericsson and Simon (1980) state: “Within the theoretical framework of human information processing, we discuss different types of processes underlying verbalization and present a model of how subjects, in response to an instruction to think aloud, verbalize information that they are attending to in short-term memory (STM). Verbalizing information is shown to affect cognitive processes only if the instructions require verbalization of information that would not otherwise be attended to”. While this study refers to a somewhat different question-and-answer scenario than the one we are discussing, I do believe it is relevant, and that it is sometimes effective to speak aloud when trying to solve a problem.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Ericsson, K. A. & Simon, H. A. (1980) ‘Verbal Reports as Data’, Psychological Review, 87 (3), pp.215-251, PsycNET [Online]. Available from: http://psycnet.apa.org.ezproxy.liv.ac.uk/doi/10.1037/0033-295X.87.3.215 (Accessed: 3 October 2010).

Categories
Computing Operating Systems Research

The conflicting, dual definitions of the purpose of an Operating System

I was posed this question in my studies. An operating system is said to have two conflicting definitions of purpose:

  1. It must present the user with a virtual machine and a user-friendly GUI that isolates them from the low-level hardware.
  2. It must manage the limited resources of the hardware system efficiently.

Firstly, I tend to agree that the two conflict in nature. If we think of the operating system as a whole, and then of the goal of efficiently managing resources, it does not take long to realise that the nature of an operating system is not entirely geared towards efficiency. On the surface, the limited resources of the computer are often spent on things like graphical effects and enhancements such as transparency and animation, and the resources required to perform these seemingly meaningless tasks are often very taxing on the system hardware.

If I look at the question again, in particular the phrase “manage in the most efficient way the (always) limited resource of the computer system” (University of Liverpool, 2010), I would say that the operating system does manage the resources of the computer quite efficiently. Regardless of the task at hand, the computer manages to operate quite smoothly while multi-tasking; just looking at the Task Manager in Windows 7 shows how many tasks are running at any one time, yet the computer still operates responsively and seemingly effortlessly. This does, of course, depend on the hardware specification of the computer, such as RAM and CPU speed, but if you follow the minimum requirements, performance is generally as expected.

My conclusion is that the two do work together quite effectively: in this specific example, Windows 7 as an operating system is very user-friendly while still managing the computer’s resources efficiently and quickly (unlike its predecessor, Vista, which did not manage resources as well). The concept of a process is absolutely vital to the success of both managing the hardware efficiently and providing a user-friendly environment. A process is defined as a dynamic activity whose properties change as time progresses (Brookshear, 2009, p.134); coupled with multiprogramming, it is the way in which different activities and resources are managed and organised. Without this there would be chaos, and I believe the computer would be sent back to the days of batch processing single tasks.
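
As a toy illustration of multiprogramming, the sketch below interleaves a few “processes” using round-robin time slices. It is a deliberately simplified model of my own and not how Windows 7 actually schedules work.

```python
from collections import deque

# A toy illustration of multiprogramming: each "process" is a dynamic activity
# whose remaining work changes over time, and a round-robin scheduler gives
# each one a small time slice in turn. This is a simplified sketch, not a real
# operating system scheduler.

processes = deque([("browser", 3), ("editor", 2), ("antivirus", 4)])
TIME_SLICE = 1

while processes:
    name, remaining = processes.popleft()
    remaining -= TIME_SLICE                  # the process runs for one slice
    if remaining > 0:
        processes.append((name, remaining))  # not finished: back of the queue
        print(f"{name} ran, {remaining} unit(s) of work left")
    else:
        print(f"{name} finished")
```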

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Research

Would it be a good idea to carry type designations in data cells of hardware?

I would like to answer this question without concentrating too much on polymorphism itself. Looking at this suggestion, I immediately think back to the RISC vs. CISC argument. Essentially, by carrying the type designation with the data in the hardware device, we are leaning towards the RISC attitude to computing instructions; I say this because giving the CPU the type designation directly from the hardware device decreases the amount of computation the CPU must perform and allows it to get on with what it needs to do, because part of the work has already been done on the hardware side.

While this could potentially be a good idea, as one could argue that it would reduce the number of errors that occur, I also see scope for an increased number of errors. I say this with reference to the chapters in Brookshear (2009), where (in very abstract summary) it is stated that electrical interference may alter the state of a bit, which could corrupt the data being stored or transmitted. The more we rely on outside components holding imperative data and having it transferred to and from other components, the more we need to cater for the occurrence of errors. If the work remained in the CPU, it would only need to receive the “dumb” data from the devices and there would be less scope for errors coming from the hardware component.
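
To illustrate the trade-off, here is a small sketch of a data cell that carries a type tag alongside its value, protected by a single even-parity bit; a single flipped bit is detected by the parity check. The 2-bit tag encoding is invented purely for illustration and is not taken from Brookshear.

```python
# A small sketch of the concern above: a data "cell" that carries a type tag in
# its high bits, protected by a single (even) parity bit. The wider the cell,
# the more bits there are that interference could flip in transit.
# The 2-bit tag encoding here is invented purely for illustration.

TAGS = {"int": 0b00, "char": 0b01, "float": 0b10}

def parity(bits):
    return bin(bits).count("1") % 2

def encode(tag, value8):
    """Pack [parity | 2-bit tag | 8-bit value] into one small word."""
    word = (TAGS[tag] << 8) | (value8 & 0xFF)
    return (parity(word) << 10) | word

def check(word):
    """Return True if the stored parity matches the payload."""
    payload = word & 0x3FF
    return (word >> 10) & 1 == parity(payload)

cell = encode("int", 42)
print(check(cell))           # True: cell arrived intact
print(check(cell ^ 0b100))   # False: a single flipped bit is detected
```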

Another issue I see with this is that the volume of data flowing along the controllers and buses of the computer would increase. The rate of transfer along the bus is more than likely far slower than the time the CPU would need to compute the instruction along with its types itself.

My final issue with this notion is financial feasibility. As more programming would go into the hardware devices, more complexity would also go into the software developed to control those devices, and costs would no doubt increase as well.

With all that said, I think that in an ideal world this idea would be a good way to share the load of the CPU.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Research Software

RISC vs. CISC and Programming

CISC stands for ‘complex instruction set computer’; this architecture is based on the argument that a CPU is able to perform large numbers of complex instructions and therefore should be equipped to do so, even if they are redundant (Brookshear, 2009, p.85). RISC stands for ‘reduced instruction set computer’; this architecture is based on the argument that the CPU should have a minimal set of instructions, because once a CPU has a certain minimum set of instructions, adding any more does not increase its capabilities and only decreases the speed and efficiency of the machine (Brookshear, 2009, p.85).
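
As a toy illustration of the two philosophies, the sketch below contrasts one “complex” instruction that performs a memory-to-memory multiply-accumulate in a single step with a sequence of “reduced” load/compute/store steps that achieve the same result. The instruction names and the machine model are invented for illustration.

```python
# A toy contrast between the two philosophies. The "CISC-style" machine offers
# one complex instruction that multiplies two memory operands and accumulates
# the result, while the "RISC-style" machine only offers simple load, compute
# and store steps and builds the same effect from a sequence of them.
# The instruction names and machine model are invented for illustration.

memory = {"a": 6, "b": 7, "acc": 0}

def cisc_mac(dest, src1, src2):
    """One complex instruction: memory-to-memory multiply-accumulate."""
    memory[dest] += memory[src1] * memory[src2]

def risc_program(dest, src1, src2):
    """The same effect, expressed as a sequence of simple steps."""
    r1 = memory[src1]            # LOAD  r1, src1
    r2 = memory[src2]            # LOAD  r2, src2
    r3 = memory[dest]            # LOAD  r3, dest
    r4 = r1 * r2                 # MUL   r4, r1, r2
    r3 = r3 + r4                 # ADD   r3, r3, r4
    memory[dest] = r3            # STORE dest, r3

cisc_mac("acc", "a", "b")
print(memory["acc"])             # 42
risc_program("acc", "a", "b")
print(memory["acc"])             # 84
```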

The strengths of RISC over CISC become very apparent when reading through the paper by George (1990). In his paper he states that while CISC is more popular and will no doubt remain so in the near future, due to backward compatibility with current software, RISC has far more advantages going forward because of the “constantly changing body of knowledge used for processor design”.

From my readings I believe that CISC processing is best suited to RAD (Rapid Application Development). In this day and age, at the pace of the IT and software industry, it helps us concentrate on breaking new boundaries in the development of less complex software and design, by taking away the tedious work of programming operations that are run-of-the-mill requirements for everyday applications and allowing us to concentrate on more complex, new developments. The issue that may hamper this, however, is that CISC processors are less efficient and take longer to process than RISC, and speed of execution is one of the most important factors in software today.

RISC processors give us an almost “clean slate” when programming. With a minimal instruction set, new methods of performing even the simplest operations can be developed at higher speed than what has already been discovered and pre-programmed into the CPU, as in CISC processors. RISC also allows more complex, low-level development (more than likely larger, more advanced systems) with more efficient performance, because there are no excessive and potentially unused instruction sets slowing things down.

As far as future replacement goes, the CISC chip may be overtaken in popularity by the RISC chip, as stated in the paper by George (1990). I tend to agree, but would perhaps add the ability for custom instructions to be stored in the CPU, which would allow user-, operating-system- or software-defined instructions to be added dynamically to optimise a particular system configuration.
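
To show what I mean by dynamically added instructions, here is a sketch that models the idea as a tiny interpreter whose instruction table can be extended at run time. It is purely illustrative; real CPUs do not expose their instruction sets in this way.

```python
# A sketch of the idea of dynamically added instructions, modelled as a tiny
# interpreter whose instruction table can be extended at run time. This is
# purely illustrative; real CPUs do not expose their instruction sets this way.

instruction_set = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
}

def register(name, operation):
    """Software or the operating system installs a custom instruction."""
    instruction_set[name] = operation

def execute(name, *operands):
    return instruction_set[name](*operands)

print(execute("ADD", 2, 3))        # 5, built-in instruction

# A hypothetical system-specific optimisation: fused multiply-add.
register("FMA", lambda a, b, c: a * b + c)
print(execute("FMA", 2, 3, 4))     # 10, custom instruction added dynamically
```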

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

George, A.D. (1990) ‘An overview of RISC vs. CISC’, Proceedings of the Twenty-Second Southeastern Symposium on System Theory, Cookeville, 11-13 March. IEEE, pp.436-438.

Categories
Computing Research Software

Software Architects – How will this position change in 10 years?

The question I would like to discuss is: how do I think the position of a Software Architect will change in the next 10 years?

In ten years’ time I believe this position will definitely still exist, and I am quite sure there will be many more people holding it than today.

I strongly believe that in the future software architects will have to base their software planning more heavily on developing middleware and collaborative systems (“cloud computing”) than on developing brand new systems. In an article by Pokharel and Park (2009) it is mentioned that “Cloud computing is the future generation of computing”. Today we see a number of large applications becoming increasingly popular, and in my personal experience developing new software that “talks to” external systems is becoming more and more common. In one project I worked on, it was a potential deal-breaker if our software was unable to talk to SAP.

In a paper by Medvidovic (2002), it is stated that “While architecture is an early model of a system that highlights the system’s critical conceptual properties using high-level abstractions, middleware enables that system’s realization and ensures the proper composition and interaction of the implemented components”, followed by “The relationship between the two areas and their respective shortcomings suggest the possibility of coupling architecture modelling and analysis approaches with middleware technologies in order to get ‘the best of both worlds’”. I believe this statement strongly supports my view.
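
To make this concrete, here is a sketch of an architectural component that talks to an external system only through a thin middleware/adapter layer, in the spirit of the coupling Medvidovic describes. All of the class and method names, including the stand-in ERP client, are invented for illustration.

```python
# A sketch of coupling an architectural component to an external system through
# a thin middleware/adapter layer. The component only depends on the small
# OrderGateway interface; the adapter hides the external system's details.
# All class and method names (including the fake ERP client) are invented.

class FakeErpClient:
    """Stand-in for an external system's API (e.g. an ERP such as SAP)."""
    def submit_document(self, doc_type, payload):
        return {"status": "OK", "type": doc_type, "payload": payload}

class OrderGateway:
    """The interface the architecture is designed against."""
    def place_order(self, order):
        raise NotImplementedError

class ErpOrderAdapter(OrderGateway):
    """Middleware: translates the architectural model into ERP calls."""
    def __init__(self, client):
        self.client = client

    def place_order(self, order):
        payload = {"items": order["items"], "total": sum(order["items"].values())}
        return self.client.submit_document("SALES_ORDER", payload)

gateway = ErpOrderAdapter(FakeErpClient())
print(gateway.place_order({"items": {"widget": 19.99, "gadget": 5.00}}))
```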

The current focus of software architecture, in my personal experience, does seem to be aimed at creating largely “stand-alone” components whose only real dependency is the operating system the software will be running on. The large, successful developments of today will be the focus of the middleware of tomorrow.

I believe that in 10 years’ time the training required to perform the duties of a Software Architect will be much the same as today, but will include new training on existing systems and how to make them work together efficiently via APIs and middleware, as well as a requirement for more experience in developing these modules.

As far as automation goes, I believe the current movement towards the use of programming frameworks will only increase, while low-level programming will be far less of a requirement when planning the development of a system. Using the components of a framework (one can think of a framework as a library of “functions” that perform common tasks so the programmer does not have to write them) enables a much larger focus on the bigger picture and allows more complex systems to be developed.
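
As a small illustration of this point, the sketch below uses Python’s standard library as the “library of functions”: parsing CSV data is a common task that the library already solves, leaving the programmer free to focus on the domain logic. The data is a made-up example, and the csv module simply stands in for the kind of framework component I mean.

```python
import csv, io

# A small illustration of the framework point, using Python's standard library
# as the "library of functions": parsing CSV data is a common task the library
# already solves, so the programmer can focus on the bigger picture (here, just
# the domain logic of totalling an order). The data is a made-up example.

raw = "item,price\nwidget,19.99\ngadget,5.00\n"

# Without the library we would be hand-rolling splitting, quoting rules, etc.
# With it, the common task is one call:
rows = list(csv.DictReader(io.StringIO(raw)))

total = sum(float(row["price"]) for row in rows)
print(f"{len(rows)} items, total {total:.2f}")   # 2 items, total 24.99
```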

References

Medvidovic, N. (2002) ‘On the role of middleware in architecture-based software development’, SEKE, 27, pp.299-306, ACM [Online]. Available from: http://doi.acm.org/10.1145/568760.568814 (Accessed: 12 September 2010).

Pokharel, M. and Park, J. (2009) ‘Cloud computing: future solution for e-governance’, ACM International Conference Proceeding Series, 322, pp.409-410, ACM [Online]. Available from: http://doi.acm.org/10.1145/1693042.1693134 (Accessed: 12 September 2010).
