Categories: Computing, Development, Research

OOP vs Structured Programming – Why is OOP better?

Structured programming, described in my readings as the forerunner to object oriented programming (Wikipedia, 2010), had many of the best intentions of OO without the best execution. Because structured programming is hierarchical, every entity tends to rely entirely on other entities to function; the existence of subsystems was a good idea, but when compared with the clean separation of those subsystems in OO I can see why OO is the better choice.

In terms of our natural way of thinking, OO programming represents our thought processes quite well. A human being is an entity in its own right, with data of its own (name, age, etc.) and 'functions' of its own (the ability to walk, talk, breathe, etc.). If we think of an employee, we know it is a human being with some additional 'extensions' (an extension of the class/entity human being). Even thinking of the world as a program with different entities assigned to different roles is quite relatable to common thought. I believe this, aside from its efficiency, is probably why OO is so well accepted.
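
To illustrate that thought, below is a minimal PHP sketch of the human being/employee relationship; the class names, properties and methods are purely my own example rather than something taken from the readings.

class HumanBeing {
    public $name;
    public $age;

    public function __construct($name, $age) {
        $this->name = $name;
        $this->age  = $age;
    }

    public function walk() {
        return $this->name . ' is walking';
    }
}

// An employee is simply a human being with a few extra 'extensions'.
class Employee extends HumanBeing {
    public $employeeNumber;

    public function __construct($name, $age, $employeeNumber) {
        parent::__construct($name, $age);
        $this->employeeNumber = $employeeNumber;
    }

    public function clockIn() {
        return $this->name . ' has clocked in';
    }
}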

Structured programming becomes more confusing as the size of the application grows: instead of the subsystems being broken into stand-alone entities, they are all intertwined with one another. Following the flow of something like this would be difficult for most people.

A good real world example I deal with quite often in this regard is maintaining PHP systems written before PHP was object oriented (or, depending on the previous programmer, written today with a preference for procedural over OO style). These systems consist of a handful of files, each thousands of lines long, with functions and procedural code mixed together. Following the logic in these systems often takes days, whereas with object oriented code it is easy to find where the logic and functionality for an entity or process lives.

References

Wikipedia (2010) Structured Programming [Online]. Available from: http://en.wikipedia.org/wiki/Structured_programming (Accessed: 10 October 2010).

Categories: Computing, Development, Research

PHP vs JavaScript – What are the differences?

The two programming languages I would like to compare are firstly, PHP and secondly, JavaScript.

PHP (PHP Hypertext Preprocessor) is a young language when compared to the likes of Perl, C and other widely popular languages. It was started in 1995 by Rasmus Lerdorf, initially as a simple set of Perl scripts he named 'Personal Home Page Tools'; he later expanded it, reimplemented it in C, and released it as an open source code base for all to use (The PHP Group, 2010). It has since evolved from a procedural language to an OO language with its PHP 5 release (July 2004).

It is strictly a server-side language, which means parsing, translation and execution are performed on the host computer it runs on. The syntax is similar to C/Java, but it does not have some of the rules of those languages (such as declaring a variable as a certain type before using it, or the need to declare variables at all before using them). PHP's purpose is predominantly web applications, but it can be used for non-web scripting (command line tools etc.), although, from my experience, it is not well suited to background processing as it tends to use far more resources than languages like Perl and Python.

JavaScript is a language that was developed by Netscape to run in Netscape browsers and, despite its name, is not a subset of the Java language, nor was it developed by Sun (Crockford, 2001). It is technically not a class-based OO language: although it has functions and objects, it does not group entities into classes. JavaScript is a client-side language, meaning it uses the client computer's resources and interpreter to execute its instructions. This makes it very light on the load of a server, but it also puts performance in the hands of the hardware the users are running the script on, which, because JavaScript is written for web pages, is probably the broadest spectrum of hardware one could program for.

Both languages, I feel, are often misunderstood because just about any technical person is able to write a simple script in PHP or JavaScript and call themselves a programmer. Often, due to a lack of training and expertise, the languages are given a bad name.

Similarities:

1) Both languages are almost exclusively for the web and were created specifically for web use in the mid-90s.

2) Both syntax styles are based on C.

3) Until recently, when PHP went fully OO, neither language was officially an OO language.

4) Both languages are platform independent (although an interpreter or 'runtime environment' is required).

Differences:

1) PHP is server side while JavaScript is client side.

2) PHP makes use of classes while JavaScript only has constructors and functions.

3) JavaScript is used for a lot of visual effects and enhancements in web GUIs.

4) Users are able to disable JavaScript entirely while browsing the web because it is executed client-side.

A simple example outlining the server-side/client-side difference between PHP and JavaScript is form validation. If you implement validation on a web form using JavaScript (e.g. not allowing users to enter an invalid email address or leave their name blank), the user can bypass those checks by simply disabling JavaScript in their browser options. If you validate the form fields in PHP, the user must first submit the form to the server, and the server decides whether the data passed to it is valid or not; there is no way for the user to disable this from their computer.
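
As a rough sketch of the server-side half of that example (the field names and error messages are my own invention, and a real application would need more thorough checks), the PHP validation might look something like this:

// Server-side validation runs only after the form reaches the server,
// so the user cannot bypass it by disabling JavaScript.
$errors = array();

if (empty($_POST['name'])) {
    $errors[] = 'Name may not be left blank.';
}

if (empty($_POST['email']) || !filter_var($_POST['email'], FILTER_VALIDATE_EMAIL)) {
    $errors[] = 'Please enter a valid email address.';
}

if (count($errors) > 0) {
    // Redisplay the form along with the error messages.
} else {
    // The data passed the checks; safe to process the submission.
}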

An example of how similar the two syntaxes are is shown in the basic function below, which converts a list of names to uppercase.

PHP:

function convertUpper($names) {
    // Loop over the array and uppercase each entry in place.
    for ($i = 0; $i < count($names); $i++) {
        $names[$i] = strtoupper($names[$i]);
    }
    return $names;
}

$names = array('michael', 'john', 'samantha');
$names = convertUpper($names);

JavaScript:

function convertUpper(names) {
    // Loop over the array and uppercase each entry in place.
    for (var i = 0; i < names.length; i++) {
        names[i] = names[i].toUpperCase();
    }
    return names;
}

var names = ['michael', 'john', 'samantha'];
names = convertUpper(names);

References

Crockford, D. (2001) JavaScript: The World’s Most Misunderstood Programming Language [Online]. Available from: http://www.crockford.com/javascript/javascript.html (Accessed: 10 October 2010).

The PHP Group (2010) History of PHP [Online]. Available from: http://www.php.net/manual/en/history.php.php (Accessed: 10 October 2010).

Categories: Computing, Development, Research, Software

What makes one algorithm better than another?

There are two main aspects that make one algorithm better than another. The first I would like to discuss is how efficient an algorithm is in terms of the time it takes to complete its calculations and come to a result (an end).

Consider the examples outlined by Brookshear (2009) of a binary search and a sequential search for a specific item in an ordered list. The sequential search runs through the list one entry at a time until it finds a match, so on average it will look through half of the list; if the list is very large, this can add up to a significant amount of time per search. A binary search looks at the middle entry, determines whether the target is in the first or second half of the list, then halves the relevant half again, repeating the process until it finds the item it is looking for. One can plainly see that the binary search will access far fewer records on average than the sequential search.
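
A rough PHP sketch of the two approaches (the function names are my own); the binary search relies on the list already being in order:

// Sequential search: walk the list one entry at a time.
function sequentialSearch($list, $target) {
    for ($i = 0; $i < count($list); $i++) {
        if ($list[$i] == $target) {
            return $i;
        }
    }
    return -1; // not found
}

// Binary search: repeatedly halve the portion of the sorted list
// that could still contain the target.
function binarySearch($list, $target) {
    $low  = 0;
    $high = count($list) - 1;
    while ($low <= $high) {
        $mid = (int) (($low + $high) / 2);
        if ($list[$mid] == $target) {
            return $mid;
        } elseif ($list[$mid] < $target) {
            $low = $mid + 1;  // target can only be in the upper half
        } else {
            $high = $mid - 1; // target can only be in the lower half
        }
    }
    return -1; // not found
}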

Another aspect of measurement is the space (memory) the algorithm consumes while performing its calculations/operations. A simple way of explaining this is with sorting a list alphabetically, comparing a bubble sort with building a new list in the correct order. A bubble sort works through the list in small pieces, shuffling entries around within the original list until the whole list is sorted, so it only occupies the memory of that original list. Sorting into a new list keeps the memory occupied by the original list and creates an entire second list containing the same entries in the correct order; the latter occupies roughly double the memory of the former.
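
A minimal PHP illustration of the in-place idea (my own sketch; note that PHP copies an array passed into a function, so this only demonstrates the principle of sorting within the existing list rather than building a second one):

// Bubble sort: repeatedly swap neighbouring entries that are out of
// order, shuffling values around inside the one list until it is sorted.
function bubbleSort($list) {
    $n = count($list);
    for ($i = 0; $i < $n - 1; $i++) {
        for ($j = 0; $j < $n - 1 - $i; $j++) {
            if (strcmp($list[$j], $list[$j + 1]) > 0) {
                $temp          = $list[$j];
                $list[$j]      = $list[$j + 1];
                $list[$j + 1]  = $temp;
            }
        }
    }
    return $list;
}

$names = bubbleSort(array('samantha', 'john', 'michael'));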

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

 

Categories: Computing, Development, Research, Software

What are good problem solving techniques for developing algorithms?

From my personal experience, I often use past experiences to aid me in the problem solving process; I find that experience is one of the most valuable tools in problem solving. At the same time, one should not be confined to past experience, as new methods of problem solving are always being discovered; discovering and developing new ways of doing things is, in fact, one of the main points of the IT industry.

Another method of problem solving I use is researching similar problems on the Internet. The Internet is a fantastic resource for finding people with similar problems, or who have already solved algorithms similar to the ones I am trying to solve. If a solution to my problem proves difficult to find then, depending on the size of the algorithm I am trying to develop, it is still possible to research or have experience with subsets of the algorithm, which is a good way to get your ‘foot in the door’, as described by Brookshear (2009, p.218).

Perhaps too obvious to mention, but I feel it is worth mentioning anyway: all of the above amounts to the method of trial and error. It is common to try a possible solution that does not work and then move through perhaps another few solutions before finding the most suitable, or correct, algorithm.

As mentioned by Brookshear (2009, p.216), the mathematician G. Polya developed an outline of the problem solving process which consists of:

  1. Understand the problem.
  2. Devise a plan for solving the problem.
  3. Carry out the plan.
  4. Evaluate the solution for accuracy and for its potential as a tool for solving other problems.

It is important to remember, as also described by Brookshear (2009, p.216), that “we should emphasize that these phases are not steps to be followed when trying to solve a problem but rather phases that will be completed sometime during the process”.

Two other methods mentioned by Brookshear (2009, p.220) are the ‘top-down methodology’, or ‘stepwise refinement’, which is the process of “first breaking the original problem at hand in terms of several subproblems”, and the opposite ‘bottom-up methodology’, in which we start with the specifics of the problem and work up to the general.
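
As a loose illustration of stepwise refinement (the report example and all of the function names below are purely my own invention), the original problem is stated first in terms of its subproblems, and each subproblem is then refined on its own:

// The original problem, expressed in terms of three subproblems.
function produceMonthlyReport($month) {
    $sales = fetchSalesFigures($month);
    $total = summariseSales($sales);
    return formatReport($month, $total);
}

// Subproblem 1: in a real system this would query a database.
function fetchSalesFigures($month) {
    return array(100, 250, 75);
}

// Subproblem 2: reduce the raw figures to a single total.
function summariseSales($sales) {
    return array_sum($sales);
}

// Subproblem 3: present the result.
function formatReport($month, $total) {
    return "Total sales for $month: $total";
}

echo produceMonthlyReport('October');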

Another method I sometimes use, as studied by Ericsson and Simon (1980), is to verbalise the problem at hand: “Within the theoretical framework of human information processing, we discuss different types of processes underlying verbalization and present a model of how subjects, in response to an instruction to think aloud, verbalize information that they are attending to in short-term memory (STM). Verbalizing information is shown to affect cognitive processes only if the instructions require verbalization of information that would not otherwise be attended to”. While this study refers to a somewhat different question-and-answer scenario than the one we are discussing, I do believe it is relevant and effective to sometimes speak aloud when trying to solve a problem.

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Ericsson, A. K. & Simon, A. H. (1980) ‘Verbal Reports as Data’, Psychological Review, 87 (3), pp.215-251, PsycNET [Online]. Available from: http://psycnet.apa.org.ezproxy.liv.ac.uk/doi/10.1037/0033-295X.87.3.215 (Accessed: 3 October 2010).

Categories: Computing, Operating Systems, Research

The conflicting, dual definitions on the purpose of an Operating System

I was posed this question in my studies. An operating system is said to have two conflicting definitions of purpose:

  1. Present a virtual machine with a user-friendly GUI that isolates the user from the low-level hardware.
  2. Manage, as efficiently as possible, the limited resources of the hardware system.

Firstly, I tend to agree that the two conflict in nature. If we think of the operating system as a whole and then think of the goal of efficiently managing resources, it does not take long to realise that the nature of an operating system is not entirely geared towards efficiency. On the surface, the limited resources of the computer are often spent on things like graphical effects and enhancements such as transparency and animation, and the resources required to perform these seemingly meaningless tasks are often very taxing on the system hardware.

Looking at the question again, specifically the phrase “manage in the most efficient way the (always) limited resource of the computer system” (University of Liverpool, 2010), I would say the operating system does manage the resources of the computer quite efficiently. Regardless of the task at hand, the computer manages to operate quite smoothly while multi-tasking; just by looking at the task manager in Windows 7 you can see how many tasks are running at any one time, and the computer still operates responsively and seemingly effortlessly. This does, of course, depend on the specification of hardware your computer is running (RAM, CPU speed etc.), but if you follow the minimum requirements, performance is generally as expected.

My conclusion is that the two do work together quite effectively: in this specific case, Windows 7 as an operating system is very user friendly while still managing the computer's resources efficiently and quickly (unlike its predecessor, Vista, which did not manage resources as well). The concept of a process is absolutely vital to the success of both managing the hardware efficiently and providing a user friendly environment. A process is defined as a dynamic activity whose properties change as time progresses (Brookshear, 2009, p.134); coupled with multiprogramming, it is the way in which different activities and resources are managed and organised. Without this there would be chaos, and I believe the computer would be sent back to the days of batch processing single tasks.

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories: Computing, Development, Operating Systems

My current “IT” setup – is it performing adequately?

As a desktop operating system I am using Windows 7 Professional (64-bit). The networks I use at both my home office and work office are a combination of Ethernet/bus and Wifi. It is worth mentioning that all the ‘test’ and ‘live’ environments I work on (aside from some testing done on my PC/laptop itself) are Linux based systems, specifically Ubuntu Linux and CentOS Linux, and that my immediate colleagues are running Apple Macs.

I believe the systems are performing to the baseline of their expectations in some respects and doing exactly what they need to in others. Speed is the major drawback, while permissions across shared drives/folders are more of an inconvenience than a show-stopper.

From an operating system perspective, I feel that Windows is lacking in its command line shell when compared to its Unix based counterparts. Installing server software for the work I do (Apache, PHP, MySQL, SSL) is in some parts very easy, but more advanced setups, such as SSL with Apache, are not as easy on a Windows machine as they are on Linux. The pros of Windows, I believe, are the vast availability of software, not being tied to specific hardware (as with Apple), its straightforward ease of use (especially compared to Linux), and how easy it is to add and remove software without it having to come from repositories made specifically for the OS (as with Apple and Linux).

As far as the network goes, when streaming data, copying large amounts of data, or just working off a network drive (especially when using SVN across a repository with thousands of files), Wifi falls very short of what I would accept as efficient performance. We are currently running 100 Mbit wired LAN to help alleviate this, but then there is another bottleneck in our network server being quite old, although that does not really matter in this specific situation.

Because we work with multiple programmers, our network drives are located on the test server, which lets us edit directly on the test server and see our changes instantly, rather than each developer working locally and doing an SVN commit for every code change just to check it on the test server. The slowed network drive performance is a disadvantage, and the time it takes to commit all our changes to live from the network drive is far longer than it should be, but I would say the positive still outweighs the negative.

At the moment our internal office network is secured by a basic ADSL router, our Wifi by the router's built-in wireless security, and all of our work on the test and live servers is done over SSH. I believe the security is sufficient for the purposes of our organisation. While one could see it as a “standard” level of security, we do not deal with vital user information on our local network, and each computer is equipped with antivirus software and a firewall that update daily. I personally believe that overdoing security, while it does lower risk, can be overkill if you are not dealing with mission critical data (such as banking information).

If I were to modify just one element, I would designate more responsibility to our test server: at the moment it serves as the DHCP server, while DNS and routing/gateway duties sit on the router. Centralising DNS on the test server as well would save us having to alter our hosts files when working remotely to point at the test server's external IP instead of its internal LAN IP; if DNS were handled by the test server, the IP address designation could be altered at the source instead of on each user's computer.
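
To show the kind of manual change I mean, each developer currently has to flip an entry like this in their hosts file depending on where they are working (the hostname and IP addresses below are made up for the example):

# In the office: point at the test server's internal LAN IP
192.168.0.10    test.example.local

# Working remotely: comment out the line above and use the external IP instead
# 203.0.113.25    test.example.local

With DNS served centrally from the test server, that record would only ever need to be changed once, at the source.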

Categories: Computing, Research

Would it be a good idea to carry type designations in data cells of hardware?

I would like to answer this question without concentrating too much on polymorphism itself. Looking at this suggestion, I immediately think back to the RISC vs. CISC argument. By adding the type designation to the hardware device we are leaning towards the RISC attitude to computing instructions; I say this because giving the CPU the type designation directly from the hardware device would decrease the computation the CPU has to perform and allow it to get on with what it needs to do, because part of the work has already been done on the side of the hardware.

While one could argue this would be a good idea because it would lessen the number of errors that occur, I also see the potential for an increasing number of errors. The reason I say this comes from the chapters in Brookshear (2009), where (in very abstract summary) it is stated that certain electrical interference may alter the state of a bit, which can corrupt the data being stored or transmitted. The more we rely on outside components holding imperative data and transferring it to and from other components, the more we need to cater for the occurrence of errors. If the work remained in the CPU, it would only need to receive the “dumb” data from the devices and there would be less scope for errors coming from the hardware component.
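
Brookshear's point about single-bit corruption is usually dealt with using techniques such as parity bits, and the sketch below (my own PHP illustration, not something from the text) shows the kind of extra checking that every additional transfer of typed data would have to carry:

// Even parity: the parity bit is chosen so that the total number of 1 bits
// (data plus parity) is even. A single flipped bit then fails the check.
function parityBit($bits) {
    return array_sum($bits) % 2;
}

function parityOk($bits, $parity) {
    return (array_sum($bits) + $parity) % 2 == 0;
}

$data   = array(1, 0, 1, 1, 0, 1, 0, 0);
$parity = parityBit($data);

// Simulate electrical interference flipping one bit in transit.
$data[3] = $data[3] ? 0 : 1;

var_dump(parityOk($data, $parity)); // bool(false): the corruption is detected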

Another issue I see with this is that there would be an increased volume of data flowing along the controllers and buses of the computer. The rate of transfer along the bus is more than likely far slower than the time the CPU needs to compute the instruction along with its types.

My final issue with this notion is financial feasibility. As more programming would go into the hardware devices, more complexity would go into the software developed to control them, and the costs would no doubt increase as well.

With all that said, I think that in an ideal world this idea would be a good way to share the CPU's load.

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.