IT Positions in South Africa, do they require tertiary education?

In South Africa, there is a strong emphasis on experience when it comes to technical or IT positions. IT job ads here generally open with a requirement of an “IT related Degree or Diploma”, but in my experience, both as an applicant without a degree (only a 1-year Comp. Sci. diploma) and as someone hiring applicants, the qualification is often just a small benefit when it comes to the final choice. In the interviews I have taken part in, I have never had an in-depth discussion about qualifications, whether my own or those of an applicant I am interviewing.

In the IT industry in South Africa, it is well known that for technical positions (programmers, technicians, etc.), salary is based almost solely on the amount of experience an applicant has.

Another point I found as an applicant with a diploma is that the gap between a 1-year diploma and a 3-year degree is quite vast. It is not exactly two years, since degree programmes are generally spread out with long holidays while the 1-year diplomas run for 48 of the 52 weeks in the year, but a significantly greater amount of learning is still gained in the degree programme. Yet job ads were asking for “either a degree or diploma”.

I keep up to date with job postings for IT-related positions in my country, and I have found that recently a number of jobs are asking for a “3 year degree” rather than simply a degree or diploma.

For positions in management, which mainly means project manager positions, project management training has always been a requirement in the job ads I have read, and along with it generally comes a requirement of a minimum of 3 years of experience in project management.

This brings me to the catch-22 of the job market in South Africa that I found in my first 2 years of being in the industry. Practically nobody will even interview you without relevant experience, let alone hire you. Without experience you cannot get a job, and without a job you most certainly can’t get (valuable) experience.

I feel the main area where specific training is not a major requirement is IT support. Many positions are available in support, and most of them involve being trained on the job in the specific scenarios of the organisation. Even with regard to hardware support, I find the most talented hardware support people are self-taught. Courses such as the A+ and other technical certifications, which I have done personally, are almost trivial to someone who has “played around” with computers for a few years. In my opinion, this area is difficult to teach because the reasons behind hardware/software failures are not standard; they require trial and error even from an experienced hardware technician.

For technical positions, my colleagues in the industry and I have always been given mini-projects to complete and/or general written tests. Personally, I give applicants a mini-project, as I do not believe written tests are a good measure when it comes to programming specifically. Knowing functions off by heart doesn’t mean much, since we generally all have access to the internet for that; what matters is the ability to solve a problem and produce a project with good quality code.

To conclude, the situation in South Africa has become more focused on pre-trained individuals over the years, but this is generally just an initial prerequisite to avoid interviewing poorly suited applicants. The strongest focus still lies on experience in the specific field being applied for.

Working in a large organisation vs. a small company – advantages and disadvantages for IT admins

Coming from a mostly small-company background, I can speak more readily to the small-company advantages and disadvantages. I do, however, have a number of close friends holding IT positions in larger companies, and we have had numerous conversations on this subject.

Advantages for IT administrators in small companies:

  • Most of the time, any diversion from the IT administrator’s standard tasks will be to other IT tasks; here I am referring to a combined position such as “network administrator / database administrator / user consultant” and others. Doing these different IT tasks enhances their knowledge of other areas of the business, which broadens their skill set and can even improve their performance in their assigned IT administrator role (through a better understanding of the other elements of the business related to it).
  • There is less chance that the employee will become bored of doing the same thing daily. IT administration is generally quite a static job (physically), and tasks like user consulting and network administration allow for more physical interaction, which appeals to some.
  • Working in a small business and performing many tasks helps develop the entrepreneurial abilities of the IT administrator. If the IT administrator intends to run his/her own business one day, the experience here is valuable. Vitez (n.d.) states: “Business owners can attempt to pass basic entrepreneurship principals to their workers by requiring them to complete functions outside their normal work capacity. This concept is often known as cross training”.

Disadvantages for IT administrators in small companies:

  • While the IT administrator’s skill set may be growing, the figure of speech “jack of all trades, master of none” springs to mind. If they are constantly working across a multitude of different IT “specialties”, they will not have as much time to master any one of them. Compared with an employee who spends all of their time on any single one of those tasks, the small-business IT administrator will more than likely be the less competent of the two at that singular skill.
  • Contrary to the advantage of not becoming bored, there is also the possibility that the small-business IT administrator will become bored with their job. If their passion lies with being an IT administrator and they dislike dealing with users, SQL or any of the other tasks, they may become bored or, more likely, annoyed with their job. Perhaps the tasks they are required to perform even feel demeaning and “beneath them”.
  • The extra roles may end up being very demanding; the IT administrator could be overworked and burned out from the stress of performing too many different tasks. Small businesses usually do not have the funding to employ someone for each function, so the pressure may not be relieved due to financial constraints. Perhaps the business owner could afford a salary increase, but not a new employee.
  • Small companies are often new and do not offer great job stability.

Advantages for IT administrators in large companies:

  • Larger companies have employees for specific tasks. The IT administrator will not be required, or will seldom be required, to perform tasks outside of their specific role. Almost all of the IT administrator’s time is devoted to performing and mastering the skills for their role, allowing them to become very specialised in those specific tasks.
  • Larger companies often have a budget for furthering employee education; Buchanan and Huczynski (2010, p.156) give the example of Barclays Bank setting up its own corporate university. This allows the IT administrator to acquire more qualifications, which bring more value to his/her expertise.
  • Larger companies often have larger budgets. Along with the increased specialisation described in my two points above, the IT administrator potentially has better earning prospects within the organisation. There is also the potential for a subordinate to be employed to carry out the “menial” tasks required of the position.
  • The stability of working for a large company provides a feeling of job security.

Disadvantages for IT administrators in large companies:

  • The requirement for skills outside of the specific role is taken over by other employees. The IT administrator may stagnate and begin to feel redundant, and the repetition of the same function may become boring. Vitez (n.d.) mentions the example of the Ford motor company, which brought about the assembly-line / mass-production method of business. While the IT administrator is certainly not an assembly-line worker, the same feeling of “disappointment or unrest” may occur from doing the same task over and over again.
  • The IT administrator may not feel important or involved in the company. Lack of involvement in the everyday running of the business may lead to a feeling of detachment from the core values and identity of the business.
  • Larger organisations are often stuck in their ways, using legacy systems or processes decided by a predecessor, and these may be systems/processes the IT administrator is unhappy with or does not like. Changing systems and processes in a large company is an expensive task and is often simply “not an option”; creative thinking is therefore limited to what fits within the current bounds of the organisation.

The choice between a small and a large employer generally boils down to personal preference. From my experiences and discussions, a large company offers a more recognisable feeling of importance (e.g. if working for a well-known brand), while small companies tend to be more casual and easy-going. As Buchanan and Huczynski (2010) point out, different types of people prefer different types of working environments.

References

Buchanan, D. & Huczynski, A. (2010) Organizational Behaviour. 7th Edition. Upper Saddle River: Prentice Hall.

Vitez, O (n.d.) Specialization of Labor [Online] chron Small Business. Available from: http://smallbusiness.chron.com/specialization-labor-3890.html (Accessed: 13 February 2011).

How can IT enhance a Manager’s Functions, Roles and Skills?

Managers have many functions, roles and skills; below I illustrate one of each and how IT is able to improve performance in each example.

  • FUNCTION – Manage Time and Resources Efficiently

As described by Management-Hub.com (n.d.), and as we all know well, time is “precious and vital”. A manager needs to manage his/her time well between the team and superiors, as well as the time spent on organisational goals that require his/her personal capacity and skills.

IT has introduced calendar software, such as Google Calendar (www.google.com/calendar), that can send SMS (text), e-mail and pop-up alerts (if the calendar happens to be open) to users on their cellular phone, laptop, computer or land line (depending on the carrier’s ability to receive text messages). Assuming the manager has at least one network- or communication-enabled device at hand at all times (which I think is a fairly safe assumption), this allows him/her to be constantly reminded of appointments.

  • ROLE – Intermediary between employee groups and top management

About-Personal-Growth.com (n.d.) describes a manager as the “middle person in between top management level and the team that reports to him”. As most organisations are hierarchical, and for the sake of efficiency, managers are usually the liaison between upper management and regular employees. Due to the sheer size of certain organisations, it would be difficult for a manager to keep track of the exact discrepancies, complaints, issues, performance reviews and other requests from either party. IT has brought the ability to track accountability and the exact details of communications with tools as basic as e-mail. A manager can refer directly to messages from management/employees when addressing issues with either party, without missing any details.

  • SKILL – Good Planning

This is, in my opinion, one of the most important skills a manager requires – some may think of it mainly in relation to project managers, but I think it is equally important in all areas of management. Without organised planning, the manager is unable to assess progress towards organisational goals. As About-Personal-Growth.com (n.d.) points out, “having goals and planning out the directions allow for effective time management and saves cost and resources”.

Planning also ties in with adaptability to change, both positive and negative. I think this connects with Buchanan and Huczynski’s (2010, p.52) quotation of Ansoff, in which Ansoff states that managers who are unable to develop an entrepreneurial way of thinking “must be replaced”.

References

About-Personal-Growth.com (n.d.) Managers – Roles and Responsibilities [Online]. Available from: http://www.about-personal-growth.com/managers.html (Accessed: 6 February 2011).

Buchanan, D. & Huczynski, A. (2010) Organizational Behaviour. 7th Edition. Upper Saddle River: Prentice Hall.

Management-Hub.com (n.d.) Roles & Responsibilities of a Manager in an Organization [Online]. Available from: http://www.management-hub.com/hr-manager-roles.html (Accessed: 6 February 2011).


Ethical Hacking?

I believe that there are situations in which hacking a system would be ethical. Adams and McCrindle (2008) describe grey-hat attacks as having the aim of “identifying potential vulnerabilities and inform the organization of their weaknesses”; they also state that the reason for seeing grey-hat attacks as unethical is the unintended consequences that may follow them. I do not think grey-hat techniques are ethical, because of the risks they involve: it is unethical to attack a system that you do not know and are unable to rectify (due to your lack of knowledge of its internals).

In scenarios such as the hacker response to WikiLeaks, users hacked organisations and sites that failed to support WikiLeaks. An article by Neal (2010) described how a 16-year-old boy from the Netherlands was arrested for his part in the “Operation Payback” DDoS attacks on MasterCard and Visa. I disagree with these tactics as well, as no good comes of them.

A scenario in which I would be pro-hacking is where the system in question is either involved in illegal activities or in inciting illegal activities. Of course, hacking such a system should come only after the correct measures of due diligence have been adhered to, such as reporting the system to its host or to the authorities. An article by Brandt (2004) described how the NSA (the American National Security Agency) appeared at the “Defcon 12 hackers’ conference” to seek out highly skilled “hackers” to work for the organisation. Conspiracy theories aside, this would be another ethical realm of hacking: investigating illegal activities to help fight crime, anything from tracking down distributors of child pornography over the internet to those who publish credit card details to the public.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s Box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons, Ltd.

Brandt, A (2004) Feds Seek a Few Good Hackers [Online] PC World. Available from: http://www.pcworld.com/article/117226/feds_seek_a_few_good_hackers.html (Accessed: 12 December 2010).

Neal, D (2010) Dutch teen arrested over WikiLeaks revenge hacks [Online] V3. Available from: http://www.v3.co.uk/v3/news/2273867/wikileaks-paypal-hack (Accessed: 12 December 2010).


Responsibilities for Computing Professionals in Developing Material for the Internet

Responsibilities of the Computing Professional

The responsibilities of the computing professional, as covered in my previous posts, are both ethical and legal. It is our duty to inform and guide from our experience and expertise. The cliché of using our ‘powers’ for ‘good’ and not ‘evil’ can be broadly applied; as with almost any other profession.

Responsibilities Relating to Development of Internet Material

The word “development” here has a double connotation. First, there is the actual programming of material: any system that generates content, is made available on the internet, or allows the generation of content on the internet. As discussed by Adams and McCrindle (2008, p.352), a number of malicious examples of software, created by computing professionals, are readily available on the Internet.

I’d like to briefly outline the relevant examples.

  1. Trojan Horses: These are quite literally as their name suggests: programs that pose as something innocent (most of the time) but hold within them harmful code that may damage your data or perform some other illicit task.
  2. Virus: A term many use to encompass all forms of malicious software, but in fact a specific type of it. A virus can be carried by a Trojan Horse and usually replicates itself to other files and programs on the computer, typically carrying out a task that causes harm to data and possibly even hardware.
  3. Worm: These infections ‘worm’ their way through a network without requiring the means of a Trojan Horse or Virus to spread. If they are to spread outside of the current network they may also be carried via Trojan Horses.
  4. Zombie: Programs designed to open ‘back doors’ into a system so that it can be accessed remotely to perform a number of tasks (often used for Distributed Denial of Service attacks).

Secondly, and perhaps less directly, our responsibility as computing professionals extends to the “written” (typed) information we spread across the internet. Publicly releasing knowledge that could jeopardise systems is an ethical issue we need to take seriously. This may sometimes be a difficult decision, but it is never one to be taken lightly.

Responsibilities Relating to the Usage of the Internet

Due to the global nature of the internet, with its reach extending into many secure facilities, government agencies, banks and other authorities, we must make securing the implementations of these systems a top priority. Adams and McCrindle (2008, p.368) describe black-, white- and grey-hat crackers and the controversial issue of whether grey-hat techniques are in the best interests of the organisation or not. Personally, I can see it as both wrong and right; it really boils down to the situation at hand. If the grey-hat techniques simply identify back doors or other security threats without interfering with or negatively affecting the current system, and provided the grey-hat crackers do not plaster the vulnerabilities all over the internet, they may be acceptable. A paper by the Electronic Frontier Foundation (n.d.) mentions that grey-hat techniques may violate a number of laws, such as the Computer Fraud and Abuse Act, the anti-circumvention provisions of the DMCA, copyright law and other state laws, so it is probably best to either keep your research private or request permission beforehand when employing such techniques.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s Box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons, Ltd.

Electronic Frontier Foundation (n.d.) A “Grey Hat” Guide [Online]. Available from: http://www.eff.org/issues/coders/grey-hat-guide (Accessed: 5 December 2010).

Ethical Responsibilities of the Computing Professional

What responsibilities do we as computing professionals have in our industry? Do we have a responsibility solely to follow the goals and policies of our company?

Computing professionals, in my opinion, have ethical responsibilities, but I do believe that in some circumstances these responsibilities are unattainable due to external circumstances.

In general, I believe a computing professional should be able to grasp the goal of the system or systems they are working on – not only to make ethical judgements, but to perform their role in the development of the system from an informed point of view. If the professional is aware of the overall goal the system is being developed for, and of its implications, he or she should be able to judge whether they approve or disapprove of the ethics behind it.

The problem with ethics is that different people, cultures, etc. hold different beliefs about right and wrong. A code of ethics for the organisation should therefore be established to avoid blurred interpretation, and also so that prospective employees can review it before deciding to apply for a job at the organisation (Payne, 2003).

To answer directly what the computing professional’s responsibility to society at large is, I would say it is to keep the views of the user and the law in mind while adhering to their responsibilities within their organisation: to look at a system from the user’s perspective and consider the effects it may have, both positive and negative, on the general populace, and also not to knowingly jeopardise a system by infringing on copyrights or patents (Adams & McCrindle, 2008, p.10).

That being said, I do not think the blame should lie solely with the professional. With today’s cost of living, you cannot simply leave your current employer (and salary) because you believe that what they are doing is, by your own definition, wrong.

I feel that the goals of such projects, and the determination of right and wrong in the broader scheme, lie in the area of business ethics and should be aimed at the organisation and the decision makers of the project more than at the professionals carrying out the tasks.

To summarise, I would say the responsibility of the professional is to carry out their role in the project with their best ability and concentration – to ‘care’ about what they are doing with the bigger picture in mind, rather than just going through the motions. This will hopefully ensure a quality product. The business ethics of right and wrong are more the responsibility of the organisation.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons, Ltd.

Payne, D (2003) ‘Engineering ethics and business ethics: commonalities for a comprehensive code of ethics’, IEEE Region 5, 2003 Annual Technical Conference, pp.81-87, IEEE Xplore [Online]. DOI: 10.1109/REG5.2003.1199714 (Accessed: 7 November 2010).

What questions to ask in the Turing Test?

This issue is based on the “Turing Test”, a test first proposed by Alan Turing in which a human and a computer are asked a series of questions; if the interrogator is unable to tell which is the computer, the computer has passed the “Turing Test” – it is deemed able to “think”.

The 5 questions that I would ask the “computer” in the Turing Test would be the following:

  1. What was the most influential event of your childhood and how do you feel this event affects you today?
  2. Who are you as a person?
  3. Describe your feelings if you were to be given the opportunity to fly to the moon?
  4. If you were to draw yourself as an abstract painting, what colours and shapes would you use and why?
  5. What emotions have been involved in answering the questions that I have given you up to this point and what do you feel is the strongest question out of the 4?

I have chosen these questions as they are all fairly psychological and open to interpretation. While each question may be answerable individually, as a group they describe a person’s personality and character in an abstract manner.

The answers to questions 1 and 4 should give you the same idea as the answer to question 2. Question 5 culminates all of the questions and should be difficult to simulate a valid response to, as it is completely dependent on the answers given to the previous four questions – each of which can change the final response in its own way. The second part of question 5 must also be interpreted in light of the human emotions involved in the answers to the previous four questions.

There is no definitive correct answer to any of the questions, but human intuition gives the interrogator the upper hand in deciding whether the answers tie up to being human or machine.

PS: Whether this test can really test for machine thought (AI) is highly debated, and some have shown that a series of pre-programmed answers selected by keyword matching may pass the test.
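The keyword trick mentioned in the PS can be sketched in a few lines of JavaScript (ELIZA-style; the keywords and canned replies below are purely illustrative, not from any real system):

```javascript
// A minimal keyword-matching responder: it has no understanding at all,
// it simply scans the question for keywords and returns a canned,
// open-ended reply, falling back to a vague deflection.
function respond(question) {
  var rules = [
    { keyword: /childhood/i, reply: "My childhood shaped me in ways I still think about." },
    { keyword: /feel|emotion/i, reply: "That is hard to put into words, but I would say curiosity." },
    { keyword: /who are you/i, reply: "I am still figuring that out, like anyone else." }
  ];
  for (var i = 0; i < rules.length; i++) {
    if (rules[i].keyword.test(question)) {
      return rules[i].reply;
    }
  }
  // Nothing matched: deflect the question back to the interrogator.
  return "Why do you ask?";
}
```

Such a responder might survive a few exchanges, which is exactly why open-ended, interdependent questions like question 5 above are harder for it to fake.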

Ethical issues with advances in medical technology

I’d like to discuss the issue outlined as follows:

“Medical treatment has advanced to the point that numerous parts of the human body can now be replaced with artificial parts or parts from human donors. It is conceivable that this might someday include parts of the brain. What ethical problems would such capabilities raise? If a patient’s neurons were replaced one at a time with artificial neurons, would the patient remain the same person? Would the patient ever notice a difference? Would the patient remain human?” Brookshear (2009, p.553).

The ethical problems with replacing parts of the brain with “artificial” parts would be those of “playing God”. Beliefs about God and religion would play a large role in the ethics here: defeating death, when many hold the belief that we die at a certain time for a reason.

Aside from that, there is the issue that prolonging lives can contribute to overpopulation. The more living beings there are on the planet, the more resources are required to sustain them, eating into the earth’s already limited resources.

If a patient had artificial neurons replacing their existing ones: the theory behind neurons is that they ‘learn’ from experience, so if the artificial neurons are placed among the existing ones, they should learn from them, and the person should remain the same. This does, however, depend on how many neurons are replaced, and whatever ‘intelligence’ was contained in the neurons that were lost may well affect the person.
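As a loose illustration of this ‘learning from experience’ idea, the sketch below uses a perceptron – the classic mathematical caricature of a neuron, not anything biologically faithful – which adjusts its weights whenever its answer is wrong:

```javascript
// A crude sketch of one "artificial neuron" (a perceptron) learning
// from experience: each time its guess is wrong, it nudges its weights
// towards the correct answer.
function trainNeuron(examples, epochs) {
  var w = [0, 0], bias = 0, rate = 0.1;
  for (var e = 0; e < epochs; e++) {
    for (var i = 0; i < examples.length; i++) {
      var x = examples[i].input, target = examples[i].output;
      var guess = (w[0] * x[0] + w[1] * x[1] + bias) > 0 ? 1 : 0;
      var error = target - guess; // -1, 0 or +1
      w[0] += rate * error * x[0];
      w[1] += rate * error * x[1];
      bias += rate * error;
    }
  }
  // Return the trained neuron as a function.
  return function (x) {
    return (w[0] * x[0] + w[1] * x[1] + bias) > 0 ? 1 : 0;
  };
}

// Its "experience": examples of the logical AND function.
var neuron = trainNeuron([
  { input: [0, 0], output: 0 },
  { input: [0, 1], output: 0 },
  { input: [1, 0], output: 0 },
  { input: [1, 1], output: 1 }
], 20);
```

After training, the neuron reproduces the behaviour it was exposed to – which is the (very simplified) sense in which a replacement neuron could ‘pick up’ what its neighbours do.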

I have done some research on the question of ‘What makes us human?’ and skimmed a few websites on the discussion.

Interestingly enough, none of them really delved into the topic of us being living, breathing organisms (and thus human); rather, they converge on the idea that our intelligence, emotions and creativity are what make us human.

With the dawn of this new era of AI (even though the field dates back to the 1950s), I think we are going to have to revisit our definition of what makes us human to include the above.

Personally, I think that if we maintain our emotions, unique personality and creativity, we are still human, regardless of the fact that we may have some artificial organs. If we lose the ability to have emotions, feelings and ideas, then I feel we have lost what makes us human. What is the point of replacing an entire brain if the result is a totally different ‘person’ in the same skin? I do believe there are boundaries that should be kept. This subject is huge, and I don’t think it is as easy as one may imagine to answer whether it is right or wrong. Personally, I lean towards being in favour of it – but then, as mentioned above, how are we going to tackle overpopulation and the diminishing resources of this planet?

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th Edition. China: Pearson Education Asia Ltd.

OOP vs Structured Programming – Why is OOP better?

Structured programming, described in my readings as the forerunner to object-oriented programming (Wikipedia, 2010), had all the best intentions of OO without the best execution. Because structured programming is hierarchical, every entity tends to rely entirely on other entities to function; the existence of subsystems was good, but compared to the separation of those subsystems in OO, I can see why OO is the better choice.

In terms of our natural way of thinking, OO programming represents our thought processes quite well. A human being is an entity on its own, with data of its own (name, age, etc.) and ‘functions’ of its own (the ability to walk, talk, breathe, etc.). If we think of an employee, we know it is a human being with some additional ‘extensions’ (an extension of the class/entity human being). Even thinking of the world as a program with different entities in different roles is quite relatable. I believe this is probably why, aside from its efficiency, OO is so well accepted.
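The human/employee analogy can be sketched in JavaScript. Since the language has no class keyword (as discussed in a later post, it only has constructors and functions), the sketch uses constructor functions and prototypes; the names are purely illustrative:

```javascript
// A "human being" entity with its own data and its own 'functions'.
function HumanBeing(name, age) {
  this.name = name;
  this.age = age;
}
HumanBeing.prototype.talk = function () {
  return this.name + " says hello";
};

// An employee is a human being with some additional 'extensions'.
function Employee(name, age, jobTitle) {
  HumanBeing.call(this, name, age); // reuse the base entity's data
  this.jobTitle = jobTitle;
}
Employee.prototype = Object.create(HumanBeing.prototype); // inherit behaviour
Employee.prototype.constructor = Employee;
Employee.prototype.work = function () {
  return this.name + " works as a " + this.jobTitle;
};

var alice = new Employee("Alice", 30, "network administrator");
```

Here `alice` can both `talk()` (inherited from the human-being entity) and `work()` (the employee extension), mirroring the way we naturally think of an employee as a human being plus something extra.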

Structured programming becomes more confusing as the size of the application grows: instead of the subsystems being broken into stand-alone entities, they are all intertwined with one another, and following the flow of such a program is difficult for most people.

A good real-world example I deal with quite often is maintaining PHP systems written before PHP was object-oriented (or written procedurally by choice by the previous programmer). These systems consist of a handful of files containing thousands of lines of code, with functions and procedural code mixed together. Following the logic in these systems often takes days, whereas with object-oriented code it is easy to find where the logic and functionality for an entity or process lives.

References

Wikipedia (2010) Structured Programming [Online]. Available from: http://en.wikipedia.org/wiki/Structured_programming (Accessed: 10 October 2010).

PHP vs JavaScript – What are the differences?

The two programming languages I would like to compare are firstly, PHP and secondly, JavaScript.

PHP (PHP: Hypertext Preprocessor) is a young language compared to the likes of Perl, C and other widely popular languages. It was started in 1995 by Rasmus Lerdorf, initially as a simple set of Perl scripts he named ‘Personal Home Page Tools’; he later expanded it, reimplemented it in C, and released it as an open source code base for all to use (The PHP Group, 2010). It has since evolved from a procedural language to an OO language with its PHP 5 release (July 2004).

It is strictly a server-side language, meaning that parsing and execution are performed on the host computer it runs on. Its syntax is similar to C/Java, but it lacks certain rules of those languages (such as declaring a variable as a certain type before using it, or the need to declare variables at all before using them). PHP’s purpose is predominantly web applications, but it can be used for non-web scripting (command-line tools, etc.), although, from my experience, it is not well suited to background processing, as it tends to use far more resources than languages like Perl and Python.

JavaScript is a language developed by Netscape to run in Netscape browsers and, despite its name, is not a subset of the Java language, nor was it developed by Sun (Crockford, 2001). It is technically not an OO language: although it has functions and objects, it does not group entities into classes. JavaScript is a client-side language, meaning it uses the client computer’s resources and interpreter to execute its instructions. This makes it very light on the server’s load, but it also puts performance in the hands of whatever hardware the users run the script on – which, given that JavaScript is for web pages, is probably the broadest spectrum one could target when programming for consumers.

Both languages, I feel, are often misunderstood, because just about any technical person is able to write a simple script in PHP or JavaScript and call themselves a programmer. Due to this lack of training and expertise, the languages are often given a bad name.

Similarities:

1)    Both languages are almost exclusively for the web and were created specifically for web use in the mid 90s.

2)    Syntax styles are both based on C

3)    Until recently, with PHP becoming fully OO, neither language was officially an OO language.

4)    Both languages are platform independent (though an interpreter or ‘runtime environment’ is required).

Differences

1)    PHP is server side while JavaScript is client side.

2)    PHP makes use of classes while JavaScript only has constructors and functions.

3)    JavaScript is used for a lot of visual effects and enhancements for web GUIs.

4)    Users are able to disable JavaScript entirely while browsing the internet, due to its client-side execution.


A simple example outlining the server-side/client-side difference between PHP and JavaScript is form validation. If you implement validation on a web form using JavaScript (e.g. not allowing users to enter an invalid email address or leave their name blank), the user is able to bypass these checks by simply disabling JavaScript in their browser options. If you validate form fields in PHP, the user must first submit the form to the server, and the server decides whether the data passed to it is correct; there is no way for the user to disable this from their computer.
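A minimal sketch of the client-side half of that validation, in JavaScript (the field names and rules are illustrative, and a real PHP back end would have to repeat the same checks on submission):

```javascript
// Client-side form validation: a convenience only, since the user can
// disable JavaScript entirely and submit whatever they like.
// The server-side (PHP) code must therefore repeat these checks.
function validateForm(fields) {
  var errors = [];
  // Name may not be blank.
  if (!fields.name || fields.name.trim() === "") {
    errors.push("Name may not be blank.");
  }
  // A deliberately simple email pattern, for illustration only.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email || "")) {
    errors.push("Email address is invalid.");
  }
  return errors; // an empty array means the form may be submitted
}
```

In a real page this would run on the form’s submit event and block submission while `errors` is non-empty; the server would then run its own, authoritative version of the same rules.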

How similar the two syntaxes are can be seen in the basic function below, which converts a list of names to uppercase.

PHP:

function convertUpper($names) {
    for ($i = 0; $i < count($names); $i++) {
        $names[$i] = strtoupper($names[$i]);
    }
    return $names;
}

$names = array('michael', 'john', 'samantha');
$names = convertUpper($names);

JavaScript:

function convertUpper(names) {
    for (var i = 0; i < names.length; i++) {
        names[i] = names[i].toUpperCase();
    }
    return names;
}

var names = ['michael', 'john', 'samantha'];
names = convertUpper(names);

References

Crockford, D (2001) JavaScript: The World’s Most Misunderstood Programming Language [Online]. Available from http://www.crockford.com/javascript/javascript.html (Accessed: 10 October 2010).

The PHP Group (2010) History of PHP [Online]. Available from http://www.php.net/manual/en/history.php.php (Accessed: 10 October 2010).