Categories: Business, Computing, Research, Software

How can IT enhance a Manager's Functions, Roles and Skills?

Managers have many functions, roles and skills. Below I illustrate one of each, and how IT is able to improve performance in each of these examples.

  • FUNCTION – Manage Time and Resources Efficiently

As described by Management-Hub.com (n.d.), and as we all know well, time is "precious and vital". A manager needs to divide his/her time well between the team and superiors, as well as the time spent on organisational goals that require his/her personal capacity and skills.

IT has introduced calendar software, such as Google Calendar (www.google.com/calendar), that can send SMS (text), e-mail and pop-up alerts (if the calendar happens to be open) to users on their cellular phone, laptop, computer or land line (depending on the carrier's ability to read and/or receive text messages). Assuming the manager carries at least one network- or communication-enabled device at all times, which I think is a fairly safe assumption, he/she can be constantly reminded of upcoming appointments.
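To make this concrete, below is a minimal sketch, in Python using only the standard library, of how such a reminder can be represented: it builds an iCalendar (RFC 5545) event carrying a display alarm of the kind calendar applications use to trigger alerts. The meeting details, times and addresses are invented for illustration.

```python
# Minimal sketch: build an iCalendar (RFC 5545) event with a reminder
# alarm, importable into Google Calendar and similar tools. All the
# details (summary, times, UID domain) are made-up examples.
from datetime import datetime, timedelta

def make_event_ics(summary: str, start: datetime, minutes_before: int = 15) -> str:
    """Return a single-event .ics string with a display alarm."""
    fmt = "%Y%m%dT%H%M%S"
    end = start + timedelta(hours=1)  # assume a one-hour meeting
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//manager-reminders//EN",
        "BEGIN:VEVENT",
        f"UID:{start.strftime(fmt)}-reminder@example.com",
        f"DTSTAMP:{datetime.now().strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "BEGIN:VALARM",
        "ACTION:DISPLAY",
        f"DESCRIPTION:Reminder: {summary}",
        f"TRIGGER:-PT{minutes_before}M",  # alert minutes_before minutes early
        "END:VALARM",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

if __name__ == "__main__":
    print(make_event_ics("Weekly team meeting", datetime(2011, 2, 14, 9, 0)))
```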

  • ROLE – Intermediary between employee groups and top management

About-Personal-Growth.com (n.d.) describes a manager as the "middle person in between top management level and the team that reports to him". As most organisations are hierarchical, and for the sake of efficiency, managers are usually the liaison between upper management and regular employees. Given the sheer size of certain organisations, it would be difficult for a manager to keep track of the exact discrepancies, complaints, issues, performance reviews and other requests from either party. IT has brought the ability to track accountability and the exact details of communications with tools as basic as e-mail: a manager can refer directly to messages from management or employees when addressing issues with either party, without missing any details.

  • SKILL – Good Planning

This is, in my opinion, one of the most important skills a manager requires. Some may think it matters most for a project manager, but I believe it is equally important in all areas of management. Without organised planning the manager is unable to assess progress towards organisational goals. As About-Personal-Growth.com points out, "having goals and planning out the directions allow for effective time management and saves cost and resources".

Planning also ties in with adaptability to change, both positive and negative. I think this connects with Buchanan and Huczynski's (2010, p.52) quotation of Ansoff, in which Ansoff states that managers who are unable to develop an entrepreneurial way of thinking "must be replaced".

References

About-Personal-Growth.com (n.d.) Managers – Roles and Responsibilities [Online]. Available from: http://www.about-personal-growth.com/managers.html (Accessed: 6 February 2011).

Buchanan, D. & Huczynski, A. (2010) Organizational Behaviour. 7th ed. Upper Saddle River: Prentice Hall.

Management-Hub.com (n.d.) Roles & Responsibilities of a Manager in an Organization [Online]. Available from: http://www.management-hub.com/hr-manager-roles.html (Accessed: 6 February 2011).

 

Categories: Computing, Ethics, Philosophy, Research, Software

What questions to ask in the Turing Test?

This issue is based on the "Turing Test", first proposed by Alan Turing: a human and a computer are each asked a series of questions, and if the interrogator is unable to tell which respondent is the computer, the computer has passed the "Turing Test" and can be said to "think".

The 5 questions that I would ask the “computer” in the Turing Test would be the following:

  1. What was the most influential event of your childhood and how do you feel this event affects you today?
  2. Who are you as a person?
  3. Describe your feelings if you were given the opportunity to fly to the moon.
  4. If you were to draw yourself as an abstract painting, what colours and shapes would you use and why?
  5. What emotions have been involved in answering the questions I have given you up to this point, and which do you feel is the strongest question of the four?

I have chosen these questions because they are all fairly psychological and open to interpretation. While each question can be answered on its own, as a group they describe a person's personality and character in an abstract manner.

The answers to questions 1 and 4 should give you the same idea as the answer to question 2. Question 5 should draw all of them together, and a valid response to it should be difficult to simulate: it depends entirely on the answers given to the previous four questions, each of which can change the final response in its own way. The second part of question 5 can also be interpreted against the human emotions involved in answering the previous four.

There is no definitive correct answer to any of the questions, but human intuition gives the interrogator the upper hand in deciding whether the answers add up to a human or a machine.

PS: Whether this test can really test for machine thought (AI) is highly debated, and some have shown that a series of pre-programmed answers selected by keyword matching may pass the test.
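To illustrate that last point, here is a toy sketch, with invented keywords and canned replies in the spirit of early chatterbots such as ELIZA, showing how answers can be chosen purely by keyword matching with no understanding at all:

```python
# Toy keyword-matching responder: canned replies are picked by scanning
# the question for trigger words. It "answers" without understanding,
# which is exactly the objection raised in the PS above.
import random

CANNED = {
    "childhood": ["An event from my early years still shapes how I react today."],
    "moon":      ["I would be thrilled, and a little frightened of the distance."],
    "colour":    ["Deep blues and sweeping curves, because I see myself as calm."],
    "feel":      ["That is hard to put into words, but I would say excitement.",
                  "Mostly curiosity, with a little apprehension."],
}
FALLBACK = ["Could you rephrase that?", "Why do you ask?"]

def respond(question: str) -> str:
    q = question.lower()
    for keyword, replies in CANNED.items():
        if keyword in q:
            return random.choice(replies)  # first trigger word wins
    return random.choice(FALLBACK)

print(respond("What colours and shapes would you use and why?"))
```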

Categories: Computing, Development, Research, Software

What makes one algorithm better than another?

There are two main aspects that make one algorithm better than another. The first I would like to discuss is efficiency: the time the algorithm takes to complete its calculations and come to a result (an end).

Consider the examples outlined by Brookshear (2009) of a binary search and a sequential search for a specific item in an ordered list. The sequential search runs through the list entry by entry until it finds a match, so on average it will examine half of the list; if the list is very large, this can add up to a significant amount of time per search. A binary search instead looks at the middle entry, works out whether the target lies in the first or second half of the list, halves the relevant portion again, and repeats the process until it finds the item it is looking for. One can plainly see that, on average, the binary search will access far fewer records than the sequential search.
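A minimal sketch of the two strategies, assuming an ordered list of names, might look like this:

```python
# Sketch of the two search strategies over an ordered list. The
# comments note roughly how many entries each one must examine.
def sequential_search(items, target):
    """Check entries one by one; about half the list on average."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(items, target):
    """Repeatedly halve the search range; at most log2(n) probes."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1    # target can only be in the upper half
        else:
            high = mid - 1   # target can only be in the lower half
    return -1

names = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]
print(sequential_search(names, "Erin"))  # 5 entries examined
print(binary_search(names, "Erin"))      # 2 entries examined
```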

Another aspect of measurement is the space the algorithm consumes while performing its calculations/operations. A simple way to explain this is sorting a list alphabetically, comparing a bubble sort against creating a new list in the correct order. The bubble sort looks at the list in small pieces and shuffles entries around until the entire list is sorted, so it occupies only the memory/space of the original list. Sorting into a new list keeps the memory occupied by the original list while an entire second list is created from the same entries in the correct order, so the latter occupies twice the memory of the former.
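A rough sketch of that contrast, assuming a small list of words, could be:

```python
# Space contrast: bubble sort rearranges entries inside the original
# list, while the second approach builds a whole new sorted list.
def bubble_sort_in_place(items):
    """Swap adjacent out-of-order entries; no second list is created."""
    n = len(items)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]

def sort_into_new_list(items):
    """Return a sorted copy; memory for two full lists is needed."""
    return sorted(items)  # the original list is left untouched

words = ["pear", "apple", "mango", "kiwi"]
new_list = sort_into_new_list(words)  # two lists now exist in memory
bubble_sort_in_place(words)           # still only the one original list
print(words, new_list)
```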

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

 

Categories: Computing, Development, Research, Software

What are good problem solving techniques for developing algorithms?

From my personal experience, I often draw on past experience to aid the problem solving process; I find that experience is one of the most valuable tools there is for it. At the same time, one should not be confined to past experience, as new methods of problem solving are always being discovered; discovering and developing new ways of doing things is, in fact, one of the main points of the IT industry.

Another method of problem solving I use is researching similar problems on the Internet. The Internet is a fantastic resource for finding users with similar problems, or users who have already solved algorithms similar to the one I am trying to solve. If the problem I am trying to solve proves difficult to find then, depending on the size of the algorithm I am developing, it is still possible to research, or have experience with, sub-sets of the algorithm, which is a good way to get your 'foot in the door' as described by Brookshear (2009, p.218).

Perhaps too obvious to mention, but worth mentioning all the same: all of the above amounts to trial and error. It is common to try a possible solution that does not work, and then move through perhaps another few solutions before finding the most suitable, or correct, algorithm.

As mentioned by Brookshear (2009, p.216), the mathematician G. Polya developed an outline of the problem solving process, which consists of:

  1. Understand the problem.
  2. Devise a plan for solving the problem.
  3. Carry out the plan.
  4. Evaluate the solution for accuracy and for its potential as a tool for solving other problems.

It is important to remember, as Brookshear (2009, p.216) also puts it, that "we should emphasize that these phases are not steps to be followed when trying to solve a problem but rather phases that will be completed sometime during the process".

Two other methods mentioned by Brookshear (2009, p.220) are the 'top-down methodology', or 'stepwise refinement', which is the process of first breaking the original problem at hand into several subproblems, and its opposite, the 'bottom-up methodology', in which we start from the specifics and work towards the general.
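As a hedged illustration of the top-down idea, the sketch below refines a made-up problem, averaging the valid scores in a text file, into subproblems that each become a small function of their own; the problem, file format and function names are invented for illustration:

```python
# Top-down / stepwise refinement sketch: the original problem is
# stated first in terms of its subproblems, which are then refined
# into small functions one level down.
def average_valid_scores(path):
    """The original problem, expressed via its subproblems."""
    lines = read_lines(path)
    scores = parse_scores(lines)
    return mean(scores)

def read_lines(path):
    """Subproblem 1: pull the raw lines out of the file."""
    with open(path) as f:
        return f.readlines()

def parse_scores(lines):
    """Subproblem 2: keep only the lines that are valid numbers."""
    scores = []
    for line in lines:
        try:
            scores.append(float(line.strip()))
        except ValueError:
            pass  # skip malformed entries
    return scores

def mean(scores):
    """Subproblem 3: reduce the scores to a single figure."""
    return sum(scores) / len(scores) if scores else 0.0
```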

Another method I sometimes use, following Ericsson and Simon (1980), is to verbalise the problem at hand: "Within the theoretical framework of human information processing, we discuss different types of processes underlying verbalization and present a model of how subjects, in response to an instruction to think aloud, verbalize information that they are attending to in short-term memory (STM). Verbalizing information is shown to affect cognitive processes only if the instructions require verbalization of information that would not otherwise be attended to". While this study refers to a somewhat different question-and-answer scenario than the one we are discussing, I do believe it is relevant, and effective, to sometimes speak aloud when trying to solve a problem.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Ericsson, K.A. & Simon, H.A. (1980) 'Verbal Reports as Data', Psychological Review, 87 (3), pp.215-251, PsycNET [Online]. Available from: http://psycnet.apa.org.ezproxy.liv.ac.uk/doi/10.1037/0033-295X.87.3.215 (Accessed: 3 October 2010).

Categories: Computing, Research, Software

RISC vs. CISC and Programming

CISC stands for 'complex instruction set computer'. This architecture is based on the argument that a CPU is able to perform large numbers of complex instructions and therefore should be used to do so, even if some of them are redundant (Brookshear, 2009, p.85). RISC stands for 'reduced instruction set computer'. This architecture is based on the argument that the CPU should have a minimal set of instructions, because once a CPU has a certain minimum set, adding more does not increase its capabilities and only decreases the speed and efficiency of the machine (Brookshear, 2009, p.85).
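As a loose, hypothetical illustration, not real machine code for any actual processor, the difference in flavour for a single memory-to-memory multiplication might be pictured like this:

```python
# Hypothetical instruction listings for computing memory[C] = memory[A] * memory[B].

# CISC flavour: one dense instruction; the processor's microcode
# performs the loads, the multiply and the store internally.
cisc_program = [
    ("MULT", "C", "A", "B"),
]

# RISC flavour: the same work spelled out as simple register steps.
risc_program = [
    ("LOAD",  "R1", "A"),         # fetch the first operand
    ("LOAD",  "R2", "B"),         # fetch the second operand
    ("MUL",   "R3", "R1", "R2"),  # multiply the two registers
    ("STORE", "R3", "C"),         # write the result back to memory
]

print(len(cisc_program), "CISC instruction vs", len(risc_program), "RISC instructions")
```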

The strengths of RISC over CISC are very apparent when reading through the paper by George (1990). He states that while CISC is more popular, and will no doubt remain so in the near future thanks to backward compatibility with current software, RISC has far more advantages going forward because of the "constantly changing body of knowledge used for processor design".

From my readings I believe that CISC processing is best suited to RAD (Rapid Application Development). At the current pace of the IT and software industry, it takes away the tedious programming of the run-of-the-mill operations required by everyday applications, letting us concentrate on breaking new boundaries in less complex software and design, and on newer, more complex developments. One issue that may hamper this, however, is that CISC processors are less efficient and take longer to process than RISC processors, and speed of execution is one of the most important factors in software today.

RISC processors give us an almost "clean slate" when programming. With a minimal instruction set, new methods of performing even the simplest operations can be developed that run faster than whatever has already been devised and pre-programmed into the CPU, as with CISC processors. RISC should also allow more complex, low-level development (more than likely larger, more advanced systems) at better performance, since there are no excessive, potentially unused instruction sets slowing things down.

As far as future replacement goes, the popularity of the CISC chip may give way to the RISC chip, as stated in the paper by George (1990). I tend to agree, though I would perhaps add the ability to store custom instructions in the CPU, allowing user-, operating-system- or software-defined instructions to be added dynamically to optimise a particular system configuration.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

George, A.D. (1990) 'An overview of RISC vs. CISC', Twenty-Second Southeastern Symposium on System Theory, 11-13 March, Cookeville. IEEE, pp.436-438.

Categories: Computing, Research, Software

Software Architects – How will this position change in 10 years?

The question I would like to discuss is: how do I think the position of a Software Architect will change in the next 10 years?

In ten years’ time I believe this position will definitely still exist and I am quite sure there will be many more holding this position than today.

In the future, I strongly believe software architects will have to base their software planning more heavily on developing middleware and collaborative systems ("cloud computing") than on developing brand new systems. In an article by Pokharel and Park (2009) it is mentioned that "Cloud computing is the future generation of computing". Today we see a number of large applications becoming increasingly dominant, and in my personal experience developing new software that "talks to" external systems is becoming more and more common. On one project I worked on, it was a potential deal-breaker if our software was unable to talk to SAP.
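As a small, hypothetical sketch of the kind of glue code this implies (the endpoint, payload fields and response format are invented here, and real SAP integration would go through its own interfaces), sending data to an external system over HTTP might look like:

```python
# Hypothetical integration sketch: push an order to an external
# system's HTTP API. The URL and field names are made up.
import json
import urllib.request

def send_order(order: dict) -> dict:
    payload = json.dumps(order).encode("utf-8")
    req = urllib.request.Request(
        "https://erp.example.com/api/orders",  # invented endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the external system's acknowledgement

# send_order({"customer": "ACME", "item": "WIDGET-7", "quantity": 3})
```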

 

In a paper by Medvidovic (2002) it is stated that "While architecture is an early model of a system that highlights the system's critical conceptual properties using high-level abstractions, middleware enables that system's realization and ensures the proper composition and interaction of the implemented components", followed by "The relationship between the two areas and their respective shortcomings suggest the possibility of coupling architecture modelling and analysis approaches with middleware technologies in order to get 'the best of both worlds'". I believe this is a substantial statement in support of my view.

 

The current focus of software architecture, in my personal experience, does seem to be the creation of largely "stand-alone" components, with the only real reliance being on the operating system the software will run on. The large, successful developments of today will be the focus of the middleware of tomorrow.

 

I believe that in 10 years' time the training required to perform the duties of a Software Architect will be much the same as today, but will include new training on existing systems and how to make them work together efficiently via APIs/middleware, along with a requirement for more experience in developing these modules.

 

As far as automation goes, I believe the current movement towards the use of programming frameworks will only increase, while low-level programming will be far less of a requirement when planning the development of a system. Using the components of a framework (one can think of a framework as a library of "functions" that perform common tasks so that the programmer does not have to write them) will enable a much larger focus on the bigger picture and allow more complex systems to be developed.
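As a rough illustration of the point, Python's standard library already behaves like a small framework here: serving files over HTTP takes a few lines, with the socket handling and request parsing supplied for the programmer.

```python
# Framework-style reuse: the library provides request parsing, socket
# handling and file serving; the programmer only wires it together.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
print("Serving the current directory on http://localhost:8000")
server.serve_forever()
```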

 

References

Medvidovic, N. (2002) 'On the role of middleware in architecture-based software development', SEKE, 27, pp.299-306, ACM [Online]. Available from: http://doi.acm.org/10.1145/568760.568814 (Accessed: 12 September 2010).

Pokharel, M. & Park, J. (2009) 'Cloud computing: future solution for e-governance', ACM International Conference Proceeding Series, 322, pp.409-410, ACM [Online]. Available from: http://doi.acm.org/10.1145/1693042.1693134 (Accessed: 12 September 2010).