Categories
Computing Ethics Research

Ethical Hacking?

I believe there are situations in which hacking a system would be ethical. Adams and McCrindle (2008) describe grey-hat attacks as having the aim of “identifying potential vulnerabilities and inform the organization of their weaknesses”; they also note that grey-hat attacks are often considered unethical because of the unintended consequences that may follow them. I do not think grey-hat techniques are ethical: they carry real risks, and it is unethical to attack a system you do not fully understand and could not repair if something went wrong, precisely because you lack knowledge of its inner workings.

In scenarios such as the hacker response to WikiLeaks, users attacked organisations and sites that failed to support WikiLeaks. An article by Neal (2010) described how a 16-year-old boy from the Netherlands was arrested for his part in the “Operation Payback” DDoS attacks on MasterCard and Visa. I also disagree with these tactics, as no good comes of them.

A scenario in which I would be pro-hacking is where the target system is either involved in illegal activities or is inciting them. Of course, hacking such a system should only come after the correct due-diligence measures have been followed, such as reporting the system to its host or to the authorities. An article by Brandt (2004) described how the NSA (the United States’ National Security Agency) appeared at the “Defcon 12 hackers’ conference” to seek out highly skilled “hackers” to work for the organisation. Conspiracy theories aside, this would be another ethical realm of hacking: investigating illegal activities to help fight crime, anything from tracking down distributors of child pornography over the internet to those who publish credit card details to the public.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s Box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons Ltd.

Brandt, A (2004) Feds Seek a Few Good Hackers [Online] PC World. Available from: http://www.pcworld.com/article/117226/feds_seek_a_few_good_hackers.html (Accessed: 12 December 2010).

Neal, D (2010) Dutch teen arrested over WikiLeaks revenge hacks [Online] V3. Available from: http://www.v3.co.uk/v3/news/2273867/wikileaks-paypal-hack (Accessed: 12 December 2010).

 

Categories
Computing Ethics Law Research

Responsibilities for Computing Professionals in Developing Material for the Internet

Responsibilities of the Computing Professional

The responsibilities of the computing professional, as covered in my previous posts, are both ethical and legal. It is our duty to inform and guide from our experience and expertise. The cliché of using our ‘powers’ for ‘good’ and not ‘evil’ applies broadly here, as it does to almost any other profession.

Responsibilities Relating to Development of Internet Material

The word ‘development’ here carries a double meaning. The first is the actual programming of “material”: any system that is available on the internet, generates content for it, or allows content to be generated on it. As Adams and McCrindle (2008, p.352) discuss, a number of malicious examples of such software, created by computing professionals, are readily available on the internet.

I’d like to briefly outline the relevant examples.

  1. Trojan horses: As the name suggests, these are programs that pose as something innocent but carry harmful code that can damage your data or perform some other illicit task.
  2. Virus: A term many people use to cover all forms of malicious software, but which in fact names a specific type. A virus can be carried by a Trojan horse and usually replicates itself into other files and programs on the computer, typically carrying out a task that harms data and possibly even hardware.
  3. Worm: These infections ‘worm’ their way through a network on their own, without needing a Trojan horse or virus in order to spread. To move beyond the current network they may also be carried by Trojan horses.
  4. Zombie: Programs that open ‘back doors’ into a system so that it can be accessed remotely and made to perform a number of tasks (often used in distributed denial-of-service attacks).

The second, less direct, responsibility of computing professionals concerns the “written” (typed) information we spread across the internet. Publicly releasing knowledge that could jeopardise systems is an ethical issue we need to take seriously. The decision can be difficult to make, but it should never be taken lightly.

Responsibilities Relating to the Usage of the Internet

Because the internet is global, reaching into many secure facilities, government agencies, banks and other authorities, securing the implementations of these systems must be a top priority. Adams and McCrindle (2008, p.368) describe black-, white- and grey-hat crackers and the controversial question of whether grey-hat techniques are in an organisation’s best interests. Personally I can see it both ways; it really comes down to the situation at hand. If grey-hat techniques simply identify back doors or other security threats without interfering with or harming the system, and provided the grey-hat crackers do not plaster the vulnerabilities all over the internet, they may be acceptable. A guide by the Electronic Frontier Foundation notes that grey-hat techniques may violate a number of laws, such as the Computer Fraud and Abuse Act, the anti-circumvention provisions of the DMCA, copyright law and various state laws, so it is probably best either to secure your research or to request permission before attempting such techniques.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s Box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons Ltd.

Electronic Frontier Foundation (n.d.) A “Grey Hat” Guide [Online]. Available from: http://www.eff.org/issues/coders/grey-hat-guide (Accessed: 5 December 2010).

Categories
Ethics Law Philosophy Research

The WIPO Copyright Treaty & Feasibility of Copyrights

As discussed by Adams and McCrindle (2008, p.423), the WIPO Copyright Treaty strengthens the moral rights of the author of a work; their example is that in Germany and France income derived from an author’s work must always partially flow back to the author. Adams and McCrindle also mention repeatedly that patents and copyrights were developed to encourage creativity and reward innovation. I am in full agreement with this principle and believe that creators of new innovations and ideas must be credited and compensated for their work. Under the WIPO Copyright Treaty (Adams and McCrindle, 2008, p.422), copyright extends for the life of the author plus 70 years after the author’s death.

The main limitation of copyright that I can see is that distribution, and the price at which a work is distributed, rests entirely on the copyright owner’s decisions. A discussion on Google Answers (2002) describes a case where an author published a book at a very high price and then died leaving the copyright to no heirs; the public must wait 70 years before the work can be reprinted at a more reasonable price to increase circulation.

On the feasibility of copyright, value-adds and levies can only be argued accurately when considered alongside the fees that the publisher, producer and so on add on top. Many argue from an idealist point of view (most commonly against musicians) that artists should do what they do for the love of the art and not for the money; but in the world we live in, money is an important aid to quality of life and enjoyment. (I am not saying it is what gives life quality, but it helps a great deal compared to poverty.) To echo Adams and McCrindle again: without reward for innovation and creativity, would as much hard work go into developing new medicines and techniques for helping people? Music and entertainment matter to this world too.

Never mind becoming rich and famous: simply having monetary compensation to pay the bills while working on potentially life-saving innovations is something we should certainly consider feasible.

While some may take advantage of these laws, we have to weigh that against the good that comes from them.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s Box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons Ltd.

Google Answers (2002) Q: Copyrights after an author’s death [Online]. Available from: http://answers.google.com/answers/threadview?id=21037 (Accessed: 28 November 2010).

Categories
Law Research

Privacy and Data Protection laws in South Africa

The South African Bill of Rights states that everyone has the right to privacy, which includes the right not to have their person, home or property searched, their possessions seized or the privacy of their communications infringed (South African Government, 2009).

The same Bill of Rights states that everyone has the right of access to any information held by the state and “any information that is held by another person and that is required for the exercise or protection of any rights” (South African Government, 2009).

South Africa also has the ECT Act (Electronic Communications and Transactions Act), which covers personal information obtained through electronic transactions and defines a set of rules between the person the information is about (the “data subject”) and the person or organisation (the “data controller”) holding that information. The act states that the data controller must abide by all of the following points:

“(1) A data controller must have the express written permission of the data subject for the collection, collation, processing or disclosure of any personal information on that data subject unless he or she is permitted or required to do so by law.

(2) A data controller may not electronically request, collect, collate, process or store personal information on a data subject which is not necessary for the lawful purpose for which the personal information is required.

(3) The data controller must disclose in writing to the data subject the specific purpose for which any personal information is being requested, collected, collated, processed or stored.

(4) The data controller may not use the personal information for any other purpose than the disclosed purpose without the express written permission of the data subject, unless he or she is permitted or required to do so by law.

(5) The data controller must, for as long as the personal information is used and for a period of at least one year thereafter, keep a record of the personal information and the specific purpose for which the personal information was collected.

(6) A data controller may not disclose any of the personal information held by it to a third party, unless required or permitted by law or specifically authorised to do so in writing by the data subject.

(7) The data controller must, for as long as the personal information is used and for a period of at least one year thereafter, keep a record of any third party to whom the personal information was disclosed and of the date on which and the purpose for which it was disclosed.

(8) The data controller must delete or destroy all personal information which has become obsolete.

(9) A party controlling personal information may use that personal information to compile profiles for statistical purposes and may freely trade with such profiles and statistical data, as long as the profiles or statistical data cannot be linked to any specific data subject by a third party.” (South African Government, 2002).

In contrast to the UK, South Africa does not have a specific Data Protection Act. If we look at the United Kingdom’s Data Protection Act 1998 (United Kingdom Government, 1998), we see that its section on “Rights of access to personal data” is almost the same as South Africa’s provisions, but it gives a much more comprehensive treatment of the subject.

Interestingly enough, the U.S. does not have a specific Data Protection Act either. It has the Privacy Act of 1974 and the Computer Matching and Privacy Protection Act, both of which apply only to personal information held by the government and do not cover other entities. The Privacy Act can be described as follows: “The act set forth some basic principles of “fair information practice,” and provided individuals with the right of access to information about themselves and the right to challenge the contents of records. It requires that personal information may only be disclosed with the individual’s consent or for purposes announced in advance. The act also requires federal agencies to publish an annual list of systems maintained by the agency that contain personal information.” (Stratford & Stratford, 1998).

References

South African Government (2009) Chapter 2 – Bill of Rights [Online]. Available from: http://www.info.gov.za/documents/constitution/1996/96cons2.htm#14 (Accessed: 14 November 2010).

South African Government (2002) Electronic Communications and Transactions Act, 2002, No. 25 of 2002 [Online]. Available from: http://www.internet.org.za/ect_act.html (Accessed: 14 November 2010).

Stratford, J.S & Stratford, J (1998) ‘Data Protection and Privacy in the United States and Europe’, IASSIST Conference, 21 May, Yale University. New Haven, Connecticut: University of California.

United Kingdom Government (1998) Data Protection Act 1998 [Online]. Available from: http://www.legislation.gov.uk/ukpga/1998/29/contents (Accessed: 14 November 2010).

Categories
Computing Ethics Philosophy Research

Ethical Responsibilities of the Computing Professional

What responsibilities do we as computing professionals have in our industry? Do we have a responsibility solely to follow the goals and policies of our company?

Computing professionals, in my opinion, do have ethical responsibilities, but I believe that in some circumstances these responsibilities are unattainable due to external circumstances.

In general, I believe a computing professional should be able to grasp the goal of the system or systems they are working on, not only to make ethical judgements but also to perform their role in the development from an informed point of view. If the professional is aware of the overall goal the system is being developed for, and of its implications, he or she should be able to judge whether they approve or disapprove of the ethics behind it.

The problem with ethics is that different people and cultures hold different beliefs about right and wrong. An organisation should therefore establish a code of ethics to avoid blurred interpretation, and so that prospective employees can review it before deciding to apply for a job there (Payne, 2003).

To answer directly the question of what the computing professional’s responsibility to society at large is: it is to keep the views of the user and the law in mind while fulfilling their responsibilities within their organisation; to look at the system from a user’s perspective and consider the effects it may have, both positive and negative, on the general populace; and not to knowingly jeopardise a system by infringing on copyrights or patents (Adams & McCrindle, 2008, p.10).

That being said, I do not think the blame should lie with the professional. With today’s cost of living, you cannot simply leave your current employer (and salary) because you believe that what they are doing is, by your own definition, wrong.

I feel that the goals of such projects, and the determination of right and wrong in the broader scheme, fall within the area of business ethics and rest with the organisation and the project’s decision makers more than with the professionals carrying out the tasks.

To summarise, I would say the responsibility of the professional is to carry out their role in the project to the best of their ability, and to ‘care’ about what they are doing with the bigger picture in mind rather than just going through the motions; this will hopefully ensure a quality product. The business ethics of right and wrong are more the responsibility of the organisation.

References

Adams, A.A. & McCrindle, R.J. (2008) Pandora’s box: Social and professional issues of the information age. West Sussex, England: John Wiley & Sons, Ltd.

Payne, D (2003) ‘Engineering ethics and business ethics: commonalities for a comprehensive code of ethics’, IEEE Region 5, 2003 Annual Technical Conference, pp.81-87, IEEE Xplore [Online]. DOI: 10.1109/REG5.2003.1199714 (Accessed: 7 November 2010).

Categories
Computing Ethics Philosophy Research Software

What questions to ask in the Turing Test?

This issue is based on the “Turing Test”: a test first proposed by Alan Turing in which a human and a computer are each asked a series of questions, and if the interrogator is unable to tell which respondent is the computer, the computer has passed the test; it can be said to “think”.

The 5 questions that I would ask the “computer” in the Turing Test would be the following:

  1. What was the most influential event of your childhood and how do you feel this event affects you today?
  2. Who are you as a person?
  3. Describe your feelings if you were to be given the opportunity to fly to the moon.
  4. If you were to draw yourself as an abstract painting, what colours and shapes would you use and why?
  5. What emotions have been involved in answering the questions that I have given you up to this point and what do you feel is the strongest question out of the 4?

I have chosen these questions because they are all fairly psychological and open to interpretation. While each question can be answered on its own, as a group they describe a person’s personality and character in an abstract way.

The answers to questions 1 and 4 should give you the same impression as the answer to question 2. Question 5 draws all the questions together and should be difficult to simulate a valid response to, since it depends entirely on the answers given to the previous four questions; each of those answers can change the final response in its own way. The second part of question 5 can also be judged against the human emotions involved in the answers to the previous four questions.

There is no definitively correct answer to any of the questions, but human intuition gives the interrogator the upper hand in deciding whether the answers point to a human or a machine.

PS: Whether this test can really test for machine thought (AI) is highly debated, and some have shown that a series of pre-programmed answers triggered by keywords may be enough to pass it.
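To illustrate that last point, here is a minimal, hypothetical sketch in PHP of a keyword-triggered canned-answer responder; the keywords and replies are my own invention, but the principle is what lets such programs fool an unwary interrogator.

function cannedReply($question) {
    // Each entry maps a keyword to a pre-programmed, human-sounding answer.
    $rules = array(
        'childhood' => 'Moving to a new town when I was eight; I still get anxious about change.',
        'feel'      => 'Honestly, a mix of excitement and nerves.',
        'colour'    => 'Deep blues and soft circles, because I like calm and routine.',
        'who'       => 'I am someone who overthinks things but means well.',
    );
    foreach ($rules as $keyword => $answer) {
        if (stripos($question, $keyword) !== false) {
            return $answer;
        }
    }
    // Fall back to a vague, deflecting reply when no keyword matches.
    return 'That is a hard one to put into words; could you ask it another way?';
}

echo cannedReply('What was the most influential event of your childhood?');

A respondent built this way has no understanding of the questions at all; it simply pattern-matches, which is exactly why open-ended, interdependent questions like the five above are harder for it to fake.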

Categories
Computing Ethics Research

Ethical issues with advances in medical technology

I’d like to discuss the issue outlined as follows:

“Medical treatment has advanced to the point that numerous parts of the human body can now be replaced with artificial parts or parts from human donors. It is conceivable that this might someday include parts of the brain. What ethical problems would such capabilities raise? If a patient’s neurons were replaced one at a time with artificial neurons, would the patient remain the same person? Would the patient ever notice a difference? Would the patient remain human?” Brookshear (2009, p.553).

The ethical problems with replacing parts of the brain with “artificial” parts would centre on “playing God”. Beliefs about God and religion would play a large role in the ethics here: such technology defeats death, while many hold the belief that we die at a certain time for a reason.

Aside from that, there is the issue that prolonging lives can contribute to overpopulation. The more living beings there are on the planet, the more resources are required to sustain them, eating into the earth’s already limited resources.

If a patient had artificial neurons replacing their existing ones, the theory is that neurons ‘learn’ from experience; if the artificial neurons are placed alongside the existing ones, they should learn from them, and the person should therefore remain the same. That said, how many neurons are replaced, and what ‘intelligence’ the lost neurons contained, could certainly affect the person.

I have done some research on the question of ‘What makes us human?’ and browsed a few websites discussing it.

Interestingly, none of them really delved into the fact that we are living, breathing organisms (and that this is what makes us human); rather, they centre on the idea that our intelligence, emotions and creativity are what make us human.

With the dawn of this new era of AI (even though the field dates back to the 1950s), I think we are going to have to revisit our definition of what makes us human to include the above.

Personally, I think that if we maintain our emotions, unique personality and creativity, we are still human, irrespective of the fact that we may have some artificial organs. If we lose the ability to have emotions, feelings and ideas, then I feel we have lost what makes us human. What is the point of replacing an entire brain if the result is a totally different ‘person’ in the same skin? I do believe there are boundaries that should be kept. This subject is huge, and I don’t think it is as easy as one might imagine to answer whether it is right or wrong. Personally, I lean towards being in favour of it, but then, as I mentioned above, how are we going to tackle overpopulation and the diminishing resources of this planet?

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Development Research

OOP vs Structured Programming – Why is OOP better?

Structured programming, described in my reading as the forerunner to object-oriented programming (Wikipedia, 2010), had all the best intentions of OO without the best execution. Because structured programming is hierarchical, every entity tends to rely entirely on other entities to function; the existence of subsystems was a good idea, but compared with the way OO separates those subsystems I can see why OO is the better choice.

OO programming also represents our natural way of thinking quite well. A human being is an entity in its own right, with data of its own (name, age, etc.) and ‘functions’ of its own (the ability to walk, talk, breathe, etc.); if we think of an employee, we know it is a human being with some additional ‘extensions’ (an extension of the human-being class/entity). Even thinking of the world as a program with different entities filling different roles is quite relatable to everyday thought. I believe this, aside from its efficiency, is probably why OO is so widely accepted.
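As a minimal sketch of that human/employee analogy (my own example in PHP 5 style; the class and property names are illustrative only):

class HumanBeing {
    public $name;
    public $age;

    public function __construct($name, $age) {
        $this->name = $name;
        $this->age  = $age;
    }

    public function walk() {
        return "{$this->name} is walking";
    }
}

// An employee is "a human being with extensions": extra data and behaviour.
class Employee extends HumanBeing {
    public $employer;

    public function __construct($name, $age, $employer) {
        parent::__construct($name, $age);
        $this->employer = $employer;
    }

    public function work() {
        return "{$this->name} is working for {$this->employer}";
    }
}

$employee = new Employee('Samantha', 32, 'Acme Ltd');
echo $employee->walk(); // inherited from HumanBeing
echo $employee->work(); // specific to Employee

The point is that Employee inherits the data and behaviour of HumanBeing and adds only what is specific to being an employee, which mirrors how we naturally describe the relationship.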

Structured programming becomes more confusing as an application grows: instead of the subsystems being broken into stand-alone entities, they are all intertwined with one another. Following the flow of such a program would be difficult for most people.

A real-world example I deal with quite often is maintaining PHP systems written before PHP was object oriented (or written by programmers who still prefer procedural over OO style today). These systems consist of a handful of files containing thousands of lines of code, with functions and procedural code mixed together. Following the logic in such systems can take days, whereas with object-oriented code it is easy to find where the logic and functionality for an entity or process lives.

References

Wikipedia (2010) Structured Programming [Online]. Available from: http://en.wikipedia.org/wiki/Structured_programming (Accessed: 10 October 2010).

Categories
Computing Development Research

PHP vs JavaScript – What are the differences?

The two programming languages I would like to compare are firstly, PHP and secondly, JavaScript.

PHP (a recursive acronym for ‘PHP: Hypertext Preprocessor’) is a young language compared to the likes of Perl, C and other widely popular languages. It was started in 1995 by Rasmus Lerdorf, initially as a simple set of Perl scripts he named ‘Personal Home Page Tools’; he later expanded it, reimplemented it in C, and released it as an open-source code base for all to use (The PHP Group, 2010). It has since evolved from a procedural language to one with full object-oriented support, introduced in the PHP 5 release (July 2004).

PHP is strictly a server-side language, meaning that compilation, interpretation and execution happen on the host computer it runs on. Its syntax is similar to C and Java, but it drops certain rules of those languages (such as declaring a variable’s type before use, or the need to declare variables at all before using them). PHP is intended predominantly for web applications, but it can also be used for non-web scripting (command-line tools, etc.), although in my experience it is not well suited to background processing, as it tends to use far more resources than languages like Perl and Python.
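For instance, a trivial sketch of the point about declarations (my own example):

// No declarations or type annotations are needed; a variable's type can even change.
$counter = 1;               // starts life as an integer
$counter = 'one';           // now holds a string, with no complaint from PHP
echo strtoupper($counter);  // ONE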

JavaScript was developed by Netscape to run in Netscape browsers and, despite its name, is not a subset of Java, nor was it developed by Sun (Crockford, 2001). It is not a class-based OO language: although it has functions and objects, it does not group entities into classes (it uses prototypes instead). JavaScript is a client-side language, meaning that it uses the client computer’s resources and interpreter to execute its instructions. This keeps the load on the server very light, but it also puts performance in the hands of whatever hardware the user runs the script on, which, given that JavaScript lives in web pages, is about the broadest spectrum of hardware one could face when programming for consumers.

Both languages, I feel, are often misunderstood because just about any technical person can write a simple script in PHP or JavaScript and call themselves a programmer. The resulting lack of training and expertise often gives the languages a bad name.

Similarities:

1)    Both languages are used almost exclusively on the web and were created specifically for web use in the mid-1990s.

2)    Syntax styles are both based on C

3)    Until recently, when PHP gained full object-oriented support, neither language was officially an OO language.

4)    Both languages are platform independent (though an interpreter or ‘runtime environment’ is required).

Differences:

1)    PHP is server side while JavaScript is client side.

2)    PHP makes use of classes while JavaScript only has constructors and functions.

3)    JavaScript is used for a lot of visual effects and enhancements for web GUIs.

4)    Users are able to disable JavaScript entirely while browsing the internet, because it is executed client-side.

 

A simple example that highlights the server-side/client-side difference between PHP and JavaScript is form validation. If you implement validation on a web form using JavaScript (e.g. not allowing users to enter an invalid email address or leave their name blank), the user can bypass those checks simply by changing their browser options to disable JavaScript. If you validate the form fields in PHP, the user must first submit the form to the server, and the server decides whether the data passed to it is valid; there is no way for the user to disable this from their computer.
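A rough sketch of the server-side half of that example, assuming a form that posts ‘name’ and ‘email’ fields (the field names and messages are my own, hypothetical choices):

// validate.php - a minimal server-side check of a submitted form.
$errors = array();

$name  = isset($_POST['name'])  ? trim($_POST['name'])  : '';
$email = isset($_POST['email']) ? trim($_POST['email']) : '';

if ($name === '') {
    $errors[] = 'Name may not be blank.';
}
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    $errors[] = 'Please supply a valid email address.';
}

if (count($errors) > 0) {
    // Re-display the error messages; the user cannot skip this step.
    foreach ($errors as $error) {
        echo htmlspecialchars($error), '<br>';
    }
} else {
    echo 'Form accepted.';
}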

An example of how similar the syntaxes are can be shown in the basic function below to convert a list of names to uppercase.

PHP:

function convertUpper($names) {
    for ($i = 0; $i < count($names); $i++) {
        $names[$i] = strtoupper($names[$i]);
    }
    return $names;
}

$names = array('michael', 'john', 'samantha');
$names = convertUpper($names);

JavaScript:

function convertUpper(names) {
    for (var i = 0; i < names.length; i++) {
        names[i] = names[i].toUpperCase();
    }
    return names;
}

var names = ['michael', 'john', 'samantha'];
names = convertUpper(names);

References

Crockford, D (2001) JavaScript: The World’s Most Misunderstood Programming Language [Online]. Available from: http://www.crockford.com/javascript/javascript.html (Accessed: 10 October 2010).

The PHP Group (2010) History of PHP [Online]. Available from: http://www.php.net/manual/en/history.php.php (Accessed: 10 October 2010).

 

Categories
Computing Development Research Software

What makes one algorithm better than another?

There are two main aspects that make one algorithm better than another. The first is how efficient the algorithm is in terms of the time it takes to complete its calculations and come to a result (to terminate).

Consider the examples outlined by Brookshear (2009) of a binary search and a sequential search for a specific item in an ordered list. The sequential search runs through the list entry by entry until it finds a match, so on average it examines about half of the list; if the list is very large, this adds up to a significant amount of time per search. A binary search looks at the middle entry, determines whether the target lies in the first or the second half of the list, then halves the relevant half again, repeating the process until it finds the item it is looking for. One can plainly see that on average the binary search accesses far fewer records than the sequential search.
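As a rough illustration of the two approaches (my own sketch in PHP, not Brookshear’s code), both searching the same ordered list of numbers:

// Sequential search: examine each entry in turn.
function sequentialSearch($list, $target) {
    for ($i = 0; $i < count($list); $i++) {
        if ($list[$i] == $target) {
            return $i;
        }
    }
    return -1; // not found
}

// Binary search: repeatedly halve the portion of the ordered list that
// can still contain the target.
function binarySearch($list, $target) {
    $low  = 0;
    $high = count($list) - 1;
    while ($low <= $high) {
        $mid = (int) (($low + $high) / 2);
        if ($list[$mid] == $target) {
            return $mid;
        } elseif ($list[$mid] < $target) {
            $low = $mid + 1;  // target is in the upper half
        } else {
            $high = $mid - 1; // target is in the lower half
        }
    }
    return -1; // not found
}

$ordered = array(2, 5, 8, 12, 16, 23, 38, 56, 72, 91);
echo sequentialSearch($ordered, 23); // 5, after six comparisons
echo binarySearch($ordered, 23);     // 5, after only three comparisons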

The other aspect of measurement is the space the algorithm consumes while performing its operations. A simple way of explaining this is sorting a list alphabetically, comparing two methods: a bubble sort versus building a new list in the correct order. The bubble sort works on the list in small pieces, shuffling adjacent entries around until the entire list is sorted, so it only ever occupies the memory of the original list. Sorting into a new list keeps the memory occupied by the original list and creates a second, complete list containing the same entries in the correct order; the latter therefore occupies roughly double the memory of the former.
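A minimal sketch of the in-place option (again my own example, not from Brookshear): the sort swaps entries within the array it was given, so no second list is ever allocated.

// In-place bubble sort: repeatedly swap adjacent out-of-order entries.
// The array is passed by reference, so the caller's list is sorted directly.
function bubbleSort(&$list) {
    $n = count($list);
    for ($i = 0; $i < $n - 1; $i++) {
        for ($j = 0; $j < $n - 1 - $i; $j++) {
            if (strcmp($list[$j], $list[$j + 1]) > 0) {
                // Swap the two neighbouring entries within the same array.
                $temp         = $list[$j];
                $list[$j]     = $list[$j + 1];
                $list[$j + 1] = $temp;
            }
        }
    }
}

$names = array('samantha', 'john', 'michael');
bubbleSort($names);  // $names is now sorted in place
print_r($names);     // john, michael, samantha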

References

Brookshear, J.G (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.