Categories
Computing Development Research

OOP vs Structured Programming – Why is OOP better?

Structured programming is described in my readings as the forerunner to object-oriented programming (Wikipedia, 2010); it had all the best intentions of OO without the best execution. Because structured programming is hierarchical, every entity tends to rely entirely on other entities to function. The existence of subsystems was a good idea, but compared with the clean separation of those subsystems in OO, I can see why OO is the better choice.

In terms of our natural way of thinking, OO programming represents our thought process quite well. A human being is an entity on its own, with data of its own (name, age, etc.) and ‘functions’ of its own (the ability to walk, talk, breathe, etc.). If we think of an employee, we know it is a human being with some additional ‘extensions’ (an extension of the class/entity human being). Even thinking of the world as a program with different entities assigned to different roles is quite relatable. I believe this is probably why, aside from its efficiency, OO is so well accepted.
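
To put that in code, here is a minimal PHP sketch of my own (the class and property names are purely illustrative):

class HumanBeing {
    public $name;
    public $age;

    public function __construct($name, $age) {
        $this->name = $name;
        $this->age = $age;
    }

    // A 'function' every human being has.
    public function walk() {
        return $this->name . ' is walking.';
    }
}

// An employee is a human being with some additional 'extensions'.
class Employee extends HumanBeing {
    public $jobTitle;

    public function work() {
        return $this->name . ' works as a ' . $this->jobTitle . '.';
    }
}

An Employee inherits all the data and behaviour of HumanBeing and adds only what is specific to being an employee.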

Structured programming becomes more confusing as the size of the application grows: instead of the subsystems being broken into stand-alone entities, they are all intertwined with one another. Following the flow of something like this would be difficult for most people.

A good real-world example I deal with quite often is maintaining PHP systems written before PHP was object oriented (or, depending on the previous programmer, written in their preferred procedural style even today). These systems consist of a handful of files, each thousands of lines long, with functions and procedural code mixed together. Following the logic in such a system can take days, whereas with object-oriented code it is easy to find where the logic and functionality for an entity or process lives.

References

Wikipedia (2010) Structured Programming [Online]. Available from: http://en.wikipedia.org/wiki/Structured_programming (Accessed: 10 October 2010).

Categories
Computing Development Research

PHP vs JavaScript – What are the differences?

The two programming languages I would like to compare are PHP and JavaScript.

PHP (PHP: Hypertext Preprocessor) is a young language compared to the likes of Perl, C and other widely popular languages. It was started in 1995 by Rasmus Lerdorf, initially as a simple set of Perl scripts he named ‘Personal Home Page Tools’; he later expanded it, reimplemented it in C, and released it as an open source code base for all to use (The PHP Group, 2010). It has since evolved from a procedural language to an OO language with its PHP 5 release (July 2004).

It is strictly a server-side language, which means parsing, translation and execution are performed on the host computer it runs on. Its syntax is similar to C/Java, but it lacks certain rules of those languages (such as declaring a variable as a certain type before using it, or the need to declare variables at all before using them). PHP’s purpose is predominantly web applications, but it can be used for non-web scripting (command line tools, etc.), although, from my experience, it is not well suited to background processing, as it tends to use far more resources than languages like Perl and Python.

JavaScript is a language that was developed by Netscape to run in Netscape browsers and, despite its name, is not a subset of the Java language, nor was it developed by Sun (Crockford, 2001). It is technically not a class-based OO language: although it has functions and objects, it does not group entities into classes (it uses prototypes instead). JavaScript is a client-side language, meaning it uses the client computer’s resources and interpreter to execute its instructions. This makes it very light on the server’s load, but it also puts performance in the hands of whatever hardware the user is accessing the script from – which, given that JavaScript exists for web pages, is probably the broadest spectrum one could program for.

Both languages, I feel, are often misunderstood, because just about any technical person is able to write a simple script in PHP or JavaScript and call themselves a programmer. Often, through lack of training and expertise, the languages are given a bad name.

Similarities:

1)    Both languages are almost exclusively for the web and were created specifically for web use in the mid-90s.

2)    Syntax styles are both based on C.

3)    Until recently, with PHP going fully OO, neither language was officially an OO language.

4)    Both languages are platform independent (though an interpreter or ‘runtime environment’ is required).

Differences:

1)    PHP is server side while JavaScript is client side.

2)    PHP makes use of classes while JavaScript only has constructors, functions and prototypes.

3)    JavaScript is used for a lot of visual effects and enhancements in web GUIs.

4)    Users are able to disable JavaScript entirely while browsing due to its client-side execution.

A simple example to outline the server-side/client-side JavaScript and PHP differences would be form validation. If you implement validation on a web form using JavaScript (eg: not allowing users to enter in an invalid email address or leave their name blank), the user is able to bypass the security checks by simply changing their browsers options to disable JavaScript. In PHP, if you validate form fields, the user must first submit the form to the server and the server will decide whether the data passed to it is correct or not, there is no way for the user to disable this from their computer.
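
A minimal sketch of the server-side half might look like this (my own example – the form field names are hypothetical):

// Runs on the server after the form is submitted; the user cannot disable it.
$name  = isset($_POST['name']) ? trim($_POST['name']) : '';
$email = isset($_POST['email']) ? $_POST['email'] : '';

$errors = array();

if ($name == '') {
    $errors[] = 'Name may not be blank.';
}

if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
    $errors[] = 'Please enter a valid email address.';
}

if (count($errors) > 0) {
    // Redisplay the form with the error messages.
} else {
    // The data is safe to process.
}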

An example of how similar the syntaxes are can be seen in the basic function below, which converts a list of names to uppercase.

PHP:

function convertUpper($names) {
    for ($i = 0; $i < count($names); $i++) {
        $names[$i] = strtoupper($names[$i]);
    }
    return $names;
}

$names = array('michael', 'john', 'samantha');
$names = convertUpper($names);

JavaScript:

function convertUpper(names) {
    for (var i = 0; i < names.length; i++) {
        names[i] = names[i].toUpperCase();
    }
    return names;
}

var names = ['michael', 'john', 'samantha'];
names = convertUpper(names);

References

Crockford, D. (2001) JavaScript: The World’s Most Misunderstood Programming Language [Online]. Available from: http://www.crockford.com/javascript/javascript.html (Accessed: 10 October 2010).

The PHP Group (2010) History of PHP [Online]. Available from: http://www.php.net/manual/en/history.php.php (Accessed: 10 October 2010).

Categories
Computing Development Research Software

What makes one algorithm better than another?

There are two main aspects that make one algorithm better than another. The first I would like to discuss is the measure of how efficient an algorithm is in terms of the time it takes to complete its calculations and come to a result.

Consider the examples outlined by Brookshear (2009) of a binary search and a sequential search for a specific item in an ordered list. The sequential search runs through the list one entry at a time until it finds a match, so on average it will examine half of the list – if the list is very large, this adds up to a significant amount of time per search. A binary search looks at the middle entry, determines whether the target falls in the first or second half of the list, then halves the relevant half again, repeating the process until it finds the item. One can plainly see that the binary search will access far fewer records on average than the sequential search.
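
A rough PHP sketch of the two approaches (my own illustration, assuming the list is already sorted):

// Sequential search: examines entries one by one; on average about half the list.
function sequentialSearch($list, $target) {
    for ($i = 0; $i < count($list); $i++) {
        if ($list[$i] == $target) {
            return $i;
        }
    }
    return -1;
}

// Binary search: halves the search space on every step, so even a list of
// a million entries needs at most about 20 comparisons.
function binarySearch($list, $target) {
    $low = 0;
    $high = count($list) - 1;
    while ($low <= $high) {
        $mid = (int)(($low + $high) / 2);
        if ($list[$mid] == $target) {
            return $mid;
        } elseif ($list[$mid] < $target) {
            $low = $mid + 1;
        } else {
            $high = $mid - 1;
        }
    }
    return -1;
}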

Another aspect of measurement is the physical space the algorithm consumes while performing its calculations/operations. A simple way of explaining this is sorting a list alphabetically, comparing a bubble sort against creating a new list in the correct order. The bubble sort works through the list in small pieces, shuffling entries around until the entire list is sorted; this way we only ever occupy the memory/space of the original list. When sorting into a new list, the memory occupied by the original list is still in use, and an entire second list is created from the same entries in the correct order – the latter occupies double the memory of the former.
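
A minimal bubble sort in PHP to illustrate the in-place approach (my own sketch):

// Bubble sort: repeatedly swaps adjacent entries that are out of order,
// sorting within the original list rather than building a second one.
function bubbleSort($list) {
    $n = count($list);
    for ($i = 0; $i < $n - 1; $i++) {
        for ($j = 0; $j < $n - 1 - $i; $j++) {
            if (strcmp($list[$j], $list[$j + 1]) > 0) {
                $temp = $list[$j];
                $list[$j] = $list[$j + 1];
                $list[$j + 1] = $temp;
            }
        }
    }
    return $list;
}

All the swapping happens within the one array, whereas the second method holds both the original list and the new sorted copy in memory at the same time.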

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Development Research Software

What are good problem-solving techniques for developing algorithms?

From personal experience, I often use past experience to aid me in the problem-solving process; I find it one of the most valuable tools there is. At the same time, one should not be confined to past experience, as new methods of problem solving are always being discovered – discovering and developing new ways of doing things is, in fact, one of the main points of the IT industry.

Another method of problem solving I use is researching similar problems on the Internet. The Internet is a fantastic resource for finding people with similar problems, or who have already solved algorithms similar to the one I am trying to solve. If a solution is proving difficult to find then, depending on the size of the algorithm I am trying to develop, it is still possible to research or have experience with subsets of the algorithm, which is a good way to get your ‘foot in the door’ as described by Brookshear (2009, p.218).

Perhaps too obvious to mention, but I feel it is worth it: all of the above constitutes a method of trial and error. Commonly you will try a possible solution that does not work, then move on to another few solutions before finding the most suitable, or correct, algorithm.

As mentioned by Brookshear (2009, p.216), the mathematician G. Polya developed an outline of the problem-solving process, which consists of:

  1. Understand the problem.
  2. Devise a plan for solving the problem.
  3. Carry out the plan.
  4. Evaluate the solution for accuracy and for its potential as a tool for solving other problems.

It is important to remember, as also described by Brookshear (2009, p.216), that “we should emphasize that these phases are not steps to be followed when trying to solve a problem but rather phases that will be completed sometime during the process”.

Two other methods mentioned by Brookshear (2009, p.220) are the ‘top-down methodology’ or ‘stepwise refinement’, the process of “first breaking the original problem at hand in terms of several subproblems”, and its opposite, the ‘bottom-up methodology’, in which we solve the problem starting with the specifics and working up to the general.
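
As a hypothetical PHP sketch of the top-down approach – the original problem is first stated in terms of subproblems, each of which is then refined on its own (all the function names below are invented for illustration):

// Stepwise refinement: the original problem, expressed as subproblems.
function produceMemberReport($members) {
    $valid  = removeInvalidEntries($members); // subproblem 1
    $sorted = sortByLastName($valid);         // subproblem 2
    return formatAsReport($sorted);           // subproblem 3
}

// Each subproblem is then refined (solved) separately, for example:
function removeInvalidEntries($members) {
    return array_filter($members, 'strlen'); // drop empty entries
}

function sortByLastName($members) {
    sort($members); // assuming 'Surname, First name' strings
    return $members;
}

function formatAsReport($members) {
    return implode("\n", $members);
}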

Another method I sometimes use, as described by Ericsson and Simon (1980), is to verbalise the problem at hand: “Within the theoretical framework of human information processing, we discuss different types of processes underlying verbalization and present a model of how subjects, in response to an instruction to think aloud, verbalize information that they are attending to in short-term memory (STM). Verbalizing information is shown to affect cognitive processes only if the instructions require verbalization of information that would not otherwise be attended to”. While this study refers to a somewhat different question-and-answer scenario than the one we are discussing, I do believe it is relevant, and it is sometimes effective to speak aloud when trying to solve a problem.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Ericsson, K.A. & Simon, H.A. (1980) ‘Verbal Reports as Data’, Psychological Review, 87 (3), pp.215-251, PsycNET [Online]. Available from: http://psycnet.apa.org.ezproxy.liv.ac.uk/doi/10.1037/0033-295X.87.3.215 (Accessed: 3 October 2010).

Categories
Computing Operating Systems Research

The conflicting, dual definitions of the purpose of an Operating System

I was posed this question in my studies. An operating system is said to have two conflicting definitions of purpose:

  1. Presenting to the user a virtual machine with a user-friendly GUI that isolates them from the low-level hardware
  2. Managing, efficiently, the limited resources of the hardware system

Firstly, I tend to agree that the two conflict in nature. If we think of the operating system as a whole, and then of the goal of efficiently managing resources, it does not take long to realise that the nature of an operating system is not entirely geared towards efficiency. On the surface, the limited resources of the computer are often spent on things like graphical effects and enhancements – transparency, animation – and the resources required to perform these seemingly meaningless tasks can be very taxing on the system hardware.

Looking at the question again, specifically the phrase “manage in the most efficient way the (always) limited resource of the computer system” (University of Liverpool, 2010), I would say the operating system does manage the computer’s resources quite efficiently: regardless of the task at hand, the computer operates quite smoothly while multi-tasking. Just looking at the Task Manager in Windows 7, you can see how many tasks are running at any one time while the computer remains responsive and seemingly effortless. This does, of course, depend entirely on the hardware specification your computer is running – RAM, CPU speed, etc. – but if you meet the minimum requirements, performance is generally as expected.

My conclusion is that the two do work together quite effectively: in this specific answer of mine, Windows 7 as an operating system is very user friendly while still managing the computer’s resources efficiently and quickly (unlike its predecessor, Vista, which did not manage resources as well). The concept of a process is absolutely vital to the success of both managing the hardware efficiently and providing a user-friendly environment. A process is defined as a dynamic activity whose properties change as time progresses (Brookshear, 2009, p.134); coupled with multiprogramming, it is the way in which different activities and resources are managed and organised. Without this there would be chaos, and I believe the computer would be sent back to the days of batch processing single tasks.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Development Operating Systems

My current “IT” setup – is it performing adequately?

As a desktop operating system I use Windows 7 Professional (64-bit). The networks at both my home office and work office are a combination of Ethernet/bus and Wi-Fi. It is worth mentioning that all the ‘test’ and ‘live’ environments I work on (aside from some testing done on my PC/laptop itself) are Linux based, specifically Ubuntu Linux and CentOS Linux, and that my immediate colleagues run Apple Macs.

I believe the systems are performing to the baseline of expectations in some respects and doing exactly what is needed in others, with speed being a major drawback, and permissions across shared drives/folders more of an inconvenience than a show-stopper.

From an operating system perspective, I feel Windows is lacking in its command-line shell compared to its Unix-based counterparts. Installing the server software I work with (Apache, PHP, MySQL, SSL) is in parts very easy, but more advanced setups, such as SSL with Apache, are not as easy on Windows as on Linux. The pros of Windows, I believe, are the vast availability of software, not being tied to specific hardware (as with Apple), and its very straightforward ease of use (especially compared to Linux), as well as how easy it is to add and remove software without it having to come through repositories made specifically for the OS (as with Apple and Linux).

As far as the network goes, when streaming data, copying large amounts of data, or just working on a network drive (especially when using SVN across a repository with thousands of files), Wi-Fi falls very short of what I would accept as efficient performance. Currently we run 100 Mbit LAN to help alleviate this, but then there is the further bottleneck of our rather old network server – though that does not really matter in this specific situation.

Because we work with multiple programmers, the benefit of editing directly on the test server and seeing our changes instantly (our network drives are located on the test server) outweighs the alternative of each developer working locally and doing an SVN commit for every code change just to check it on the test server. Although the time it takes to commit all our changes to live from the network drive is far longer than it should be, I would say the positive still outweighs the negative.

At the moment our internal office network is secured by a basic ADSL router, our Wi-Fi by the router’s built-in wireless security, and all of our work on the test and live servers is done over SSH. I believe that for the purposes of our organisation this security is sufficient. While one could see it as the “standard” level of security, we do not deal with vital user information on our local network, and each computer is equipped with antivirus and firewall software that updates daily. I personally believe that overdoing security, while it does lower your risk, can be overkill if you are not dealing with mission-critical data (such as banking information).

If I were to modify just one element, I would designate more responsibility to our test server: at the moment it serves as the DHCP server, while DNS and routing/gateway duties sit with the router. Centralising DNS on the test server as well would spare us having to alter our hosts files when working remotely to point at the test server’s external IP instead of its internal LAN IP; if DNS were handled by the test server, the IP address designation could be altered at the source instead of on each user’s computer.

Categories
Computing Research

Would it be a good idea to carry type designations in data cells of hardware?

I would like to answer this question without concentrating too much on polymorphism itself. Looking at this suggestion, I immediately think back to the RISC vs. CISC argument. By adding the type designation to the hardware device, we lean towards the RISC attitude to computing instructions: giving the CPU the type designation directly from the hardware decreases the computation the CPU must do and allows it to get on with what it needs to do, because the work has already been done on the hardware’s side.

While one could argue this would be a good idea because it would lessen the number of errors that occur, I also see an increased opportunity for errors. I say this with reference to the chapters in Brookshear where (in very abstract summary) it is stated that certain electrical interference may alter the state of a bit, potentially corrupting the data being stored or sent (Brookshear, 2009). The more we rely on outside components holding imperative data and having it transferred to and from components, the more we need to cater for the occurrence of errors. If the work remained in the CPU, it would only need to receive the “dumb” data from the devices, and there would be less scope for errors coming from the hardware components.
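
One of the techniques Brookshear describes for catching such errors is the parity bit; here is a minimal PHP sketch of even parity (my own illustration):

// Even parity: append a bit so the total number of 1s is even.
function parityBit($bits) {
    return (substr_count($bits, '1') % 2 == 0) ? '0' : '1';
}

$data = '1011001';
$sent = $data . parityBit($data); // '10110010'

// On arrival: if any single bit flipped in transit, the count of 1s
// is now odd and the error is detected.
$intact = (substr_count($sent, '1') % 2 == 0);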

Another issue I see is the increased volume of data flowing along the controllers and buses of the computer. The rate of transfer along the bus is more than likely far slower than the time the CPU needs to compute the instruction along with its types.

My final issue with this notion is financial feasibility. As more programming goes into the hardware devices, more complexity goes into the software developed to control them, and costs would no doubt increase as well.

With all that said, I think that in an ideal world this idea would be a good way to share the CPU’s load.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

Categories
Computing Research Software

RISC vs. CISC and Programming

CISC stands for ‘complex instruction set computer’; this architecture is based on the argument that a CPU is able to perform large numbers of complex instructions and therefore should be used to do so, even if some are redundant (Brookshear, 2009, p.85). RISC stands for ‘reduced instruction set computer’; this architecture is based on the argument that the CPU should have a minimal set of instructions, because once a CPU has a certain minimum instruction set, adding more does not increase its capabilities and only decreases the speed and efficiency of the machine (Brookshear, 2009, p.85).

The strengths of RISC over CISC become very apparent when reading through the paper by George (1990). In it he states that while CISC is more popular and will no doubt remain so in the near future thanks to backward compatibility with current software, RISC has far more advantages going forward due to the “constantly changing body of knowledge used for processor design”.

From my readings I believe CISC processing is best suited to RAD (Rapid Application Development). At the pace of today’s IT and software industry, it helps us concentrate on breaking new boundaries in the development of less complex software and design: it takes away the tedious work of programming the run-of-the-mill operations required by everyday applications and lets us concentrate on more complex, newer developments. What may hamper this, however, is that CISC processors are less efficient and take longer to process than RISC, and speed of execution is one of the most important factors in software today.

RISC processors give us almost a “clean slate” when programming. With a minimal instruction set, new methods of performing the simplest operations can be developed at higher speed than what has already been discovered and pre-programmed into the CPU, as with CISC processors. RISC also allows more complex and low-level development (more than likely larger, more advanced systems) at more efficient performance, since there are no excessive and potentially unused instruction sets slowing things down.

As far as future replacement goes, the popularity of the CISC chip may give way to the RISC chip, as stated in the paper by George (1990). I tend to agree, but would perhaps add the ability to store custom instructions in the CPU, allowing user-, operating system- or software-defined instructions to be added dynamically to optimise a particular system configuration.

References

Brookshear, J.G. (2009) Computer Science: An Overview. 10th ed. China: Pearson Education Asia Ltd.

George, A.D. (1990) ‘An overview of RISC vs. CISC’, Twenty-Second Southeastern Symposium on System Theory, Cookeville, 11-13 March. IEEE, pp.436-438.

Categories
Computing Research Software

Software Architects – How will this position change in 10 years?

The question I would like to discuss is: how do I think the position of a Software Architect will change in the next 10 years?

In ten years’ time I believe this position will definitely still exist, and I am quite sure there will be many more people holding it than today.

In the future I have a strong belief that software architects will have to base their planning more heavily on developing middleware and collaborative systems (“cloud computing”) than on developing brand new systems. In an article by Pokharel and Park (2009) it is stated that “Cloud computing is the future generation of computing”. Today we see a number of large applications becoming increasingly popular, and in my personal experience, developing new software that “talks to” external systems is becoming more and more in demand – in one project I worked on, it was a potential deal-breaker if our software was unable to talk to SAP.

In a paper by Medvidovic (2002), it is stated that “While architecture is an early model of a system that highlights the system’s critical conceptual properties using high-level abstractions, middleware enables that system’s realization and ensures the proper composition and interaction of the implemented components”, followed by “The relationship between the two areas and their respective shortcomings suggest the possibility of coupling architecture modelling and analysis approaches with middleware technologies in order to get ‘the best of both worlds’”. I believe this is a substantial statement in support of my view.

The current focus of software architecture, in my personal experience, does seem to be the creation of largely “stand-alone” components, with the only real reliance being on the operating system the software will run on. The large, successful developments of today will be the focus of the middleware of tomorrow.

I believe that in 10 years’ time the training required to perform the duties of a Software Architect will be much the same as today, but will include new training on existing systems and how to make them work together efficiently via APIs/middleware, along with the requirement for more experience in developing such modules.

As far as automation goes, I believe the current movement towards programming frameworks will only increase, while low-level programming will be far less of a requirement when planning the development of a system. Using the components of a framework (one can think of a framework as a library of “functions” that perform common tasks so the programmer does not have to) will enable a much larger focus on the bigger picture and allow more complex systems to be developed.

References

Medvidovic, N. (2002) ‘On the role of middleware in architecture-based software development’, SEKE, 27, pp.299-306, ACM [Online]. Available from: http://doi.acm.org/10.1145/568760.568814 (Accessed: 12 September 2010).

Pokharel, M. and Park, J. (2009) ‘Cloud computing: future solution for e-governance’, ACM International Conference Proceeding Series, 322, pp.409-410, ACM [Online]. Available from: http://doi.acm.org/10.1145/1693042.1693134 (Accessed: 12 September 2010).

Categories
Money

Smart Banking in South Africa

Many people are scared, or simply too lazy, to change their bank; I do not fall into either of these categories. As with my history of buying cars, I have much the same attitude towards banking – if I haven’t truly researched my options, how am I supposed to make an educated decision?

Here in South Africa we are “monopolised” (an exaggeration, of course) by our big 4 (ABSA, FNB, Nedbank and Standard Bank); our parents and grandparents have been using them for years, and so we simply follow blindly. The big 4 are all as bad as each other, some (FNB) more innovative than others – but at the end of the day you are usually paying far too much for what you get.

I’m not going to delve into the big 4, as most of us are quite aware of at least one of them, and the rest are usually much the same.

Two fairly recent newcomers to the South African banking industry are:

  1. Virgin Money
  2. Capitec Bank

In this post, I’m going to propose how, using the above two banks, you can bank efficiently and benefit both yourself and your credit record while doing so.

Let’s start with Capitec Bank

Capitec Bank’s “Global One” account provides consumers with a debit card linked to a savings account. The fee is currently R4.50 per month for the account, and includes Internet banking at no charge. The interest rate is, as with all other banks, linked to the prime lending rate, but is significantly higher than on just about any other daily banking account – especially considering the mere R4.50 per month administration fee. Even if you earn a low salary, you are more than likely to make that R4.50 back in interest at the rates Capitec provides (with a higher rate for amounts under R10 000 – favouring the less fortunate for a change).
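
To put rough numbers to that (rates change, so treat this as an illustration only): at, say, 6% per annum on a balance of R1 000, you would earn roughly R1 000 x 0.06 / 12 = R5 in a month – already more than the R4.50 fee.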

Capitec also provides SMS alerts for all transactions, charged at 40c per SMS, which can be disabled if you don’t want them. The major savings come into play with EFTs and debit orders: an EFT payment into your bank account is free, a debit order is R2.75, a manual EFT payment is R2.50, and debit card swipes are free as well. When it comes to cash withdrawals, Capitec has gone the route of a number of other financial providers and allows withdrawals from shop tills: if you draw from Pick n Pay, Shoprite, Boxer, Checkers or PEP it’s a R1 fixed fee; from a Capitec ATM it’s a R3.50 fixed fee; and from any other ATM it’s R7.00 – so if you draw over a certain amount from one of the big 4 banks’ ATMs, you’re probably being charged less than one of their own customers!

Another great thing about Capitec is that payments from other banks usually land in your Capitec account the same day or the next, depending on what time the payment was made.

Next, let’s take a look at Virgin Money

Virgin Money’s credit card offers one of the lowest debit interest rates (currently 14%) and a better-than-average credit interest rate (currently 3%). The card also provides free swipes on petrol transactions – and of course, as with any credit card, there are no transaction fees on credit swipes. To quote their website, you have “up to 25 days to settle your amount owing at the end of each month” – so no interest is charged until you’ve surpassed that 25-day period.

There’s nothing more I feel is worth mentioning on the two, as what I’ve outlined should theoretically be all you use your cards for. Remember: keep away from purchasing on budget!

So what are the rules you should stick to in order to get the best out of your banking situation?

  1. Store all your cash in your Capitec Bank account.
  2. If you need cash, withdraw from Capitec at one of the shopping centre tills.
  3. If you’re shopping, use your Virgin Money credit card. Why? Because you don’t pay interest until 25 days after the end of the month; leave your cash in Capitec where it can take advantage of that sweet interest rate!
  4. ALWAYS be aware of how much you can afford to spend, don’t spend more than you have and try not to buy things that rely on the fact that you’re going to get your salary soon.
  5. Settle your full credit card debt before the interest kicks in (to be safe, I wouldn’t settle it less than 5 days before the interest kicks in, just in case).

By following these rules and using the above banks you can save yourself a significant amount of money, not to mention earn some interest!

PS: By using your credit card for payments you are helping your credit record (having no credit history doesn’t improve your credit record – and you’d like to buy a house one day, wouldn’t you?)

Any better tips welcome, just leave a comment :-)