Category Archives: Algorithm

Posts about algorithms

Algorithms for computing Pythagoras Sum

While helping a friend of mine with programming assignments, I stumbled upon a question about two different algorithms for calculating the Pythagoras Sum (Pythagorean addition), implemented in a function called pythag.

One algorithm is straightforward, just \( c = \sqrt{a^2 + b^2} \).

The other, however, is very intriguing: it makes use of absolute value, min, max and several lines of seemingly strange mathematical operations to get the result.
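The full post continues beyond this excerpt, so purely as a guess at what that second algorithm might be doing, here is a sketch of the well-known overflow-avoiding formulation that scales by the larger magnitude; the details below are my own illustrative assumptions, not necessarily the assignment's pythag.

```java
// Sketch of an overflow-avoiding Pythagoras sum, assuming the "abs/min/max"
// algorithm is the classic scaling trick; not necessarily the assignment's code.
public class Pythag {
    static double pythag(double a, double b) {
        double p = Math.max(Math.abs(a), Math.abs(b)); // larger magnitude
        double q = Math.min(Math.abs(a), Math.abs(b)); // smaller magnitude
        if (p == 0.0) return 0.0;                      // both inputs are zero
        double r = q / p;                              // r is in [0, 1], so r * r cannot overflow
        return p * Math.sqrt(1.0 + r * r);             // equals sqrt(a*a + b*b), without overflow
    }

    public static void main(String[] args) {
        System.out.println(pythag(3, 4));         // 5.0
        System.out.println(pythag(1e200, 1e200)); // finite, whereas a*a + b*b would overflow
    }
}
```

The point of the scaling is that a*a + b*b can overflow even when the final answer is perfectly representable, which is presumably why the "weird" version exists.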


From int to long

The past few days, I have been working on a programming assignment to develop an efficient multiplication algorithm. I have passed all the graded test cases but got stuck at the non-graded challenge task, task Y. The requirement is to multiply two 500,000-digit numbers within 2 seconds.

My algorithm made use of both the Karatsuba algorithm and a technique that combines neighbouring digits (similar to converting from decimal to a larger base).
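As a rough illustration of the digit-combining idea (the chunk size and representation below are illustrative choices, not my actual assignment code), here is a sketch that packs groups of decimal digits into long limbs of base 10^9:

```java
// Sketch: pack a decimal string into "limbs" of base 10^CHUNK, least significant limb first.
// CHUNK = 9 is just an illustrative choice; the post describes using 8 digits with int.
public class DigitPacking {
    static final int CHUNK = 9;              // decimal digits combined per limb
    static final long BASE = 1_000_000_000L; // 10^CHUNK

    static long[] pack(String decimal) {
        int n = (decimal.length() + CHUNK - 1) / CHUNK;
        long[] limbs = new long[n];
        int end = decimal.length();
        for (int i = 0; i < n; i++) {
            int start = Math.max(0, end - CHUNK);
            limbs[i] = Long.parseLong(decimal.substring(start, end));
            end = start;
        }
        return limbs;
    }

    public static void main(String[] args) {
        // "123456789012345678901" becomes limbs [345678901, 456789012, 123]
        for (long limb : pack("123456789012345678901")) System.out.println(limb);
    }
}
```

Fewer limbs means fewer multiplications, which is why combining more digits per limb speeds things up.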

Initially, I used int for all the variables, which allowed me to combine 8 digits at a time. This got me 2/10 points for the task. Then I thought, why not use long? (I was using Java, so there was no long long, and we were prohibited from using BigInteger.) After changing my variables to long, I discovered that I could only increase the maximum digits from 8 to 10 before errors showed up in the result. I knew that theoretically the maximum should be around 16 digits, but I did not bother to find out why and assumed that other parts of my code were limiting it.

Two days later, after exhausting other possible optimization techniques, I still could not do better than 2/10, so I decided to investigate the issue with the maximum digits. This time, I used the debugger and checked all the intermediate steps carefully. I finally found out what was wrong: for combining the digits, I had constructed an array of powers of 10, but the array was of type int, so its entries overflowed when I increased the threshold for the maximum digits. After resolving the issue (changing the array to long), I was able to get 4/10 for the task. It might be that I need a much better algorithm for full marks, but at that point in time, I was filled with joy at having figured out the bug.
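A plausible minimal reconstruction of that bug (the names and exact layout here are assumptions, not my actual assignment code): powers of ten computed in an int array silently overflow once the exponent passes 9, while the long version stays correct up to 10^18.

```java
// Illustration of the overflow described above: an int power-of-ten table breaks
// past 10^9, while the long table is fine up to 10^18.
public class PowersOfTen {
    public static void main(String[] args) {
        int maxDigits = 11;

        // Buggy version: int cannot hold 10^10 (max int is about 2.1 * 10^9).
        int[] powInt = new int[maxDigits];
        powInt[0] = 1;
        for (int i = 1; i < maxDigits; i++) powInt[i] = powInt[i - 1] * 10;

        // Fixed version: long holds every power of ten up to 10^18.
        long[] powLong = new long[maxDigits];
        powLong[0] = 1L;
        for (int i = 1; i < maxDigits; i++) powLong[i] = powLong[i - 1] * 10L;

        System.out.println(powInt[10]);  // 1410065408 -- garbage produced by overflow
        System.out.println(powLong[10]); // 10000000000 -- the correct value of 10^10
    }
}
```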

Long term goal

Setting a long-term goal is important. I only truly appreciated this after experiencing regret. Whenever I started playing a game, I was always able to identify an ultimate goal and work towards it. If I found that the goal was too hard to achieve, I would drop the game. If I managed to achieve the goal after putting in a lot of effort, I would look for another one.

However, a game is short compared to life, and what I thought was a long-term goal was far too short. Before I graduated from junior college, all I wanted was to enter a good university with a good computer science department. I assumed that this would lead me to my next goal naturally. Now it looks like I was thinking too short term. I failed to see many other factors that would affect the really long term, like the health of the IT industry, the general perception of technology, and what I envision myself to be 20 years down the road.

It is funny that I actually did an exercise back in junior college on “what I imagine myself to be in 20 years”. It was my best chance to plan for the long term. However, I wasted it by putting down vague descriptions like “working in a company of global scale”, without actually thinking through what that meant.

Now I really need a long-term goal, but how long? Should it be more than 5 years? Or even 10? Going back to the discussion of games, it should be challenging, and it should take a lot of effort. Obviously getting an internship would not fulfill the criterion of being challenging, and neither would graduating from NUS with a job offer. This time, I need to think it through, considering the feasibility and the challenge, the possible avenues and the obstacles.

TBC…

Programming concepts in real life – Algorithm

This is the second post in the series of thoughts on “programming concepts in real life”. Today it occurred to me that one fundamental concept of programming, the algorithm, is also heavily used in real life outside the IT world.

What is an algorithm

An algorithm is a step-by-step procedure for calculations.

Algorithms used in programming

There are many examples of algorithms in programming; among the most basic are breadth-first search (BFS) and depth-first search (DFS).
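As a minimal sketch of the two traversals (the adjacency-list representation is my own illustrative choice):

```java
import java.util.*;

// Minimal sketch of BFS and DFS on a graph stored as adjacency lists.
public class GraphTraversal {
    // BFS explores the graph level by level, using a queue.
    static List<Integer> bfs(List<List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            order.add(u);
            for (int v : adj.get(u)) {
                if (!visited[v]) { visited[v] = true; queue.add(v); }
            }
        }
        return order;
    }

    // DFS goes as deep as possible before backtracking, using recursion.
    static void dfs(List<List<Integer>> adj, int u, boolean[] visited, List<Integer> order) {
        visited[u] = true;
        order.add(u);
        for (int v : adj.get(u)) {
            if (!visited[v]) dfs(adj, v, visited, order);
        }
    }

    public static void main(String[] args) {
        // Graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
        List<List<Integer>> adj = Arrays.asList(
                Arrays.asList(1, 2), Arrays.asList(3), Arrays.asList(3), Arrays.<Integer>asList());
        System.out.println(bfs(adj, 0)); // [0, 1, 2, 3] -- wide first
        List<Integer> dfsOrder = new ArrayList<>();
        dfs(adj, 0, new boolean[adj.size()], dfsOrder);
        System.out.println(dfsOrder);    // [0, 1, 3, 2] -- deep first
    }
}
```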

Algorithm in real life

Algorithms are applied in our everyday life.

DFS/BFS (Depth-first search/Breadth-first search)

For example, a freshman who has just entered NUS may choose to study an area of interest, such as marketing or psychology. He or she can do this in two ways. One way is to devote all time and effort to that one area and do nothing else for a very long time. For marketing, that means taking the introductory modules on marketing, then the intermediate modules on marketing, then the advanced modules on marketing, and perhaps even doing research on marketing. Only after he or she feels there is nothing left to do in marketing will the other options be considered. And surprisingly (or rather, not surprisingly), this is exactly the concept behind a particular algorithm called DFS. In DFS, you prioritize depth over breadth, so whenever you have a chance to go wider or deeper, you go deeper first.

And of course you can see how the opposite algorithm, BFS, works in real life. That is when a person takes all the introductory modules for different areas, but after all that is still too indecisive to pick a specialization, so he or she takes all the intermediate-level modules for those areas, and so on. This may work very badly for a freshman, but the concept is similar to a liberal arts curriculum, where students are exposed to different disciplines for one or two years before they decide on their major.

Amortized complexity optimization – CS3230 lecture on 13 Feb 2015

We use amortized complexity optimization in real life, for example when cleaning our rooms. Normally we do not clean our rooms every single day. We wait for the room to fall below a certain cleanliness standard, or for an important visitor, before we start cleaning. Cleaning the room is an expensive operation that takes a lot of time. Hence, by doing it infrequently, only when absolutely necessary, we reduce the total amount of work needed to keep the room clean.

This is similar to amortized complexity optimization in algorithm analysis, where we accept an occasional expensive operation as long as the cost, averaged over the whole sequence of operations, stays low.
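A classic textbook illustration of the same idea (a generic example of mine, not one from the lecture) is a dynamic array that doubles its capacity when full: most appends are cheap, the occasional resize is expensive, yet the amortized cost per append is still O(1).

```java
// Sketch of a dynamic array with capacity doubling, the standard example of
// amortized O(1) cost per append. A generic illustration, not code from CS3230.
public class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    void append(int value) {
        if (size == data.length) {
            // Expensive operation: copy everything into an array twice as large.
            // It happens so rarely that its cost, spread over all appends, is O(1) each.
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value; // cheap operation: the common case
    }

    public static void main(String[] args) {
        DynamicArray a = new DynamicArray();
        for (int i = 0; i < 1_000_000; i++) a.append(i);
        // Fewer than 2n element copies occur in total, so the amortized cost per append is O(1).
    }
}
```

Like the room cleaning, the expensive step is postponed until it is unavoidable, which keeps the average cost low.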

 

Other algorithms, such as greedy algorithms and dynamic programming, are also practised in real life (often without us realizing it).

 



CS2020 Learning Sorting

Sorting was one of the first things I learnt in programming, and inevitably we had to go through it as part of this algorithms module.

The truth is, I never really paid much attention to sorting until now, since I knew that I could just use the built-in qsort function in most cases. However, that is certainly not what we learnt in this module; here we deal with sorting more rigorously, analysing different sorting algorithms in terms of running time and space usage (and I realised that what we have learnt over the past few weeks is very similar to the first two chapters of Introduction to Algorithms).

The different sorting methods covered so far are:

  • Bubble sort (stable)

A total of n iterations for n inputs; in each iteration, walk through the array and swap any two neighbouring elements that are out of order.

Bubble sort has a loop invariant: at the end of the jth iteration, the biggest j elements are correctly sorted in the final j positions of the array.

Running time is O(n²); the best case is O(n) if the array is already sorted (and we stop early after a pass with no swaps); space usage is O(n).
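A method-level sketch of bubble sort (my own illustrative code, not the module's), including the early-exit check that gives the O(n) best case:

```java
// Bubble sort: O(n^2) in general, O(n) on an already sorted array thanks to the early exit.
static void bubbleSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        boolean swapped = false;
        // After this pass, the largest unsorted element has "bubbled" to the end.
        for (int j = 0; j < a.length - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break; // no swaps means the array is already sorted
    }
}
```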

  • Selection sort (unstable)

A total of n iterations for n inputs; in each iteration, find the smallest element in the unsorted part and put it (by swapping) at the end of the sorted part.

Selection sort also has a loop invariant: at the end of the jth iteration, the smallest j elements are correctly sorted in the first j positions of the array.

Running time is O(n²) even in the best case, since every iteration still scans the entire unsorted part to find the minimum; space usage is O(n).
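A method-level sketch of selection sort (my own illustrative code, not the module's):

```java
// Selection sort: Theta(n^2) even on sorted input, because every pass scans
// the whole unsorted part to find the minimum.
static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++) {
            if (a[j] < a[min]) min = j;              // smallest unsorted element
        }
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // append it to the sorted prefix
    }
}
```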

  • Insertion sort (stable)

A total of n iterations for n inputs; in each iteration, insert the first unsorted element into its correct position within the sorted part.

Insertion sort has a loop invariant which is slightly different from selection sort's: at the end of the jth iteration, the first j positions of the array are sorted among themselves, although elements there may still be displaced by insertions in subsequent iterations.

Running time is O(n²); the best case is O(n) if the array is already sorted; space usage is O(n).
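A method-level sketch of insertion sort (my own illustrative code, not the module's):

```java
// Insertion sort: O(n^2) in general, O(n) when the array is already sorted,
// since each element is compared once and nothing needs to be shifted.
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];          // first unsorted element
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];     // shift larger sorted elements one step to the right
            j--;
        }
        a[j + 1] = key;          // insert the element into its correct slot
    }
}
```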

  • Merge sort (stable)

It uses the idea of divide and conquer: divide the array into two halves at each level of recursion, sort each half, and combine them during the merging process. Merging is relatively easy because the two arrays being merged are assumed to be already sorted.

Invariant: the arrays to be merged are each sorted before merging starts.

Running time is O(n log n), and the best case is still O(n log n) even if the array is already sorted; space usage is O(n log n) for a simple approach, which can be improved to 2n + O(1) by reusing a single auxiliary array.
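A method-level sketch of merge sort using one auxiliary array of size n, roughly the 2n + O(1) space variant (my own illustrative code, not the module's):

```java
// Merge sort with a single auxiliary array: O(n log n) time, 2n + O(1) total space.
static void mergeSort(int[] a) {
    mergeSort(a, new int[a.length], 0, a.length - 1);
}

static void mergeSort(int[] a, int[] aux, int lo, int hi) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, aux, lo, mid);     // sort the left half
    mergeSort(a, aux, mid + 1, hi); // sort the right half
    // Merge: both halves are sorted, so repeatedly take the smaller front element.
    System.arraycopy(a, lo, aux, lo, hi - lo + 1);
    int i = lo, j = mid + 1;
    for (int k = lo; k <= hi; k++) {
        if (i > mid)              a[k] = aux[j++];
        else if (j > hi)          a[k] = aux[i++];
        else if (aux[j] < aux[i]) a[k] = aux[j++];
        else                      a[k] = aux[i++]; // taking the left on ties keeps it stable
    }
}
```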

  • Quick sort (unstable in its classic form)

At each step, a pivot is chosen and the array is partitioned into two parts: one containing all elements smaller than (or equal to) the pivot, and the other containing all elements bigger than the pivot.

The invariant follows directly from the partitioning: after each partition, the pivot sits in its final sorted position, with everything to its left no bigger than it and everything to its right bigger than it.

For the partitioning step, running time is O(n) and space usage is n + O(1), where the O(1) accounts for the swapping. The overall running time is O(n log n) in expectation, with a worst case of O(n²); in practice it can be improved by optimizations such as handling duplicates with a three-way partition.

The pivot can be chosen deterministically or randomly.
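A sketch of quick sort with a random pivot and a classic two-way partition (my own illustrative code, not the module's):

```java
import java.util.concurrent.ThreadLocalRandom;

// Quick sort with a random pivot and the classic (Lomuto-style) two-way partition.
public class QuickSortSketch {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);  // elements <= pivot
        quickSort(a, p + 1, hi);  // elements > pivot
    }

    // Partition a[lo..hi] around a randomly chosen pivot; returns the pivot's final index.
    static int partition(int[] a, int lo, int hi) {
        int r = ThreadLocalRandom.current().nextInt(lo, hi + 1);
        swap(a, r, hi);           // move the random pivot out of the way, to the end
        int pivot = a[hi];
        int i = lo;               // a[lo..i-1] holds the elements <= pivot
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) swap(a, i++, j);
        }
        swap(a, i, hi);           // place the pivot between the two parts
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```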

  • BogoSort (unstable)

This sorting method involves repeatedly choosing a random permutation of the array A and returning it once it happens to be sorted. A related joke (or maybe not) is QuantumBogoSort, which relies on quantum computing concepts and the many-worlds interpretation.
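For completeness, a sketch of BogoSort itself (purely for fun; the expected running time is around O(n · n!)):

```java
import java.util.concurrent.ThreadLocalRandom;

// BogoSort: shuffle until sorted. Strictly a joke.
public class BogoSortSketch {
    static void bogoSort(int[] a) {
        while (!isSorted(a)) shuffle(a);
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i - 1] > a[i]) return false;
        }
        return true;
    }

    // Fisher-Yates shuffle: produces a uniformly random permutation.
    static void shuffle(int[] a) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = ThreadLocalRandom.current().nextInt(i + 1);
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
}
```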