On Wednesday, our prof spent a lecture wrapping up the key
concepts we needed to know for the exam. Concepts such as object-oriented
programming, abstract data types, inheritance, raising exceptions, try and except
blocks, unittest, recursion, binary trees, linked lists, and run time analysis
were covered as topics to review for the exam. During the lecture we discussed
each of these topics based on how they were relevant to programming and how
they were used in Python programming. Also, our prof explained the format of
the exam paper as well as what to do and not to do. Up to this point, the
course is pretty much finished except for the final exam. Although we had spent
a large amount of time analyzing algorithms, I still require more practice to
fully grasp the concept of analyzing algorithms. Throughout this semester, I gained
a better understanding of many computing techniques that I was inexperienced with at
the beginning of this year. With the time left until the final exam, I will
review all of the concepts I learned in this course and re-attempt questions that appeared
on tests and labs to prepare for the exam.
CSC148 SLOG
Wednesday 2 April 2014
SLOG # 10:
At the beginning of this week, we discussed why objects
such as lists cannot be stored as keys of a dictionary. When {[1, 2]: 1},
for instance, is entered in the Python shell, it raises a TypeError
saying that a list is an unhashable type. Python dictionaries require
hashable objects as keys so that a key cannot mutate and cause errors.
A hashable object is defined to be an object that never changes and
can be compared to other objects. Since mutable objects such as lists do not
have the characteristics of a hashable object, these objects cannot be
used as keys for a dictionary.
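As a quick sketch of this behaviour in the Python shell, here is the error in action; the tuple workaround afterwards was not part of the lecture, but follows directly from the hashability rule, since a tuple of immutable values never changes:

```python
# A list cannot be a dictionary key, because it is mutable (unhashable).
try:
    d = {[1, 2]: 1}
except TypeError as e:
    print(e)  # unhashable type: 'list'

# An immutable tuple with the same contents works fine as a key.
d = {(1, 2): 1}
print(d[(1, 2)])  # 1
```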
Thursday 27 March 2014
SLOG # 9: Week 11 – Sorting and Efficiency
When it comes to sorting and efficiency, the algorithm used as well as the number of elements will often determine how efficient sorting or searching is. Nowadays, various algorithms have been designed to improve the efficiency of a running program. For instance, Timsort is the algorithm implemented in Python as the built-in sort. As we analyzed during the lab this week, Timsort was the most efficient sorting algorithm when compared with insertion, bubble, merge, and quick sort. As a hybrid of merge sort and insertion sort, Timsort approaches the run time of merge sort when the list of elements to be sorted becomes large, with worst-case performance of O(n log n). This is the same as the performance of merge sort.
In the lab, the data we collected showed that selection sort
and bubble sort have the worst run time efficiency, with performance of
O(n^2) in the worst case, since they have to make approximately n^2
comparisons to put the list in the correct order.
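To make that comparison count concrete, here is a minimal bubble sort sketch (not the exact lab code) that counts the comparisons it makes; for n = 5 it performs n*(n-1)/2 = 10 of them:

```python
def bubble_sort(lst):
    """Sort lst in place; return the number of comparisons made.

    A deliberately simple version, to illustrate the roughly n^2
    comparisons discussed above.
    """
    comparisons = 0
    n = len(lst)
    for i in range(n - 1):
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            comparisons += 1
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return comparisons

data = [5, 3, 1, 4, 2]
count = bubble_sort(data)
print(data)   # [1, 2, 3, 4, 5]
print(count)  # 10 comparisons for n = 5, i.e. n*(n-1)/2
```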
The run time efficiency of quick sort depends on how the pivots
are chosen each time, where predictably chosen pivots may result in
O(n^2) performance. As shown during the lectures, my prof mentioned that choosing
random pivots during quick sort can help avoid the possible worst-case run time of
O(n^2). Similarly, an already sorted list of elements gives insertion sort a run time of
O(n), and reverse-sorted elements give a run time of O(n^2). Insertion sort has this
characteristic because of the number of comparisons required to move each element
into its correct position. From time to
time, the order of the elements to be sorted will affect the run time of an
algorithm, as demonstrated by quick sort and insertion sort.
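A small insertion sort sketch (again, not the lab's exact code) shows both extremes, counting comparisons on an already sorted input versus a reverse-sorted one:

```python
def insertion_sort(lst):
    """Sort lst in place; return the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(lst)):
        # Walk lst[i] backwards until it reaches its correct position.
        j = i
        while j > 0:
            comparisons += 1
            if lst[j - 1] > lst[j]:
                lst[j - 1], lst[j] = lst[j], lst[j - 1]
                j -= 1
            else:
                break
    return comparisons

print(insertion_sort([1, 2, 3, 4, 5]))  # 4 comparisons: sorted input, O(n)
print(insertion_sort([5, 4, 3, 2, 1]))  # 10 comparisons: reversed input, O(n^2)
```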
Often a recursive call may call on another recursive call that
has already been evaluated. To avoid redundant recursive calls,
returned values can be stored, such as in a list in a higher-level scope, so repeated
calls can refer to the stored value rather than making redundant evaluations. For
example, the Fibonacci function we discussed earlier can apply such a strategy to reduce its run time.
A similar strategy is used in a binary search tree to search for, or insert,
a value in the tree. Since the left branch
only has nodes smaller than the current node's value, and the right branch only has
nodes greater than it, searching a binary search tree works exactly like the
binary search algorithm: at each step, the number of elements currently being
looked at is halved.
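The memoization idea from the Fibonacci example can be sketched like this, with a cache dictionary in an enclosing scope holding the stored values (the lecture version may have stored them differently):

```python
def make_fib():
    """Return a Fibonacci function that caches results it has already
    computed, so repeated subcalls are looked up, not re-evaluated."""
    cache = {0: 0, 1: 1}

    def fib(n):
        if n not in cache:
            # Only computed once per n; later calls hit the cache.
            cache[n] = fib(n - 1) + fib(n - 2)
        return cache[n]

    return fib

fib = make_fib()
print(fib(30))  # 832040, with far fewer calls than plain recursion
```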
The efficiency of sorting often depends on the algorithm chosen.
Through my experiences writing both efficient and inefficient code
in the past, I have become aware of the fact that efficiency affects the
overall performance of a program, and that how the data is organized
affects the efficiency as a result.
Friday 21 March 2014
SLOG # 8: Week 10
This week, we have been discussing a variety of sorting algorithms in addition to their run time efficiency. To start the week, our prof explained assignment 2's function designs based on the interpretations of the assignment 2 PDF. After spending some time going over the assignment, we discussed how a default empty list parameter is mutable.
For example:
def f(n: list=[]):
    n += [1]
    return n
If inside the function we append the integer 1 to the list
parameter, then each time we call the function without giving the
parameter, the parameter refers to the same default list [], and the
integer 1 is stored within that list. The reference is kept with 1 added. The
next time we call f() again after the first f() call, we would expect
the return value to be [1, 1], since the function refers to the same default
list for parameter n. This does not affect the returned
list for non-default function calls. The reference is kept and accessed during
the next f() call, just like the sample calls below.
>>> f()
[1]
>>> f()
[1, 1]
>>> f([0,2])
[0, 2, 1]
>>> f()
[1, 1, 1]
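A common way to avoid this shared-default behaviour, which we did not write out in class but which follows directly from the discussion, is to default to None and create a fresh list inside the function:

```python
def f(n: list = None):
    # Create a fresh list on every default call instead of sharing one.
    if n is None:
        n = []
    n += [1]
    return n

print(f())        # [1] -- now the same on every default call
print(f())        # [1]
print(f([0, 2]))  # [0, 2, 1] -- non-default calls behave as before
```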
This week we focused on studying algorithms such as quick
sort and merge sort, as well as their run times. For quick sort, the idea is to
choose a pivot so that a list with more than one element can be broken into
two lists, where the first list consists of elements smaller than the
pivot and the other list consists of elements greater than the
pivot, if we are sorting the list in ascending order. From the two new
sub-lists we created, we perform quick sort on each of the two. Then the sorted first
list, followed by the pivot as a single-element list,
followed by the sorted second list, gives the sorted version of the list
we called quick sort on. To get the most out of the quick sort algorithm, a random pivot
should be used instead of a fixed position so the worst-case run time can be
avoided in practice.
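A minimal quick sort sketch along these lines, with a random pivot (not the exact lecture code):

```python
import random

def quick_sort(lst):
    """Return a new sorted list, choosing a random pivot each time so
    that the O(n^2) worst case is unlikely on any particular input."""
    if len(lst) <= 1:
        return lst
    pivot = lst[random.randrange(len(lst))]
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    # Sorted smaller part + pivot(s) + sorted larger part.
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```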
For merge sort, the idea is to split the list into two
lists of roughly half the original length, as long as the given list has a length of
two or greater. The two sub-lists are then sorted using merge sort, and the two
sorted sub-lists are compared element by element, with the smaller element
moved from one list or the other into the result. Merging the two lists this
way returns a single sorted list.
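A minimal merge sort sketch of the same idea (not the exact lecture code):

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two is exhausted; the other's leftover is already sorted.
    return result + left[i:] + right[j:]

def merge_sort(lst):
    """Return a new sorted list by splitting, sorting, and merging."""
    if len(lst) < 2:
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([8, 3, 5, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```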
The lab we had this week was a continuation of
linked lists and designing helper functions. The lab handout asked us
to create linked lists from given binary tree objects, such as using the given
tree to create a linked list in the in-order sequence of the binary tree. What was
difficult about this lab was finding a way to link one linked list to the
tail of another linked list. My partner and I were able to create a helper
function that finds the tail of a linked list and references the other
linked list from that tail.
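Our helper-function idea can be sketched like this, using a hypothetical LLNode class rather than the course's actual linked list class:

```python
class LLNode:
    """A minimal linked list node (the course's class differs in detail)."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

def find_tail(node):
    """Helper: follow the references down to the last node."""
    while node.nxt is not None:
        node = node.nxt
    return node

def link(first, second):
    """Attach the second list to the tail of the first."""
    find_tail(first).nxt = second
    return first

a = LLNode(1, LLNode(2))
b = LLNode(3, LLNode(4))
merged = link(a, b)
values = []
node = merged
while node is not None:
    values.append(node.value)
    node = node.nxt
print(values)  # [1, 2, 3, 4]
```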
This week has been fairly busy because the csc165
midterm was on the same day assignment 2 was due. But on the bright
side, this week’s lectures did make me realize the usefulness of recursion,
especially in algorithm design.
Thursday 13 March 2014
SLOG #7: Week 9
This week we learnt how to create a binary search
tree, and discussed run time analysis during the lectures. Since
exercise # 3 was due this week, our prof taught us how to form a TreeList (a list
with a structure similar to a binary tree) when the correct pre-ordered list
and in-ordered list are given. Under the condition that no two nodes have the
same value, the root can be found as the first element of the pre-ordered list,
and the left and right sub-TreeLists can be found according to the patterns of the lists.
A pre-ordered list has the root as the first element, all left sub-tree values in pre-order
after the root, and all right sub-tree
values in pre-order placed in the list after the last value of the left sub-tree.
An in-ordered list has the root placed after the left sub-tree’s in-order values and
before the right sub-tree’s in-order values. With correct slicing of the left
and right sub-trees in both ordered lists, recursive calls can be made to
build up the correct sub-trees and root, which can then be used to form the
corresponding in-order lists, pre-order lists, and TreeList.
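A minimal sketch of this reconstruction, representing each tree as a nested [value, left, right] list (the course's TreeList may differ) and assuming no two nodes share a value:

```python
def build_tree(preorder, inorder):
    """Rebuild a binary tree, as nested [value, left, right] lists,
    from its preorder and inorder traversals (unique values assumed)."""
    if not preorder:
        return None
    root = preorder[0]                 # preorder starts with the root
    i = inorder.index(root)            # splits inorder into left / right
    left = build_tree(preorder[1:i + 1], inorder[:i])
    right = build_tree(preorder[i + 1:], inorder[i + 1:])
    return [root, left, right]

# Tree:    2
#         / \
#        1   3
print(build_tree([2, 1, 3], [1, 2, 3]))
# [2, [1, None, None], [3, None, None]]
```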
A binary search tree has the characteristics of a binary
tree, with the condition that any value less than the root belongs to the left
sub-tree and any value greater than the root belongs to the right sub-tree. In
this tree, no two nodes have the same value. When searching for a value or
inserting a new value, the value is checked to see whether it is
equal, smaller, or greater, and we either go to the corresponding sub-tree branch or
stop. Similar logic can be applied when deleting a root node’s value, with
extra conditions when both children exist.
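A minimal binary search tree sketch of insert and search, using a hypothetical BSTNode class rather than the course's actual class:

```python
class BSTNode:
    """A minimal binary search tree node (the course's class may differ)."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(node, value):
    """Insert value, going left for smaller and right for greater;
    an equal value is already present, so nothing is added."""
    if node is None:
        return BSTNode(value)
    if value < node.value:
        node.left = insert(node.left, value)
    elif value > node.value:
        node.right = insert(node.right, value)
    return node

def search(node, value):
    """Return True iff value is in the tree, discarding one whole
    sub-tree at each step, like binary search."""
    if node is None:
        return False
    if value == node.value:
        return True
    if value < node.value:
        return search(node.left, value)
    return search(node.right, value)

root = None
for v in [5, 3, 8, 1, 4]:
    root = insert(root, v)
print(search(root, 4))  # True
print(search(root, 7))  # False
```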
To search for or insert a specific item, a binary search tree has a
run time similar to the binary search algorithm. In the lecture, we discussed how
run time on each individual computer differs depending on processing
speed, but that does not affect the number of steps required. During
the lecture we also reviewed both selection sort and insertion sort to
demonstrate that run time may depend on both the input analyzed and the algorithm
itself.
In this week’s lab, we worked on creating functions that
call helper functions to perform certain tasks that the function itself may not
handle. Overall, the given code took more time to read, in order to get a
complete understanding of what the required function was supposed to do,
than the actual coding took. After the lab I realized that, from time to time, repetitive code can be replaced
either by using inheritance or by calling other functions.
Thursday 6 March 2014
SLOG # 6
Week 8:
This week, we continued our lectures on tree structures. In addition, we were introduced to designing wrappers using helper functions. A wrapper
function is a function that calls a helper function to perform certain
required tasks. With the help of a helper function, the complexity of a function can be reduced in comparison to a function that does not use one. It is often possible for a function
that does not itself perform recursion to call a helper function that does.
In the lecture notes this week, wrapper functions were demonstrated for both linked
list and binary tree structures. For instance, a helper function that returns the reverse of a linked list can be called from a wrapper function to reverse the linked list we want.
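The wrapper pattern on the linked list reversal example can be sketched like this, with a hypothetical LLNode class standing in for the course's class:

```python
class LLNode:
    """A minimal linked list node (the course's class may differ)."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

def _reverse_helper(node, acc):
    """Recursive helper: prepend each visited value onto acc."""
    if node is None:
        return acc
    return _reverse_helper(node.nxt, LLNode(node.value, acc))

def reverse(node):
    """Wrapper: no recursion itself; it delegates to the helper."""
    return _reverse_helper(node, None)

lst = LLNode(1, LLNode(2, LLNode(3)))
rev = reverse(lst)
values = []
while rev is not None:
    values.append(rev.value)
    rev = rev.nxt
print(values)  # [3, 2, 1]
```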
During this week’s lab, we continued working on linked lists
based on the questions from the previous week. Overall, my partner and I managed
to finish most of the questions with the use of recursion or loop structures.
The objective of this lab was to help the students get a better understanding of
the linked list structure, and of how methods can be applied to a linked
list object to perform various tasks.
Since a linked list can reference another linked list, it
is possible to create a linked list of infinite length by having it reference
itself. The references between linked lists also make linked list objects easy to alter when
adding, deleting, or reordering nodes. It is possible
we will be seeing questions involving structures like linked lists on the next midterm and
the final exam. Looks like I will have to start catching up with the course
materials if I ever want to do well on them.
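Those two observations about references can be sketched briefly, again with a hypothetical LLNode class:

```python
class LLNode:
    """A minimal linked list node (the course's class may differ)."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

# Inserting a node only means redirecting a couple of references.
a = LLNode(1, LLNode(3))
a.nxt = LLNode(2, a.nxt)       # list is now 1 -> 2 -> 3
print([a.value, a.nxt.value, a.nxt.nxt.value])  # [1, 2, 3]

# A node can even reference itself, giving a "list" of infinite length.
loop = LLNode(0)
loop.nxt = loop
print(loop.nxt.nxt.nxt is loop)  # True
```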