We discussed in class the problem of finding a needle in a haystack. More specifically, the abstract data type (the data structures version of an API) specifies the following three operations we need to perform on a collection of objects: find(x), add(x), and remove(x).
Officially, this is called the Set ADT. To keep things simple, we will assume our data structure has exactly one copy of each element. So if we say add(5), add(5), remove(5), then there is no longer a 5 (adding the second time had no effect). Also, we'll assume that removing an element that isn't there does nothing.
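As a quick illustration of these semantics, here's how Python's built-in `set` behaves (we use `discard` rather than `remove` below, since `discard` ignores missing elements, matching our convention):

```python
s = set()
s.add(5)
s.add(5)       # No effect; 5 is already in the set
s.discard(5)   # Removes the 5
s.discard(5)   # No effect; discard ignores missing elements
print(5 in s)  # False: there is no longer a 5
```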
As far as implementations go, we discussed four different options. The first three are compared in the table below; the fourth, binning elements into a hash table, is the subject of the rest of these notes.
We came up with the following table of the worst-case number of steps one would have to do to perform each operation on a collection that currently has $N$ items:
| | Find | Add | Remove |
| --- | --- | --- | --- |
| Ordinary Array | $\approx N$ | $\approx 2N$ | $\approx 3N$ |
| Sorted Array | $\approx \log_2(N)$ | $\approx 2N$ | $\approx 2N + \log_2(N)$ |
| Linked List | $\approx N$ | $\approx 1$ | $\approx N$ |
Below are some explanations.
The sorted array is similar to the ordinary array, except searching is more efficient because we can continually halve our search range until there's only one element left (recall the definition we gave of log). We'll explore this algorithm, known as binary search, more in the first lab. It speeds up the find step as well as the remove step (since finding is the first thing we do when removing).
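Here's a minimal sketch of the halving idea on a sorted Python list (the function name is my own choice; you'll flesh out the details in the lab):

```python
def find_sorted(arr, x):
    # Binary search: repeatedly halve the search range [lo, hi]
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid    # Found it
        elif arr[mid] < x:
            lo = mid + 1  # x can only be in the right half
        else:
            hi = mid - 1  # x can only be in the left half
    return -1             # x isn't there

print(find_sorted([2, 5, 8, 12, 17], 12))  # 3
print(find_sorted([2, 5, 8, 12, 17], 6))   # -1
```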
For the linked list, searching/finding is just a matter of following the links, starting at the head. In the worst case, the element is at the end, or it isn't there at all, in which case we have to walk through all $N$ link nodes. Let's break down adding and removing in a bit more detail:
To add, we just make the element we want to add the new head. Specifically, the steps are:

1. Create a new node holding the element.
2. Point the new node's next reference at the current head.
3. Make the new node the head.

Notice that this takes the same small number of steps no matter how long the list is, which is why add is $\approx 1$ in the table.
Removing is a little bit trickier, but the basics are the same. We have to find the element and then reassign pointers to circumvent the node we're trying to remove (assuming there's a garbage collector around to clean up the orphaned node). Let's assume that the element is actually there (otherwise, we just get to the end and don't do anything). Let's also assume that it's somewhere in the middle (you'd have to handle it being at the beginning as a special case). Then the logic is as in the sketch below.
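Here's a minimal Python sketch of all three linked list operations (the class and method names are my own choices; you'll write your own version in the homework):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def find(self, x):
        # Follow the links from the head; worst case N steps
        node = self.head
        while node is not None:
            if node.value == x:
                return True
            node = node.next
        return False

    def add(self, x):
        # Make a new node pointing at the old head, then make
        # it the new head (a constant number of steps)
        self.head = Node(x, self.head)

    def remove(self, x):
        # Special case: the element is at the beginning
        if self.head is not None and self.head.value == x:
            self.head = self.head.next
            return
        # Otherwise, walk until the *next* node holds x, then
        # reassign pointers to circumvent it
        node = self.head
        while node is not None and node.next is not None:
            if node.next.value == x:
                node.next = node.next.next
                return
            node = node.next
        # If we fall off the end, x wasn't there: do nothing
```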
We've made improvements with the sorted array and linked list, but it seems like we're playing a game of whack-a-mole; we made adding more efficient with a linked list, but searching got slow again. How can we improve this?
The key to a more efficient Set ADT is actually incredibly simple: we just have to combine arrays and linked lists in clever ways. What we will do is split up our data into a bunch of bins. Each bin will hold one small chunk of our data. When we want to find an element, we will be able to jump directly to the bin that would contain it if it were in our set (and likewise for removing and adding).
Students had some neat ideas in class about creating bins. For instance, if we knew the range of our numbers was from $a$ to $b$ and we wanted $N$ bins, then we could set up the bins so that bin $i$ contained numbers in the range $\left[a + i\frac{b-a+1}{N},\; a + (i+1)\frac{b-a+1}{N} - 1\right]$.

For instance, if we wanted 4 bins for numbers between 0 and 99, we could use bins with ranges $[0, 24], [25, 49], [50, 74], [75, 99]$.
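Here's a quick sketch of what that mapping might look like (the function name, default parameters, and integer-division approach are my own choices, and I'm assuming $N$ divides the size of the range evenly):

```python
def range_bin(x, a=0, b=99, n_bins=4):
    # Width of each bin, assuming n_bins divides evenly
    width = (b - a + 1) // n_bins
    return (x - a) // width

# Recover the 4 bins for numbers between 0 and 99
print([range_bin(x) for x in [0, 24, 25, 74, 75, 99]])  # [0, 0, 1, 2, 3, 3]
```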
The code for this would be OK enough, but things would instantly get messed up as soon as we tried to add a number outside the assumed range.
Actually, there's an even simpler solution that works in the more general case where we don't have to assume a range! Given $b$ bins, we simply map a number $x$ to bin $x \% b$. For example, if we chose $b=2$, all even numbers would go into one bin, and all odd numbers would go into the other.
Let's look at a few different ways to implement this with numbers in Python. We'll cheat and use a random access list for each bin for now, even though technically each bin should contain a linked list, since linked lists are more efficient than unsorted arrays by our discussion above. You'll implement the linked list version in your first homework.
```python
import numpy as np

# Make some unique random numbers
np.random.seed(4)
nums = np.unique(np.random.randint(0, 50, 50))

n_bins = 10
table = []
for i in range(n_bins):
    table.append([])  # Create an empty list for each bin
for i in range(len(nums)):
    # If nums[i] has remainder j when divided by n_bins, put it
    # in bin j.  The main trick here is how we index the array
    # of bins
    table[nums[i] % n_bins].append(nums[i])
for i in range(n_bins):
    print(i, table[i])
```
```
0 [0, 30, 40]
1 [1, 21, 31]
2 [2, 12, 32, 42]
3 [3, 23, 33]
4 [34, 44]
5 [5, 15, 25, 45]
6 [36, 46]
7 [17]
8 [8, 28, 38, 48]
9 [9, 39, 49]
```
Here's an even more compact solution using numpy. Note that we have to reset `table` to an empty list first; otherwise, we'd just be appending the new bins after the old ones:

```python
# Mike's solution (using numpy)
table = []
for i in range(n_bins):
    # Boolean indexing pulls out exactly the numbers in nums
    # with remainder i
    table.append(nums[nums % n_bins == i])
for i in range(n_bins):
    print(i, table[i])
```
```
0 [ 0 30 40]
1 [ 1 21 31]
2 [ 2 12 32 42]
3 [ 3 23 33]
4 [34 44]
5 [ 5 15 25 45]
6 [36 46]
7 [17]
8 [ 8 28 38 48]
9 [ 9 39 49]
```

(The bins hold the same numbers as before, but each bin is now a numpy array, so it prints without commas.)
One thing we discussed in class was the space/time tradeoff in choosing the number of bins. If we use more bins, fewer elements "collide" (i.e., occupy the same bin), so it doesn't take as much time to find what we're looking for or to add/remove something. However, it takes more memory to store the bins. Actually, if we use a number of bins on the order of the number of elements we expect to have, this isn't terrible: we only roughly double the amount of memory we need, and the operations get more efficient in return. Let's try this on the example below.
```python
table = []
n_bins = 50
for i in range(n_bins):
    table.append(nums[nums % n_bins == i])
for i in range(n_bins):
    print(i, table[i])
```
```
0 [0]
1 [1]
2 [2]
3 [3]
4 []
5 [5]
6 []
7 []
8 [8]
9 [9]
10 []
11 []
12 [12]
13 []
14 []
15 [15]
16 []
17 [17]
18 []
19 []
20 []
21 [21]
22 []
23 [23]
24 []
25 [25]
26 []
27 []
28 [28]
29 []
30 [30]
31 [31]
32 [32]
33 [33]
34 [34]
35 []
36 [36]
37 []
38 [38]
39 [39]
40 [40]
41 []
42 [42]
43 []
44 [44]
45 [45]
46 [46]
47 []
48 [48]
49 [49]
```
By contrast, here's what we get using only 3 bins:
```python
table = []
n_bins = 3
for i in range(n_bins):
    table.append(nums[nums % n_bins == i])
for i in range(n_bins):
    print(i, table[i])
```
```
0 [ 0  3  9 12 15 21 30 33 36 39 42 45 48]
1 [ 1 25 28 31 34 40 46 49]
2 [ 2  5  8 17 23 32 38 44]
```
In order to extend what we did above to objects, we need to define a consistent way of turning our objects into numbers. Such a number is known as a hash code. A hash code should be deterministic; that is, it shouldn't change from one computation to the next. For example, a hash code for a person could be the month of the year in which they were born. Below is an example of what a hash table would look like for Harry Potter characters based on this hash function (using the birthdays of the actors who played them in the movies). Notice how the linked lists are set up in each bin.
You can examine such hash codes live for Harry Potter characters at this link.
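Here's a sketch of what this hash function might look like in code (the `Person` class and `hash_code` method are made up for illustration; the months used are the birth months of the actors, which I believe are July for Daniel Radcliffe and April for Emma Watson):

```python
class Person:
    def __init__(self, name, birth_month):
        self.name = name
        self.birth_month = birth_month

    def hash_code(self):
        # Deterministic: the same person always hashes to the
        # same number (their birth month, 1-12)
        return self.birth_month

# One bin per month; each bin would be a linked list in the
# full implementation
n_bins = 12
table = [[] for _ in range(n_bins)]
for p in [Person("Harry Potter", 7), Person("Hermione Granger", 4)]:
    table[p.hash_code() % n_bins].append(p.name)
```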
We started to discuss some of the theoretical properties of hash functions, starting with the pigeonhole principle. In this context, if we have more objects than we do bins, then we are guaranteed to have a collision. For instance, with the 12 birth-month bins above, any group of 13 characters must contain two that land in the same bin. We will explore more theory as the class moves forward.