Space and Time Complexity

Space complexity refers to the amount of memory an algorithm uses to complete its execution, as a function of the size of the input. It can be affected by factors such as the size of the input data, the data structures used, the number and size of temporary variables, and the recursion depth. Time complexity refers to the amount of time an algorithm takes to run as the input size grows. Both are usually expressed in "Big O" notation, which describes an upper bound on how fast the cost grows.

Why do you think a programmer should care about space and time complexity?

  • To make their code as efficient as possible
  • To make their code simpler

Take a look at our Lassen Volcano example from the data compression tech talk. The first code block displays the original image. In the second code block, change the baseWidth to rescale the image.

from IPython.display import Image, display
from pathlib import Path 

# prepares a series of images
def image_data(path=Path("images/"), images=None):  # path of static images is defaulted
    for image in images:
        # File to open
        image['filename'] = path / image['file']  # file with path
    return images

def image_display(images):
    for image in images:  
        display(Image(filename=image['filename']))

if __name__ == "__main__":
    lassen_volcano = image_data(images=[{'source': "Peter Carolin", 'label': "Lassen Volcano", 'file': "lassen-volcano.jpg"}])
    image_display(lassen_volcano)
    
from IPython.display import HTML, display
from pathlib import Path 
from PIL import Image as pilImage 
from io import BytesIO
import base64

# prepares a series of images
def image_data(path=Path("images/"), images=None):  # path of static images is defaulted
    for image in images:
        # File to open
        image['filename'] = path / image['file']  # file with path
    return images

def scale_image(img):
    #baseWidth = 625
    #baseWidth = 1250
    #baseWidth = 2500
    baseWidth = 200 # see the effect of doubling or halving the baseWidth
    #baseWidth = 10000 
    #baseWidth = 20000
    #baseWidth = 40000
    scalePercent = (baseWidth/float(img.size[0]))
    scaleHeight = int((float(img.size[1])*float(scalePercent)))
    scale = (baseWidth, scaleHeight)
    return img.resize(scale)

def image_to_base64(img, format):
    with BytesIO() as buffer:
        img.save(buffer, format)
        return base64.b64encode(buffer.getvalue()).decode()
    
def image_management(image):  # gathers metadata, scales the image, and builds its HTML rendering
    # Image open return PIL image object
    img = pilImage.open(image['filename'])
    
    # Python Image Library operations
    image['format'] = img.format
    image['mode'] = img.mode
    image['size'] = img.size
    image['width'], image['height'] = img.size
    image['pixels'] = image['width'] * image['height']
    # Scale the Image
    img = scale_image(img)
    image['pil'] = img
    image['scaled_size'] = img.size
    image['scaled_width'], image['scaled_height'] = img.size
    image['scaled_pixels'] = image['scaled_width'] * image['scaled_height']
    # Scaled HTML
    image['html'] = '<img src="data:image/%s;base64,%s">' % (image['format'].lower(), image_to_base64(image['pil'], image['format']))  # label the data URI with the image's real format instead of hardcoding png


if __name__ == "__main__":
    images = image_data(images = [{'source': "Peter Carolin", 'label': "Lassen Volcano", 'file': "lassen-volcano.jpg"}])
    
    # Display meta data, scaled view, and grey scale for each image
    for image in images:
        image_management(image)
        print("---- meta data -----")
        print(image['label'])
        print(image['source'])
        print(image['format'])
        print(image['mode'])
        print("Original size: ", image['size'], " pixels: ", f"{image['pixels']:,}")
        print("Scaled size: ", image['scaled_size'], " pixels: ", f"{image['scaled_pixels']:,}")
        
        print("-- original image --")
        display(HTML(image['html'])) 
---- meta data -----
Lassen Volcano
Peter Carolin
JPEG
RGB
Original size:  (2792, 2094)  pixels:  5,846,448
Scaled size:  (200, 150)  pixels:  30,000
-- original image --

Do you think this is a time complexity problem, a space complexity problem, or both?

  • BOTH
  • Space: a larger scaled image takes up far more memory, since every added pixel has to be stored. Time: it takes a long time for the computer to generate and render the picture.
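A minimal sketch (using the 2792 x 2094 source dimensions printed in the metadata above) makes the growth concrete: doubling baseWidth roughly quadruples the number of pixels that must be stored and processed.

original_width, original_height = 2792, 2094  # the Lassen image's dimensions, from the metadata above

for base_width in [625, 1250, 2500, 5000]:
    scale_percent = base_width / original_width
    scaled_height = int(original_height * scale_percent)
    print(f"baseWidth={base_width}: {base_width}x{scaled_height} = {base_width * scaled_height:,} pixels")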

Big O Notation

  • Constant O(1)
  • Linear O(n)
  • Quadratic O(n^2)
  • Logarithmic O(log n)
  • Exponential O(2^n)
numbers = list(range(1000))
print(numbers)

Constant O(1)

Time

An example of a constant time algorithm is accessing a specific element in an array. It does not matter how large the array is, accessing an element in the array takes the same amount of time. Therefore, the time complexity of this operation is constant, denoted by O(1).

print(numbers[263])

ncaa_bb_ranks = {1:"Alabama",2:"Houston", 3:"Purdue", 4:"Kansas"}
#look up a value in a dictionary given a key
print(ncaa_bb_ranks[1]) 

Space

This function takes two number inputs and returns their sum. The function does not create any additional data structures or variables that are dependent on the input size, so its space complexity is constant, or O(1). Regardless of how large the input numbers are, the function will always require the same amount of memory to execute.

def sum(a, b): # note: this shadows Python's built-in sum(), which is fine for a demo
  return a + b

print(sum(90,88))
print(sum(.9,.88))

Linear O(n)

Time

An example of a linear time algorithm is traversing a list or an array. When the size of the list or array increases, the time taken to traverse it also increases linearly with the size. Hence, the time complexity of this operation is O(n), where n is the size of the list or array being traversed.

for i in numbers:
    print(i)

Space

This function takes a list of elements arr as input and returns a new list with the elements in reverse order. The function creates a new list reversed_arr of the same size as arr to store the reversed elements. The size of reversed_arr depends on the size of the input arr, so the space complexity of this function is O(n). As the input size increases, the amount of memory required to execute the function also increases linearly.

def reverse_list(arr):
    n = len(arr) 
    reversed_arr = [None] * n # create a list of n None placeholders, based on the length of arr
    for i in range(n):
        reversed_arr[n-i-1] = arr[i] # copy from the front of arr into the back of reversed_arr
    return reversed_arr

print(numbers)
print(reverse_list(numbers))

Quadratic O(n^2)

Time

An example of a quadratic time algorithm is nested loops. When there are two nested loops that both iterate over the same collection, the time taken to complete the algorithm grows quadratically with the size of the collection. Hence, the time complexity of this operation is O(n^2), where n is the size of the collection being iterated over.

for i in numbers:
    for j in numbers:
        print(i,j)

Space

This function takes two matrices matrix1 and matrix2 as input and returns their product as a new matrix. The function creates a new matrix result with dimensions m by n to store the product of the input matrices. The size of result depends on the size of the input matrices, so for n × n matrices the space complexity of this function is O(n^2). As the size of the input matrices increases, the amount of memory required to execute the function increases quadratically.

def multiply_matrices(matrix1, matrix2):
    m = len(matrix1) 
    n = len(matrix2[0])
    result = [[0] * n for _ in range(m)] # build each row separately; [[0] * n] * m would alias one row m times and give wrong results
    for i in range(m):
        for j in range(n):
            for k in range(len(matrix2)):
                result[i][j] += matrix1[i][k] * matrix2[k][j]
    return result

print(multiply_matrices([[1,2],[3,4]], [[3,4],[1,2]]))

Logarithmic O(logn)

Time

An example of a log time algorithm is binary search. Binary search is an algorithm that searches for a specific element in a sorted list by repeatedly dividing the search interval in half. As a result, the time taken to complete the search grows logarithmically with the size of the list. Hence, the time complexity of this operation is O(log n), where n is the size of the list being searched.

def binary_search(arr, low, high, target):
    while low <= high:
        mid = (low + high) // 2 # integer division
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1 # target not found

target = 263
result = binary_search(numbers, 0, len(numbers) - 1, target)

print(result)

Space

A recursive version of the same algorithm has O(log n) space complexity (the iterative version above needs only O(1) extra space). The recursive function takes an array arr, its lower and upper bounds low and high, and a target value target, and searches for target by recursively dividing the search space in half until the target is found or the search space is empty. It does not create any new data structures that depend on the size of arr; instead, it uses the call stack to keep track of the recursive calls. Since the maximum depth of the recursive calls is O(log n), where n is the size of arr, the space complexity is O(log n). As the size of arr increases, the amount of memory required grows logarithmically.
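Here is a minimal sketch of such a recursive version (binary_search_recursive is our own name for illustration, not part of the original lesson):

def binary_search_recursive(arr, low, high, target):
    if low > high:
        return -1 # target not found
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        # each call halves the search space, so the recursion depth,
        # and therefore the stack space, is at most O(log n)
        return binary_search_recursive(arr, mid + 1, high, target)
    else:
        return binary_search_recursive(arr, low, mid - 1, target)

print(binary_search_recursive(numbers, 0, len(numbers) - 1, 263))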

Exponential O(2^n)

Time

An example of an O(2^n) algorithm is the recursive implementation of the Fibonacci sequence. The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. The recursive implementation calculates each number by recursively calling itself with the two preceding positions until it reaches the base case (the first or second number in the sequence). The algorithm takes O(2^n) time because each call makes two further recursive calls, so the tree of calls roughly doubles in size every time n increases by one.

def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

#print(fibonacci(5))
#print(fibonacci(10))
#print(fibonacci(20))
print(fibonacci(30))
#print(fibonacci(40))

Space

This function takes a set s as input and generates all possible subsets of s. The function does this by recursively generating the subsets of the set without the first element, and then adding the first element to each of those subsets to generate the subsets that include the first element. The function creates a new list for each recursive call that stores the subsets, and each element in the list is a new list that represents a subset. The number of subsets that can be generated from a set of size n is 2^n, so the space complexity of this function is O(2^n). As the size of the input set increases, the amount of memory required to execute the function grows exponentially.

def generate_subsets(s):
    if not s:
        return [[]]
    subsets = generate_subsets(s[1:])
    return [[s[0]] + subset for subset in subsets] + subsets

print(generate_subsets([1,2,3]))
print(generate_subsets([1,2,3,4,5,6]))
#print(generate_subsets(numbers))

Using the time library, we are able to see the difference in the time it takes to calculate the fibonacci function above.

  • Based on what is known about the other time complexities, hypothesize the resulting elapsed time if the function is replaced (a linear-time replacement is sketched after the timing code below).
import time

start_time = time.time()
print(fibonacci(34))
end_time = time.time()

total_time = end_time - start_time
print("Time taken:", total_time, "seconds")

start_time = time.time()
print(fibonacci(35))
end_time = time.time()

total_time = end_time - start_time
print("Time taken:", total_time, "seconds")

Hacks

  • Record your findings when testing the time elapsed of the different algorithms.
  • Although we will go more in depth later, time complexity is a key concept that relates to the different sorting algorithms. Do some basic research on the different types of sorting algorithms and their time complexity.
  1. Bubble Sort: It is one of the simplest sorting algorithms, where each element is compared with its adjacent element and swapped if the adjacent element is greater. This process is repeated until the list is sorted. The time complexity of the bubble sort algorithm is O(n^2) (see the sketch after this list).

  2. Selection Sort: First, the smallest element in the list is found and swapped with the first element. Then, the smallest element in the remaining list is found and swapped with the second element, and so on. The time complexity of the selection sort algorithm is O(n^2).

  3. Insertion Sort: This algorithm works by iterating through the list and inserting each element into its proper position in the sorted sub-list. The time complexity of the insertion sort algorithm is also O(n^2).

  4. Merge Sort: This algorithm divides the list into two halves recursively until each sub-list contains only one element. Then, the sub-lists are merged back together in sorted order. The time complexity of the merge sort algorithm is O(n log n).

  5. Quick Sort: It is a divide-and-conquer algorithm that selects a pivot element and partitions the list into two sub-lists, one with elements smaller than the pivot and the other with elements greater than the pivot. This process is repeated recursively until the list is sorted. The average-case time complexity of the quick sort algorithm is O(n log n), though a consistently bad pivot choice degrades it to O(n^2) in the worst case.

  6. Heap Sort: This algorithm sorts the elements by constructing a binary heap and repeatedly extracting the maximum element from the heap until the list is sorted. The time complexity of the heap sort algorithm is also O(n log n).
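To make the O(n^2) pattern concrete, here is a minimal bubble sort sketch (our own illustration; the nested loops are what produce the quadratic running time):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # after each outer pass the largest remaining element has
        # "bubbled" to the end, so the inner loop can stop one step earlier
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j] # swap the adjacent pair
    return arr

print(bubble_sort([5, 2, 9, 1, 7]))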

  • Why is time and space complexity important when choosing an algorithm?

Time and space complexity are important when choosing an algorithm because they are two major factors that affect the efficiency of code. Especially for larger code projects, people want to write code that runs in the least amount of time and uses the least amount of memory, while staying easy to read and maintain.

  • Should you always use a constant time algorithm / Should you never use an exponential time algorithm? Explain.

It's not always necessary to use a constant time algorithm. Not every code cell is trying to accomplish the same thing, so the choice of algorithm depends on the specific problem you are trying to solve, the size of the input data, and the resources available to you. If you have a small input size and limited resources, a constant time algorithm might be the most efficient choice. But if your input size is large, it might not be possible to solve the problem with a constant time algorithm, and you might have to resort to an algorithm with higher time complexity, such as an exponential time algorithm.

  • What are some general patterns that you noticed to determine each algorithm's time and space complexity?

    • Recursive calls: If an algorithm is recursive, the time complexity is often expressed using a recurrence relation. The complexity of a recursive algorithm is usually related to the number of recursive calls and the size of the data being processed.

    • Sorting and searching: If an algorithm involves sorting or searching data, the time complexity is usually expressed in terms of the number of elements being sorted or searched. For example, quicksort and merge sort have a time complexity of O(n log n), while linear search has a time complexity of O(n). (A merge sort sketch follows this list.)

    • Data structures: If an algorithm uses data structures like arrays, lists, or trees, the space complexity is usually proportional to the size of the data being stored. For example, an algorithm that creates an array of size n has a space complexity of O(n).
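Tying the recursion and sorting patterns together, here is a short merge sort sketch (our own illustration): the recursive halving contributes the log n factor, and the linear-time merge of each pair of halves contributes the n.

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # recursive halving gives the log n depth
    right = merge_sort(arr[mid:])
    # merging two sorted halves takes linear time, giving O(n log n) overall
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))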

Complete the Time and Space Complexity analysis practice questions below.

Time and Space Complexity Analysis Questions

from random import random # random() must be imported for this snippet to run

a = 0
b = 0
for i in range(N): # N and M are assumed to be defined
  a = a + random()

for i in range(M):
  b = b + random()

# Options: 
# O(N * M) time, O(1) space
# O(N + M) time, O(N + M) space
# O(N + M) time, O(1) space 
# O(N * M) time, O(N + M) space

O(N + M) time, O(1) space is the answer. Since N and M are independent variables, we can't say which one is the leading term, so the time complexity of this code is O(N + M). Since the variables' sizes aren't determined by the input size, the space complexity remains constant, O(1).

a = 0
for i in range(N):
  for j in reversed(range(i,N)):
    a = a + i + j

# Options
# O(N)
# O(N*log(N))
# O(N * Sqrt(N))
# O(N*N)

O(N*N) is the answer. There are two nested loops over ranges of size up to N; the inner loop shrinks as i grows, but the total work is still N + (N-1) + ... + 1, which is on the order of N^2. The time taken to complete the algorithm therefore grows quadratically with N.

k = 0
for i in range(n//2, n):
  j = 2
  while j < n: # j doubles each pass, so this inner loop runs about log2(n) times
    k = k + n / 2
    j = j * 2

# Options
# O(n)
# O(N log N)
# O(n^2)
# O(n^2 log n)

O(N log N) is the answer. The outer loop runs from n//2 to n, which is O(N) iterations. In the inner loop, j doubles on each pass, so it stops once 2^x >= n, i.e. after about log2(n) iterations. Multiplying the two gives a time complexity of O(N log N).

What does it mean when we say that an algorithm X is asymptotically more efficient than Y?

  1. X will always be a better choice for small inputs
  2. X will always be a better choice for large inputs
  3. Y will always be a better choice for small inputs
  4. X will always be a better choice for all inputs

The answer is 2. When we say that algorithm X is asymptotically more efficient than algorithm Y, we mean that X's running time grows more slowly than Y's as the input size increases. This implies that as the input size grows towards infinity, X will eventually become faster than Y for large enough inputs, though Y may still win on small ones.
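A quick numeric illustration (the cost functions 100n and n^2 are made up for this sketch) shows why the advantage only kicks in past some input size:

for n in [10, 50, 100, 200, 1000]:
    cost_x = 100 * n # O(n), but with a large constant factor
    cost_y = n * n   # O(n^2), with a small constant factor
    print(f"n={n}: X={cost_x:,} Y={cost_y:,} cheaper: {'X' if cost_x < cost_y else 'Y'}")

For n below 100, the asymptotically "slower" algorithm Y is actually cheaper; only beyond that crossover does X's advantage show.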

a = 0
i = N
while (i > 0):
  a += i
  i //= 2

# Options
# O(N)
# O(Sqrt(N))
# O(N / 2)
# O(log N)

O(log N) is the answer. Since i is halved on every iteration, we have to find the smallest x such that N / 2^x < 1, i.e. 2^x > N, which gives x ≈ log2(N) iterations.

Which of the following best describes the useful criterion for comparing the efficiency of algorithms?
  • Time
  • Memory
  • Both of the above
  • None of the above

The answer is 3 (both of the above). Comparing the efficiency of algorithms depends on both the time and the memory they take. The algorithm which runs in less time and takes less memory, even for a large input size, is considered the more efficient one.

How is time complexity measured?
  • By counting the number of algorithms in an algorithm.
  • By counting the number of primitive operations performed by the algorithm on a given input size.
  • By counting the size of data input to the algorithm.
  • None of the above

The answer is 2. To determine the time complexity of an algorithm, we first identify the operations that the algorithm performs, and then we count the number of times each operation is executed as a function of the input size n.
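As a small illustration (our own sketch, not part of the quiz), we can literally count primitive operations instead of measuring wall-clock time:

def count_operations(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1 # count one primitive operation per inner iteration
    return ops

for n in [10, 100, 1000]:
    print(f"n={n}: {count_operations(n):,} operations") # grows as n^2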

i = 1
while i < n: # a for-range loop would ignore the reassignment of i and run n times, so a while loop is needed here
  i = i * k

# Options
# O(n)
# O(k)
# O(log_k n)
# O(log_n k)

The answer is O(log_k n). Since i is multiplied by k on every pass, the loop runs until k^x >= n, which takes about log_k(n) iterations.

value = 0
for i in range(n):
  for j in range(i):
    value=value+1

# Options
#n
# (n+1)
# n(n-1)
# n(n+1)

The answer is n(n-1). The outer loop runs n times, and for each i the inner loop runs i times, so the total count is 0 + 1 + ... + (n-1) = n(n-1)/2, which is on the order of n(n-1).

Algorithm A and B have a worst-case running time of O(n) and O(log n), respectively. Therefore, algorithm B always runs faster than algorithm A.
  • True
  • False

The answer is false. The Big-O notation provides an asymptotic comparison of the running time of algorithms. For inputs smaller than some threshold n0, algorithm A might run faster than algorithm B, for instance.