We have compiled the most frequently asked Python interview questions on time complexity and algorithm analysis, which will help you at different expertise levels.
Python Interview Questions on Algorithm Analysis and Big-O
An algorithm is a procedure or set of instructions that is carried out in order to solve a problem. Coding demands procedural knowledge, which is expressed in the form of algorithms. An algorithm provides a recipe, a roadmap, or a simple sequence of instructions required to accomplish a particular programming task. So, for things as simple as adding two numbers or as complex as designing a banking program, everything requires a systematic approach. It is important to create a roadmap that makes sense before going ahead with the coding. For simple programs, it is easy to create algorithms through logical thinking. There are also some famous algorithms that can be used for complex problem solving and hence are frequently used in coding.
What are the steps involved in writing an algorithm?
There are three main steps involved in writing an algorithm:
1. Data Input
- Formulate the problem and decide what data types will be the input.
2. Data Processing
- Decide what computation will be carried out to get the desired result.
- Also identify the decision points, conditions under which various functions will operate.
3. Results Output
- One should know the expected results so that they can be verified by the actual results.
What are the characteristics of an algorithm?
An algorithm has the following five characteristics:
- Precision: It clearly defines one single starting point and one or more well-defined ending points.
- Effectiveness: It is composed of effective primitives that are understood by the person or machine using it.
- Input/Output Specified: The algorithm must receive input and it must also produce an output.
- Finiteness: An algorithm stops after the execution of a finite number of steps.
- Uniqueness: In an algorithm, the result of every step is uniquely defined and its value depends only on the input provided and the output generated by the previous steps.
What is the meaning of Problem solving?
Problem-solving is a logical process in which a problem is first broken down into smaller parts that can be solved step by step to get the desired solution.
What would you call an algorithm that puts element lists in a certain order?
An algorithm that puts elements of a list in a certain order is called a sorting algorithm. Sorting is often required to optimize the efficiency of other algorithms. The output of a sorting algorithm is in non-decreasing order, meaning no element is smaller than the element that precedes it, and the output is a reordering that retains all the elements of the input, which is generally in array form.
Some very commonly known sorting algorithms are as follows:
1. Simple sorts:
a. Insertion sort
b. Selection sort
2. Efficient Sorts:
a. Merge sort
b. Heap sort
c. Quick sort
d. Shell sort
3. Bubble sort
4. Distribution sort
a. Counting sort
b. Bucket sort
c. Radix sort
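As a minimal sketch of one of the simple sorts listed above, here is insertion sort (the function name and sample data are illustrative, not from the text):

```python
def insertion_sort(items):
    """Sort a list in place by inserting each element into its sorted prefix."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift elements larger than key one slot to the right
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Insertion sort runs in O(n<sup>2</sup>) in the worst case but is efficient on small or nearly sorted inputs, which is why it appears under "simple sorts".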
What type of algorithm calls itself with smaller or simpler input values?
A recursive algorithm calls itself with smaller input values. It is used if a problem can be solved by using its own smaller versions.
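A classic minimal sketch of a recursive algorithm is the factorial function (our illustrative example):

```python
def factorial(n):
    # Base case: stop recursing
    if n <= 1:
        return 1
    # Recursive case: the function calls itself with a smaller input
    return n * factorial(n - 1)

print(factorial(5))  # 120
```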
What is the purpose of the divide and conquer algorithm? Where is it used?
As the name suggests, the divide and conquer algorithm divides the problem into smaller sub-problems. These sub-problems are solved, and the solutions obtained are combined to solve the original problem. It is used in binary search, merge sort, and quick sort, to name a few.
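As a sketch of the divide-and-conquer pattern, here is binary search on a sorted list (the function name and sample list are illustrative):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2          # divide: pick the middle element
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:  # conquer: keep only the right half
            low = mid + 1
        else:                            # conquer: keep only the left half
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```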
What is dynamic programming?
In dynamic programming, an optimization problem is broken down into much simpler sub-problems; each sub-problem is solved only once and the result is stored. For example, the Fibonacci sequence (explained in the chapter on Recursion).
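A minimal sketch of dynamic programming, using the Fibonacci example mentioned above (memoization with a dictionary is one common approach; the function name is ours):

```python
def fib(n, memo=None):
    # Each sub-problem is solved only once; its result is stored in memo
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(30))  # 832040
```

Without the memo, the naive recursion repeats the same sub-problems and takes exponential time; with it, each value is computed once, giving O(n) time.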
What is Big-O notation?
Big-O notation helps you analyze how an algorithm will perform as the data involved in the process increases; in simple words, it is a simplified analysis of how efficient an algorithm can be.
Since algorithms are an essential part of software programming, it is important to have some idea of how long an algorithm will take to run; only then can we compare two algorithms. A good developer always considers time complexity while planning the coding process.
It helps in identifying how long an algorithm takes to run.
(1) Gives us the algorithm complexity in terms of input size n.
(2) It considers only the steps involved in the algorithm.
(3) Big-O analysis can be used to analyze both time and space.
An algorithm's efficiency can be measured in terms of the best, average, or worst case, but Big-O notation goes with the worst-case scenario.
It is possible to perform a task in different ways which means that there can be different algorithms for the same task having different complexity and scalability.
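To illustrate, here are two sketches that solve the same task, summing the first n integers, with different complexities (both function names are ours):

```python
def sum_loop(n):
    # O(n): one addition per value
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # O(1): Gauss's closed-form formula, a constant number of steps
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050
```

Both produce the same result, but the second scales to any n without extra work, which is exactly the kind of difference Big-O analysis captures.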
Constant Complexity [O(1)]
An operation that is constant takes the same time at runtime, irrespective of the input value. For example:
>>> x = 5 + 7
The statement x = 5 + 7 does not depend on the input data size in any way. This is known as O(1) (big O of 1).
Suppose there is a sequence of steps, all of constant time, as shown in the following code:
>>> a = (4 - 6) + 9
>>> b = (5 * 8) + 8
>>> print(a * b)
Now let's compute the Big-O for these steps:
Total time = O(1) + O(1) + O(1)
While computing Big-O we ignore constants, because once the data size increases the value of a constant does not matter.
Hence the Big-O would be O(1).
Linear complexity: [O(n)]
In the case of linear complexity, the time taken depends on the value of the input provided.
Suppose you want to print a table of 5. Have a look at the following code:
>>> for i in range(0, n):
        print("\n 5 x ", i, "=", 5 * i)
The number of lines to be printed depends on ‘n’. For n =10, only ten lines will be printed but for n= 1000, the for loop will take more time to execute.
The print statement is O(1).
So, the block of code is n * O(1), which is O(n).
Consider the following lines of code:
>>> j = 0 ————(1)
>>> for i in range(0, n): ————(2)
        print("\n 5 x ", i, "=", 5 * i)
(1) is O(1)
(2) block is O(n)
Total time = O(1) + O(n)
We can drop O(1) because it is a lower-order term; as the value of n becomes large, (1) will not matter and the runtime effectively depends on the for loop.
Hence the Big-O for the code mentioned above is O(n).
Quadratic Complexity: [O(n<sup>2</sup>)]
As the name indicates, with quadratic complexity the time taken depends on the square of the input value. This can be the case with nested loops. Look at the following code:
>>> for i in range(0, n):
        for j in range(0, n):
            print("I am in ", j, " loop of i = ", i, ".")
In this piece of code, the print statement is executed n<sup>2</sup> times.
Logarithmic Complexity: [O(log n)]
Logarithmic complexity indicates that in the worst case the algorithm will have to perform log(n) steps. To understand this, let's first understand the concept of the logarithm.
Logarithms are nothing but the inverse of exponentiation.
Now let's take the term 2<sup>3</sup>. Here 2 is the base and 3 is the exponent. We therefore say that the base-2 log of 8 (log2 8) = 3. In the same way, since 10<sup>5</sup> = 100000, the base-10 log of 100000 (log10 100000) = 5.
Since the computer works with binary numbers, in programming and Big-O we generally work with base-2 logs.
Have a look at the following observation:
1. 2<sup>0</sup> = 1, log2 1 = 0
2. 2<sup>1</sup> = 2, log2 2 = 1
3. 2<sup>2</sup> = 4, log2 4 = 2
4. 2<sup>3</sup> = 8, log2 8 = 3
5. 2<sup>4</sup> = 16, log2 16 = 4
This means that if n =1, the number of steps = 1.
If n = 4, the number of steps will be 2.
If n = 8, then the number of steps will be 3.
So, as data doubles the number of steps increases by one.
The number of steps grows slowly in comparison to the growth in data size.
The best example of logarithmic complexity in software programming is the binary search tree. You will learn more about it in the chapter on Trees.
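The doubling pattern described above can be checked with a small sketch that counts how many times n can be halved before reaching 1 (an illustrative stand-in for log2 n; the function name is ours):

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1 (= floor(log2 n))."""
    steps = 0
    while n > 1:
        n //= 2     # each step halves the remaining data
        steps += 1
    return steps

for n in (1, 4, 8, 16, 1024):
    print(n, "->", halving_steps(n))
```

Doubling n from 8 to 16 adds only one step, matching the observation that the step count grows slowly relative to the data size.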
What is worst-case time complexity of an algorithm?
The worst-case time complexity of an algorithm is the longest running time it can take on any input of a given size. The efficiency of algorithms can be compared by looking at the order of growth of their worst-case complexity.
What is the importance of Big-O notation?
Big-O notation describes how fast the runtime of an algorithm will increase with respect to the size of the input. It gives you a fair idea of how your algorithm will scale, as well as the worst-case complexity of the algorithms you might be considering for your project. This lets you compare two algorithms and decide which is the better option for you.
What is the need for run-time analysis for an algorithm when you can find the exact runtime for a piece of code?
The exact runtime is not considered because the results can vary depending on the hardware of the machine, the speed of the processor, and the other processes running in the background. What is more important to understand is how the performance of the algorithm is affected by an increase in input data.
The Big-O analysis is also known as asymptotic analysis.
What do you understand by asymptotic notation?
It is very important to know how fast a program will run, and as mentioned earlier we do not want to consider the exact runtime, because different computers have different hardware capabilities and we would not get reliable information in this manner.
In order to resolve this issue, the concept of asymptotic notation was developed. It provides a universal approach for measuring the speed and efficiency of an algorithm. For applications dealing with large inputs we are more interested in behaviour of an algorithm as the input size increases. Big O notation is one of the most important asymptotic notations.
Why is Big O notation expressed as O(n)?
Big-O notation compares the runtime for various input sizes. The focus is only on the impact of input size on the runtime, which is why we use the n notation. As the value of n increases, our only concern is to see how it affects the runtime. If we had been measuring the runtime directly, then the unit of measurement would have been time units such as seconds or microseconds. However, in this case 'n' represents the size of the input, and 'O' stands for 'Order'. Hence, O(1) stands for the order of 1, O(n) stands for the order of n, and O(n<sup>2</sup>) stands for the order of the square of the size of the input.
Big-O notations and names:
- Constant – O(1)
- Logarithmic – O(log(n))
- Linear – O(n)
- Log Linear – O(n log(n))
- Quadratic – O(n<sup>2</sup>)
- Cubic – O(n<sup>3</sup>)
- Exponential – O(2<sup>n</sup>)
Write code in Python that takes a list of alphabets such as ['a', 'b', 'c', 'd', 'e', 'f', 'g'] and returns a list of combinations such as ['bcdefg', 'acdefg', 'abdefg', 'abcefg', 'abcdfg', 'abcdeg', 'abcdef']. Find the time complexity of the code.
input_value = input("enter the list of alphabets separated by commas: ")
alphabets = input_value.split(',')
final = []
result = ''  # renamed from str to avoid shadowing the built-in
for element in range(0, len(alphabets)):
    for index, item in enumerate(alphabets):
        if item != alphabets[element]:
            result = result + item
    final.append(result)
    result = ''
print(final)
enter the list of alphabets separated by commas: a,b,c,d,e,f,g
['bcdefg', 'acdefg', 'abdefg', 'abcefg', 'abcdfg', 'abcdeg', 'abcdef']
>>>
The code has a runtime of O(n<sup>2</sup>).
The following code finds the sum of all elements in a list. What is its time complexity?
def sum(input_list):
    total = 0
    for i in input_list:
        total = total + i
    print(total)
def sum(input_list):
    total = 0                # ----------- (1)
    for i in input_list:     # ----------- (2)
        total = total + i
    print(total)             # ----------- (3)
time for block (1) = O(1)
time for block (2) = O(n)
time for block (3) = O(1)
Total time = O(1) + O(n) + O(1)
Dropping constants, Total time = O(n)
What are the pros and cons of time complexity analysis?
- Pros: It is a good way of getting an idea of the efficiency of an algorithm.
- Cons: It can be difficult to assess the complexity of complicated functions.
What is the difference between time and space complexity?
Time complexity gives an idea about the number of steps that would be involved in order to solve a problem. The general order of complexity in ascending order is as follows:
O(1) < O(log n) < O(n) < O(n log n) < O(n<sup>2</sup>)
Unlike time, memory can be reused, and people are generally more interested in how fast the calculations can be done. This is one main reason why time complexity is discussed more often than space complexity. However, space complexity is never ignored. Space complexity decides how much memory will be required to run a program completely. Memory is required for:
- Instruction space
- Data space for constants, variables, structured variables, dynamically allocated areas, etc.
What is the difference between auxiliary space and space complexity?
Auxiliary space is the extra space that is required while executing an algorithm and it is temporary space.
Space complexity on the other hand is the sum of auxiliary space and input space:
Space complexity = Auxiliary space + Input space
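As an illustrative sketch of the auxiliary-space distinction, here are two ways of reversing a list: one uses O(n) auxiliary space, the other O(1) (both function names are ours):

```python
def reverse_copy(items):
    # Auxiliary space O(n): builds a brand-new list of the same length
    return items[::-1]

def reverse_in_place(items):
    # Auxiliary space O(1): only two index variables; swaps happen inside the input
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

print(reverse_copy([1, 2, 3]))      # [3, 2, 1]
print(reverse_in_place([1, 2, 3]))  # [3, 2, 1]
```

Both have O(n) total space complexity once the input list itself is counted; they differ only in the auxiliary term.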
What is the memory usage while executing the algorithm?
Memory is used for:
- Saving the compiled version of instructions.
- In the case of nested functions, or one algorithm calling another, the caller's variables are pushed onto a system stack and wait there until the inner algorithm has finished executing.
- To store data and variables.
You have learned about time complexity. Now, let’s move on to space complexity. As the name suggests space complexity describes how much memory space would be required if the size of input n increases. Here too we consider the worst-case scenario.
Now have a look at the following code:
>>> x = 23 (1)
>>> y = 34 (2)
>>> sum = x + y (3)
In this example, we need space to store three variables: x in (1), y in (2), and sum in (3). This requirement of 3 variables does not change with the input, so the space complexity is O(1).
Now, have a look at the following code:
word = input("enter a word : ")
word_list = []
for element in word:
    word_list.append(element)
The size of the word_list object increases with n. Hence, in this case, the space complexity is O(n).
If there is a function, function1, that has, say, three variables, and function1 calls another function named function2 that has six variables, then the overall requirement for temporary workspace is 9 units. Even if function1 calls function2 ten times, the workspace requirement remains the same, because function2's space is released each time it returns.
What would be the space complexity for the following piece of code? Explain.
n = int(input("provide a number : "))
statement = "Happy birthday"
for i in range(0, n):
    print(i + 1, ". ", statement)
The space complexity for the above code will be O(1), because space is required only for storing the integer variable n and the string statement. Even if the value of n increases, the space requirement remains constant. Hence, O(1).
What is the time and space complexity for the following:
a = b = 0
for i in range(n):
    a = a + i
for j in range(m):
    b = b + j
Time for first loop = O(n)
Time for second loop = O(m)
Total time = O(n) + O(m) = O(n + m)
Space complexity: only a fixed number of variables (a, b, i, j) are stored, so it is O(1).
Find the time complexity for the following code:
a = 0
for i in range(n):
    a = a + i
    for j in range(m):
        a = a + i + j
Time complexity: O(n × m), which is O(n<sup>2</sup>) when m and n are of the same order.
What will be the time complexity for the following code?
i = j = k = 0
for i in range(n // 2, n):
    j = 2
    while j < n:
        k = k + n // 2
        j = j * 2
(The inner loop is written with while, because reassigning j inside a Python for loop does not affect the iteration.)
The time complexity of the outer loop is O(n/2).
The time complexity of the inner loop is O(log n), because j doubles itself until it reaches n.
Total time = O(n/2) × O(log n) = O(n log n)
What is the time complexity for the following code:
a = 0
i = n
while i > 0:
    a = a + i
    i = i // 2
Time complexity will be O(log n), because i is halved on every iteration.
Big O for Python Data Structures
The Python language comprises mutable and immutable objects. Numbers, strings, and tuples come in the latter category, whereas lists, sets, and dictionaries are mutable objects. Lists and dictionaries are called mutable because they can be altered at any time. Lists are like arrays in that data elements can be inserted or deleted by providing an index value, and since values can be added or removed at any point, lists are considered dynamic. Hence, lists are known as dynamic arrays. You can also say that lists are called dynamic arrays because:
- Its size can be dynamically modified at run time.
- You need not define the size of the list when you create it.
- They allow you to store more than one variable.
- Once a dynamic array is filled, a bigger chunk of memory is allocated, and the elements of the original array are copied to this new section of memory; as a result, the list can continue to fill available slots.
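The resizing behaviour described above can be observed with `sys.getsizeof` (a rough sketch; the exact sizes and growth points depend on the Python version and platform, so no specific numbers are claimed here):

```python
import sys

items = []
last_size = sys.getsizeof(items)
print(len(items), "items ->", last_size, "bytes")
for i in range(1, 50):
    items.append(i)
    size = sys.getsizeof(items)
    if size != last_size:
        # A resize happened: a bigger block was allocated and elements copied
        print(len(items), "items ->", size, "bytes")
        last_size = size
```

The size jumps only occasionally rather than on every append, which is why append is O(1) amortized.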
The most important operations that are performed on a list are as follows:
- Indexing an element
- Assigning a value to an index
Both of the methods mentioned above are designed to run in constant time, O(1). The Big-O for common list operations is as follows:
- index []: O(1)
- index assignment: O(1)
- append: O(1)
- pop(): O(1)
- pop(i): O(n)
- insert(i, item): O(n)
- delete operator: O(n)
- contains (in): O(n)
- get slice [x:y]: O(k)
- del slice: O(n)
- set slice: O(n+k)
- reverse: O(n)
- concatenate: O(k)
- sort: O(n log n)
- multiply: O(nk)
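The gap between pop() at O(1) and pop(0) at O(n) can be sketched with the standard `timeit` module (absolute timings vary by machine; only the relative difference matters):

```python
from timeit import timeit

setup = "data = list(range(100000))"
# pop() removes from the end: O(1); pop(0) shifts every remaining element: O(n)
end_pop = timeit("data.append(0); data.pop()", setup=setup, number=1000)
front_pop = timeit("data.insert(0, 0); data.pop(0)", setup=setup, number=1000)
print(f"pop()  took {end_pop:.5f}s")
print(f"pop(0) took {front_pop:.5f}s")
```

On any reasonably sized list, the front operations are dramatically slower because every remaining element must be shifted.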
Dictionaries are an implementation of hash tables and are operated on with keys and values.
Big-O efficiencies for common dictionary objects are given as follows:
- copy: O(n)
- get item: O(1)
- set item: O(1)
- delete item: O(1)
- contains (in): O(1)
- iteration: O(n)
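The O(1) membership test of a dictionary versus the O(n) scan of a list can likewise be sketched with `timeit` (timings are machine-dependent; the ratio is the point):

```python
from timeit import timeit

n = 100000
setup = f"d = dict.fromkeys(range({n})); lst = list(range({n}))"
# 'in' on a dict hashes the key (O(1)); on a list it scans element by element (O(n))
t_dict = timeit(f"{n - 1} in d", setup=setup, number=1000)
t_list = timeit(f"{n - 1} in lst", setup=setup, number=1000)
print(f"dict membership: {t_dict:.5f}s")
print(f"list membership: {t_list:.5f}s")
```

Looking up the worst-case element (the last one) makes the list scan its entire contents, while the dictionary finds it in constant time.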