The problem asks for the worst-case number of arithmetic operations performed by a recursive binary search on a sorted array of size \(n\). To solve this, we need to understand how binary search works and analyze its time complexity.
- Binary Search operates on a sorted array. It works by comparing the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated, and the search continues on the remaining half.
- This method effectively halves the search space at each step, making it a highly efficient algorithm for finding elements in a sorted array.
- Each recursive call performs only a constant number of arithmetic operations (computing the midpoint index), so the total operation count is proportional to the maximum depth of recursive calls. That depth is determined by how many times the array can be halved before no elements remain.
- After every recursive call the subarray size shrinks from \( n \) to \( n/2 \), then to \( n/4 \), and so on, reaching 1 after about \(\log_2 n\) halvings. The recursion depth is therefore \(\Theta(\log_2 n)\).
- Therefore, the worst-case time complexity in terms of arithmetic operations for a recursive binary search is \(\Theta(\log_2 n)\).
Now, let's rule out the other options:
- \(\Theta(\sqrt{n})\): This does not hold for binary search. Square-root complexities arise in algorithms that partition a problem into \(\sqrt{n}\)-sized blocks, not in algorithms that halve their problem size at each step.
- \(\Theta(n^2)\): This complexity is characteristic of algorithms with nested loops iterating over the entire set of elements, which is not applicable to binary search.
- \(\Theta(n)\): Linear search is \(\Theta(n)\) because it may examine every element; binary search never does, since it discards half of the remaining elements at each step.
Thus, the correct answer is \(\Theta(\log_2 n)\), which corresponds to the worst-case scenario for a binary search on a sorted array.
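To see the logarithmic growth numerically, a minimal sketch (helper name `halving_steps` is our own) counts how many times \( n \) can be halved before reaching zero, which equals the worst-case number of probes:

```python
def halving_steps(n):
    """Count halvings of n until it reaches 0: floor(log2(n)) + 1."""
    steps = 0
    while n > 0:
        n //= 2
        steps += 1
    return steps

# Doubling n adds only about one extra probe:
for n in (1, 10, 100, 1000, 10**6):
    print(n, halving_steps(n))
```

Note that a million elements need only 20 probes in the worst case, while linear search would need up to a million.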