You have a million integers. You need the one at position 731,842. How fast can you get it?
If you're scanning from the start, you're doing 731,842 operations. If you're using a linked list, you're chasing 731,842 pointers. But there's a data structure that does it in exactly one step, regardless of position. Before I name it — think about what property of memory would make that possible.
## The Property
Contiguous allocation. When elements sit next to each other in memory, all the same size, finding any one of them is arithmetic:
`address = base_address + (i * element_size)`
One multiplication, one addition. Position doesn't matter. Size doesn't matter. That's O(1) random access, and it's the defining characteristic of the structure you already know as an array.
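To make the arithmetic concrete, here's a small sketch (my illustration, not from the original article) that simulates contiguous memory with a raw `ArrayBuffer`. The `readAt` function is literally the formula above: base plus index times element size.

```typescript
// Simulate contiguous memory: 1,000,000 4-byte integers, back to back.
const COUNT = 1_000_000;
const ELEMENT_SIZE = 4; // bytes per 32-bit integer

const buffer = new ArrayBuffer(COUNT * ELEMENT_SIZE);
const view = new DataView(buffer);

// Fill: element i holds the value i.
for (let i = 0; i < COUNT; i++) {
  view.setInt32(i * ELEMENT_SIZE, i, true);
}

// Read by index: base + i * element_size. One multiply, one add — O(1).
function readAt(index: number): number {
  return view.getInt32(index * ELEMENT_SIZE, true);
}

console.log(readAt(731_842)); // → 731842, in one step
```

Position 731,842 costs exactly the same as position 0 — that's the whole point.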
Now here's a question worth sitting with: if reads are O(1), why isn't everything O(1)?
## Where It Breaks
Think about inserting an element at position 5 in a 1000-element array. What has to happen to elements 5 through 999?
Every single one shifts right. That's 995 operations for one insert. The contiguous layout that makes reads instant is the same property that makes middle-insertions expensive. You can't have gaps — gaps break the address formula.
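You can watch the shifting happen. This sketch (mine, for illustration) implements a middle insert by hand and counts every element move:

```typescript
// Insert `value` at `index`, shifting everything after it one slot right.
// Returns the number of element moves performed.
function insertAt(arr: number[], index: number, value: number): number {
  let shifts = 0;
  arr.push(0); // grow by one slot at the end
  // Walk backwards, moving each element from arr.length-2 down to `index`.
  for (let i = arr.length - 1; i > index; i--) {
    arr[i] = arr[i - 1];
    shifts++;
  }
  arr[index] = value;
  return shifts;
}

const arr = Array.from({ length: 1000 }, (_, i) => i);
console.log(insertAt(arr, 5, -1)); // → 995: elements 5..999 all moved
```

JavaScript's built-in `splice` does the same work under the hood — the cost is just hidden inside the engine.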
| Operation | Complexity | Ask yourself: why? |
|---|---|---|
| Read by index | O(1) | Address arithmetic |
| Push to end | O(1) amortized | Where does the "amortized" come from? |
| Insert in middle | O(n) | What if you could have gaps? |
| Delete from middle | O(n) | Same constraint, opposite direction |
| Search (unsorted) | O(n) | What would you need to do better? |
| Search (sorted) | O(log n) | What technique makes this possible? |
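The last row of the table deserves a sketch. The technique is binary search: each comparison discards half the remaining range, which is only possible because O(1) indexed access lets you jump straight to the midpoint. A minimal version (my sketch, not from the original article):

```typescript
// Binary search on a sorted array. Each iteration halves the search range,
// so the loop runs O(log n) times. Returns the index, or -1 if absent.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1; // midpoint of the current range
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                      // discard the right half
  }
  return -1;
}

console.log(binarySearch([1, 3, 5, 7, 9, 11], 7)); // → 3
```

Try this on a linked list and the trick collapses: reaching the midpoint alone costs O(n).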
## The Amortized Argument
When a dynamic array runs out of space, it allocates 2x the capacity and copies everything. That copy is O(n). So how can push be O(1)?
Don't read ahead. Think about it. If you double the capacity each time, how often does the expensive copy happen relative to the number of pushes?
The insight: after doubling to size 2n, you can do n more pushes before the next copy. The expensive operation happens exponentially less often. Over n total pushes, total copy work sums to roughly 2n — that's the geometric series. Per push: O(1) amortized.
If an interviewer asks "what's the complexity of push?", they're testing whether you can explain amortized — not just say the word, but walk through why. That's the difference between memorization and understanding. Can you explain it right now, out loud, without looking at the paragraph above?
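If you want to check the geometric-series claim empirically, here's a sketch (mine, not from the article) that simulates capacity doubling and counts every element copied during reallocations:

```typescript
// Simulate n pushes into a dynamic array that doubles when full.
// Returns the total number of element copies caused by reallocations.
function totalCopyWork(n: number): number {
  let capacity = 1;
  let size = 0;
  let copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) {
      copies += size; // reallocation: copy every existing element
      capacity *= 2;
    }
    size++;
  }
  return copies;
}

// For n = 1024: copies happen at sizes 1, 2, 4, ..., 512,
// summing to 1023 — less than n, and well under 2n total work per the argument.
console.log(totalCopyWork(1024));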
## The Patterns
Here's a problem: given an array of integers and a target sum, find two elements that add up to it. O(n²) brute force is obvious — check every pair. Can you do better?
```typescript
function twoSum(nums: number[], target: number): number[] {
  const map = new Map<number, number>(); // value -> index of elements seen so far
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    if (map.has(complement)) return [map.get(complement)!, i];
    map.set(nums[i], i);
  }
  return [];
}
```
What just happened? The nested loop disappeared. One pass, O(n). The trick was storing previous results in a hash map so future iterations can query them in O(1). This is the classic hash map pattern (two pointers solve it too, if the array is sorted) — but more importantly, it's a general principle: trade space for time by caching prior computation.
That principle shows up everywhere. Sliding windows. Dynamic programming. Memoization. The array is just where you first see it.
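The sliding window is the same principle in miniature: instead of recomputing each window from scratch, reuse the previous window's result. A sketch (my example, not from the original article) — max sum of any window of size k:

```typescript
// Max sum over all windows of size k.
// Naive: re-sum k elements per window, O(n * k).
// Sliding: add the entering element, subtract the leaving one — O(n).
function maxWindowSum(nums: number[], k: number): number {
  let windowSum = 0;
  for (let i = 0; i < k; i++) windowSum += nums[i]; // first window
  let best = windowSum;
  for (let i = k; i < nums.length; i++) {
    windowSum += nums[i] - nums[i - k]; // slide right by one
    best = Math.max(best, windowSum);
  }
  return best;
}

console.log(maxWindowSum([2, 1, 5, 1, 3, 2], 3)); // → 9, the window [5, 1, 3]
```

Same move as `twoSum`: a small amount of remembered state turns repeated work into a single pass.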
## Reflection
Before moving on, answer this without scrolling up: Why are middle insertions O(n) in an array? And: what property would a data structure need to make insertions O(1)?
If you can articulate both answers clearly, you understand arrays — not as a data structure you memorized, but as a set of trade-offs you can reason about.
*Original article: [jamesperalta.com](https://jamesperalta.com/learn/dsa/arrays)*