dev_to, March 7, 2026


Why Dynamic Arrays Aren't Actually Dynamic

dynamic-arrays · computer-science · data-structure


Original Content

TL;DR: Standard arrays are fixed-size by design to ensure O(1) access. Dynamic arrays manage this constraint by over-allocating memory and periodically resizing. By growing the backing array geometrically (usually 1.5x or 2x), the expensive O(n) copy operations are rare enough that the average cost per insertion remains O(1), a concept known as amortized constant time.

I often find that one of the first abstractions we take for granted as engineers is the dynamic array. Whether you are using an ArrayList in Java or a standard array in JavaScript, it is easy to assume these structures just "grow" naturally. In reality, memory is still a rigid series of fixed slots. I want to look at how we maintain the abstraction of contiguous growth while staying within the physical limits of memory allocation.

I look at standard arrays as fixed-size blocks because the operating system requires a contiguous chunk of memory to provide O(1) random access. If I want to calculate the address of the fifth element, the CPU needs to know exactly how far to offset from the starting address without checking for gaps.

When I allocate an array, the system reserves a specific range of addresses. If I need to add an eleventh item to a ten-item array, I cannot simply expand the block in-place. There is no guarantee that the memory address immediately following my array isn't already claimed by another process or variable. To grow, I am forced to move the entire data set to a new, larger location that can accommodate the new size.

In my experience, the most critical part of this process is the growth factor. When the backing array hits its capacity limit, the system allocates a brand-new, larger buffer and executes a linear O(n) copy operation to move all existing elements from the old memory block to the new one. I see a lot of logic where the new array is scaled by a factor of 1.5x or 2x rather than just adding a single slot. This geometric growth is intentional.
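To make this concrete, here is a minimal sketch of that resize mechanism, assuming a doubling growth policy (the class name and fields here are hypothetical, for illustration only, not any particular library's implementation):

```python
# A minimal dynamic-array sketch: a fixed-size backing list that
# doubles when full, counting how many elements each resize copies.

class DynamicArray:
    def __init__(self, capacity=4):
        self._items = [None] * capacity   # the fixed-size backing "block"
        self._size = 0                    # elements actually stored
        self._capacity = capacity
        self.copies = 0                   # total elements moved by resizes

    def append(self, value):
        if self._size == self._capacity:  # block is full: must relocate
            self._grow()
        self._items[self._size] = value
        self._size += 1

    def _grow(self):
        # Allocate a new block twice as large and copy everything over.
        # This is the O(n) step that geometric growth makes rare.
        new_capacity = self._capacity * 2
        new_items = [None] * new_capacity
        for i in range(self._size):
            new_items[i] = self._items[i]
        self.copies += self._size
        self._items = new_items
        self._capacity = new_capacity

arr = DynamicArray()
for i in range(5):
    arr.append(i)
print(arr._capacity, arr.copies)  # prints "8 4": the fifth append
                                  # doubled capacity and copied 4 elements
```

Note that the fifth append triggers the relocation, matching the resize pattern in the table below: the old four-element block is copied in full, and the next resize will not occur until the ninth append.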
If I only increased the size by one slot every time I ran out of room, I would be performing an O(n) copy on every single addition. By doubling the capacity, I ensure that as the dataset grows, the intervals between these expensive reallocations become significantly longer.

| Current Capacity | Elements Added | Resize Triggered? | New Capacity | Copy Cost (Elements) |
|------------------|----------------|-------------------|--------------|----------------------|
| 4                | 4              | No                | 4            | 0                    |
| 4                | 5              | Yes               | 8            | 4                    |
| 8                | 8              | No                | 8            | 0                    |
| 8                | 9              | Yes               | 16           | 8                    |
| 16               | 17             | Yes               | 32           | 16                   |

I use the term amortized constant time to describe an operation that is occasionally expensive but usually very cheap. In the context of dynamic arrays, the O(n) cost of a resize is spread out across a large number of O(1) insertions, making the average cost per operation effectively constant.

I like to explain this through the lens of a gym membership. I make one large payment of $600 at the start of the year (representing the expensive O(n) copy operation). For the next 364 days, I walk into the gym for free (representing the O(1) insertions). If I analyze the cost of a single day, it is either $600 or nothing. However, the amortized cost, averaged over the whole year, is only about $1.64 per day. For all intents and purposes, I can treat the operation as O(1) because the "spikes" in latency are so infrequent.

A few questions worth thinking about:

- Why do some implementations use a 1.5x growth factor instead of 2x?
- Can I avoid the O(n) copy cost if I know my data size?
- Is there a downside to using very large growth factors?

Cheers!
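Postscript: the amortized claim above is easy to check numerically. Here is a quick simulation (a sketch with hypothetical names, not any production allocator) that totals the copy work across n appends under a doubling policy:

```python
# Simulate n appends into an array that doubles at capacity,
# and total up the copy work performed by all resizes combined.

def total_copy_cost(n, initial_capacity=4, growth_factor=2):
    capacity, copies = initial_capacity, 0
    for size in range(n):
        if size == capacity:          # backing array is full: resize
            copies += size            # O(n) copy of all current elements
            capacity *= growth_factor
    return copies

n = 1_000_000
copies = total_copy_cost(n)
print(copies / n)  # roughly 1.05: a small constant per append, regardless of n
```

With a doubling policy the total copy work across n appends is bounded by 2n, so the average cost per append stays constant no matter how large the array grows. That is the amortized O(1) guarantee in action.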