
What is Cache Memory?

Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM).

This memory is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.

The purpose of cache memory is to store program instructions and data that are used repeatedly in the operation of programs, or information that the CPU is likely to need next. The processor can access this information quickly from the cache rather than having to get it from the computer’s main memory. Fast access to these instructions increases the overall speed of the program.

Cache memory is an intermediate form of storage between the registers (located inside the processor and directly accessed by the CPU) and the RAM.

Cache memory can be primary or secondary cache memory, with primary cache memory directly integrated into (or closest to) the processor. In addition to hardware-based cache, cache memory also can be a disk cache, where a reserved portion on a disk stores and provides access to frequently accessed data/applications from the disk.


The functions of cache memory are:

  1. The CPU uses the cache memory to store instructions and data that are repeatedly required during the execution of programs, thus improving the performance and speed of the whole system.

  2. It also avoids the need to access the dynamic RAM (main memory) to retrieve the same data repeatedly. This is easier to appreciate with some background on the memory hierarchy:

  3. In the memory hierarchy, cache memory is placed between the CPU and main memory.

The function of the cache organization is to manage the transfer of information between main memory and the CPU. For readers unfamiliar with the memory hierarchy: it consists of all the storage devices employed in a computer system, from slow but high-capacity auxiliary memory, to relatively faster main memory, to even faster cache memory accessible to the high-speed processing logic.

CPU logic is faster than main memory access, so one technique used to compensate for this mismatch in operating speed is to place an extremely fast, small cache between the CPU and main memory, with an access time close to the processor logic cycle time.

To go a bit deeper: the CPU is faster than main memory, and main memory is faster than auxiliary (secondary) memory. Before the CPU can execute anything, it first has to read it from main memory.

The CPU can execute instructions very quickly, but main memory cannot supply them at that rate, so the effective speed of the CPU is limited by the speed of main memory. Why the delay? Because fetching a word (a word is an ordered set of bytes or bits, the normal unit in which information is stored) from main memory to the CPU means going through the bus (a set of wires). To solve this problem, fast memory devices are connected directly to the CPU. One such storage is the register file (very fast), so instructions can be present inside the CPU before execution.

This lets the CPU fetch and execute them quickly. The problem is that registers are very small: they cannot store an entire program, only a few instructions.

So a new device was invented: the cache. A cache is a high-speed memory that is not as costly as registers but is faster than main memory.

Typical cache levels:
  1. L1 cache is generally around 16–128 KiB and takes a couple of processor cycles to access.

  2. L2 cache usually has a capacity of 256 KiB to a couple of MiB and takes low tens of cycles to access.

  3. L3 cache is usually high single-digit MiB to tens of MiB, is usually shared by all cores, and takes somewhere between 20 and 100 cycles to access.
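The bookkeeping a cache performs can be sketched in a few lines. The following is a minimal, illustrative simulation of a direct-mapped cache (one simple organization; real CPU caches are usually set-associative), counting hits and misses for a stream of addresses. The line count and block size are arbitrary assumptions for the example:

```python
def simulate_direct_mapped(addresses, num_lines=4, block_size=4):
    """Count hits and misses for a toy direct-mapped cache.

    Each cache line holds one block of `block_size` consecutive addresses;
    block number b maps to line (b % num_lines), and the stored tag
    identifies which block currently occupies that line.
    """
    lines = [None] * num_lines  # tag of the block cached in each line
    hits = misses = 0
    for addr in addresses:
        block = addr // block_size
        index = block % num_lines
        tag = block // num_lines
        if lines[index] == tag:
            hits += 1            # block already cached: fast access
        else:
            misses += 1          # fetch block from "main memory"
            lines[index] = tag
    return hits, misses

# Sequential access to 16 addresses: each 4-address block misses once,
# then hits three times.
print(simulate_direct_mapped(list(range(16))))
```

Sequential access patterns like this are cache-friendly: only the first touch of each block pays the cost of going to main memory.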

In the past, L1, L2 and L3 caches have been created using combined processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. That’s why the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2 and L3 cache.

In addition to instruction and data caches, other caches are designed to provide specialized system functions. According to some definitions, the L3 cache’s shared design makes it a specialized cache. Other definitions keep instruction caching and data caching separate, and refer to each as a specialized cache.

The ability of cache memory to improve a computer’s performance relies on the concept of locality of reference.

Locality describes various situations that make a system more predictable, such as where the same storage location is repeatedly accessed, creating a pattern of memory access that the cache memory relies upon.

There are several types of locality. Two key ones for cache are temporal and spatial.

Temporal locality is when the same resources are accessed repeatedly in a short amount of time. Spatial locality refers to accessing various data or resources that are in close proximity to each other.
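Spatial locality is why the order in which code walks over data matters. As a rough illustration (Python hides the real memory layout, but the access pattern is the same one that matters on hardware), compare traversing a 2-D structure row by row with jumping between rows on every step:

```python
# Spatial locality demo: two traversal orders over the same matrix.
# Row-major traversal touches consecutive elements of one row before
# moving on -- the cache-friendly pattern. Column-major traversal jumps
# to a different row on every access.
def sum_row_major(matrix):
    total = 0
    for row in matrix:                 # consecutive elements of one row
        for value in row:
            total += value
    return total

def sum_column_major(matrix):
    total = 0
    for col in range(len(matrix[0])):  # jumps between rows each step
        for row in matrix:
            total += row[col]
    return total
```

Both functions compute the same sum, but on large arrays in languages with flat memory layouts (C, NumPy arrays, etc.), the row-major version is typically much faster because each cache line fetched from memory is fully used before being evicted.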

DRAM is usually about half as fast as L1, L2 or L3 cache memory, and much less expensive. It provides faster data access than flash storage, hard disk drives (HDDs) and tape storage. It came into use in the last few decades to provide a place to store frequently accessed disk data to improve I/O performance.

With virtual memory, the OS temporarily transfers inactive data from RAM to disk storage. This approach increases the virtual address space by combining active memory in RAM and inactive memory on HDDs to form contiguous addresses that hold both an application and its data.

Virtual memory lets a computer run larger programs or multiple programs simultaneously, and each program operates as though it has unlimited memory.


