What is Cache memory?
Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM).
This memory is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU.
The purpose of cache memory is to store program instructions and data that are used repeatedly in the operation of programs, or information that the CPU is likely to need next. The processor can access this information quickly from the cache rather than having to get it from the computer's main memory. Fast access to these instructions increases the overall speed of the program.
Cache memory is an intermediate form of storage between the registers (located inside the processor and directly accessed by the CPU) and the RAM.
Cache memory can be primary or secondary cache memory, with primary cache memory directly integrated into (or closest to) the processor. In addition to hardware-based cache, cache memory can also be a disk cache, where a reserved portion of a disk stores and provides access to frequently accessed data and applications from the disk.
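To see this effect yourself, here is a minimal C sketch (not part of the original explanation; the array size and stride are arbitrary illustration values) that sums the same array twice: once sequentially, where each fetched cache line is fully reused, and once with a large stride, where almost every access misses the cache. On most machines the strided pass is noticeably slower even though it performs the same number of additions.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      (64 * 1024 * 1024)  /* 64 Mi ints, far larger than any CPU cache */
#define STRIDE 4096                /* jump far enough to defeat spatial locality */

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < N; i++) a[i] = 1;

    long sum = 0;

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)          /* sequential: each cache line is reused */
        sum += a[i];
    clock_t t1 = clock();

    for (long s = 0; s < STRIDE; s++)     /* strided: a new cache line on almost every access */
        for (long i = s; i < N; i += STRIDE)
            sum += a[i];
    clock_t t2 = clock();

    printf("sequential: %.3f s, strided: %.3f s (sum = %ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    free(a);
    return 0;
}

Both loops read every element exactly once; only the access pattern differs, so any timing gap comes from cache behaviour rather than from the amount of work.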
The functions of cache memory are:
The CPU uses the cache memory to store instructions and data that are repeatedly required during the execution of programs, thus improving the performance and speed of the whole system.
It also avoids the need to access the dynamic RAM (main memory) repeatedly to retrieve the same data. If you already know about the memory hierarchy, the following will be easier to understand:
In the memory hierarchy system, cache memory is placed between the CPU and main memory.
The cache organization is concerned with the transfer of information between main memory and the CPU. If you don't know about the memory hierarchy, this will help: the memory hierarchy system consists of all the storage devices employed in a computer system, from slow but high-capacity auxiliary memory, to relatively faster main memory, to even faster cache memory accessible to the high-speed processing logic.
CPU logic is faster than main memory access time, so the technique used to compensate for this mismatch in operating speed is to place an extremely fast, small cache between the CPU and main memory, whose access time is close to the processor logic cycle time.
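To make the idea of a small, fast cache sitting between the CPU and main memory more concrete, here is a minimal, hypothetical C sketch (not from any particular CPU; the line size, cache size, and addresses are arbitrary example values) showing how a direct-mapped cache decides whether an access is a hit or a miss.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64                        /* bytes per cache line          */
#define NUM_LINES 512                       /* 512 * 64 B = 32 KiB cache     */

typedef struct {
    bool     valid;
    uint64_t tag;
} CacheLine;

static CacheLine cache[NUM_LINES];

/* Returns true on a hit; on a miss the line is filled so that a repeated
 * access to the same address later hits (temporal locality). */
static bool access_cache(uint64_t addr) {
    uint64_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint64_t tag   = (addr / LINE_SIZE) / NUM_LINES;
    if (cache[index].valid && cache[index].tag == tag)
        return true;                        /* hit: data served from the cache */
    cache[index].valid = true;              /* miss: fetch the line from memory */
    cache[index].tag   = tag;
    return false;
}

int main(void) {
    uint64_t addrs[] = { 0x1000, 0x1004, 0x1040, 0x9000, 0x1000 };
    for (int i = 0; i < 5; i++)
        printf("0x%llx -> %s\n", (unsigned long long)addrs[i],
               access_cache(addrs[i]) ? "hit" : "miss");
    return 0;
}

The second access (0x1004) hits because it falls in the same 64-byte line as the first, while 0x9000 maps to the same index as 0x1000 with a different tag and evicts it, so the final access misses again.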
If you want to understand this in more depth, read on: the CPU is faster than main memory, and main memory is faster than auxiliary (secondary) memory. If the CPU wants to execute something, it first has to read it from main memory, and only then can it execute.
The CPU is able to execute instructions very fast, but main memory cannot supply them at that rate, so the speed of the CPU is always limited by the speed of main memory. Why this delay? Because to bring a word (a word is an ordered set of bytes or bits that is the normal unit in which information may be stored) from main memory to the CPU, it has to travel over the bus (a bus is a set of wires). To solve this problem, we use fast memory devices that are directly connected to the CPU. One such storage is the register (very fast), so instructions can already be present inside the CPU before execution.
So the CPU can fetch and execute them quickly. But the problem here is that registers are too small: they cannot store an entire program, only a few instructions.
So a new device, called the cache, was invented. The cache is a high-speed memory that is not as costly as registers but is much faster than main memory.
L1 cache is generally around 16–128 KiB and takes a couple of processor cycles to access.
L2 cache usually has a capacity of 256 KiB to a couple of MiB and takes low tens of cycles to access.
L3 cache is usually high single-digit MiB to tens of MiB, is usually shared by all cores, and takes somewhere in the range of 20 to 100 cycles to access.
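If you want to check the cache sizes on your own machine, the small sketch below uses the _SC_LEVEL*_CACHE_SIZE names, which are glibc extensions to sysconf on Linux; on other systems these names may not exist, or sysconf may return -1 or 0.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Each call returns the size in bytes, or -1/0 if unknown on this system. */
    printf("L1 data cache: %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
    printf("L2 cache:      %ld bytes\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
    printf("L3 cache:      %ld bytes\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
    printf("Cache line:    %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
    return 0;
}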