New Compression Algorithm Doubles Mobile RAM Size

Memory is a scarce resource in embedded systems and mobile devices. Adding physical memory often increases packaging and cooling costs, device size, and energy consumption.

To save memory in disk-less embedded systems without modifying applications or hardware, a joint venture between NEC Labs and Northwestern University's Robert R. McCormick School of Engineering and Applied Science designed "CRAMES".

Short for "compressed RAM for embedded systems", CRAMES takes advantage of an operating system's virtual memory infrastructure by storing swapped-out pages in compressed format. It dynamically adjusts the size of the compressed RAM area, sparing applications that can run without it any performance or energy consumption penalty. In addition to compressing working data sets, CRAMES also enables efficient in-RAM filesystem compression, further increasing effective RAM capacity.
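The swap-based mechanism can be sketched in a few lines: evicted pages are compressed into a reserved in-RAM store and decompressed on demand. This is a toy model only; the class and method names are invented for illustration, and zlib stands in for whatever compressor the real system uses.

```python
import zlib

PAGE_SIZE = 4096  # typical page size on many embedded Linux targets

class CompressedSwap:
    """Toy model of a compressed in-RAM swap area.

    Illustration only -- not CRAMES's actual implementation, which
    operates inside the operating system's virtual memory layer.
    """

    def __init__(self):
        self.store = {}  # page number -> compressed bytes

    def swap_out(self, page_no: int, page: bytes) -> int:
        """Compress an evicted page, keep it in RAM; return bytes used."""
        assert len(page) == PAGE_SIZE
        blob = zlib.compress(page)
        self.store[page_no] = blob
        return len(blob)

    def swap_in(self, page_no: int) -> bytes:
        """Decompress a page back into the working area on a page fault."""
        return zlib.decompress(self.store.pop(page_no))

swap = CompressedSwap()
page = (b"heap data pattern " * 256)[:PAGE_SIZE]  # compressible working data
used = swap.swap_out(7, page)
restored = swap.swap_in(7)
```

The saving comes from `used` being much smaller than `PAGE_SIZE` for typical in-memory data, which is highly regular and compresses well.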

To minimize the performance penalty, CRAMES's creators designed a new software-based online memory compression algorithm for embedded systems, along with a method for adaptively managing the uncompressed and compressed memory regions during application execution. Compared with algorithms commonly used in online memory compression, the new algorithm achieves a competitive compression ratio but is twice as fast.
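The speed-versus-ratio tradeoff that online memory compression must navigate can be illustrated by running a general-purpose compressor at different effort levels. Note that zlib here is only a stand-in for demonstration; it is not the new algorithm the researchers designed.

```python
import time
import zlib

# A synthetic, fairly regular 4 KB "page" of working data
page = (b"pointer 0x00400000 value 0 " * 160)[:4096]

# Lower effort levels trade compression ratio for speed -- the axis
# along which an online memory compressor must sit near the fast end.
for level in (1, 6, 9):
    t0 = time.perf_counter()
    for _ in range(1000):
        blob = zlib.compress(page, level)
    dt = time.perf_counter() - t0
    ratio = len(page) / len(blob)
    print(f"level {level}: ratio {ratio:.2f}, {dt * 1e3:.1f} ms per 1000 pages")
```

Because every page fault pays the decompression cost, an online algorithm that halves compression time while holding the ratio roughly constant directly reduces the system's runtime overhead.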

Experimental results indicate that the memory available to applications can be increased by 150%, allowing the execution of applications with larger working data sets, or allowing existing applications to run with less physical memory. Execution time and energy consumption for a broad range of applications increase only slightly.
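A back-of-the-envelope model shows how a compressed region raises effective capacity: pages in the plain region count once, while pages in the compressed region count roughly by the compression ratio. The function and the specific numbers below are hypothetical illustrations, not figures from the researchers' experiments.

```python
def effective_memory(ram_mb: float, compressed_frac: float, ratio: float) -> float:
    """Effective capacity when a fraction of RAM holds compressed pages.

    Simplified illustrative model (ignores metadata and fragmentation
    overheads), not the researchers' analysis.
    """
    plain = ram_mb * (1 - compressed_frac)          # uncompressed region
    compressed = ram_mb * compressed_frac * ratio   # compressed region
    return plain + compressed

# One hypothetical combination reaching a 150% gain: 32 MB of RAM with
# three quarters devoted to compressed storage at a 3:1 ratio yields
# 8 + 72 = 80 MB effective, i.e. 2.5x the physical capacity.
result = effective_memory(32, 0.75, 3)
print(result)
```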

To sum it up in less technical terms, CRAMES is a new system that dynamically compresses and decompresses data stored in RAM so that devices can run larger programs.
