CSC139 - Homework 3: Virtual Memory Manager (VMM)

General rules: Create your homework, specifications, and any other text with a common document-creation tool such as Microsoft® Word. Program in C, C++, or Java. Refer to the web or the lecture notes for this class to design, implement, and debug solid software solutions. Be complete and precise.

Summary: Design, implement, test, debug, and document a simulator for a demand-paged Virtual Memory Manager named Vmm. Vmm assumes a 32-bit, byte-addressable architecture with 4 kB-sized, 4 kB-aligned page frames. Run your program with your own, well-thought-out inputs, and also with the inputs mentioned here. Submit only the inputs and outputs of your test runs, not the program sources and not the traces you used for your own debugging!

Abstract: Simulate a Virtual Memory Manager named Vmm that reads memory requests (AKA loads and stores) in textual form from stdin and simulates them. Vmm measures the number of CPU cycles of every simulated action. The cycle costs for memory accesses are defined via fixed constants; they do not vary dynamically as they generally would in a real run-time system. Implement Vmm with three levels of 32-bit address mapping, using 10, 10, and 12 bits of each logical address. Except for the Page Directory (PD), which is implemented as a dedicated 4 kB cache, Vmm operates without caches. Accesses through the PD cost 1 cycle each.

Track the exact number of cycles for each memory access and for all memory accesses combined; also compute the number of cycles that would have been needed if no virtual memory mapping had been available. Track the number of page swap-ins, swap-outs, and all other interesting demand-paging activities. Define the cost of any swap operation to be fixed at 5000 cycles per swap. In reality this cost would be higher and would generally vary from one disk access to another.
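One way to organize this bookkeeping is sketched in C below; the constant and counter names, and the assumption that a plain memory access also costs a fixed one cycle, are illustrative choices, not part of the assignment.

/* Illustrative cycle-cost constants and counters; names and the 1-cycle
 * memory-access cost are assumptions, not prescribed by the assignment. */
#include <stdint.h>

#define CYCLES_PD_ACCESS      1      /* access through the PD cache (given)     */
#define CYCLES_MEM_ACCESS     1      /* plain memory access (assumed constant)  */
#define CYCLES_PER_SWAP       5000   /* fixed cost per swap-in or swap-out      */
#define CYCLES_VICTIM_SELECT  10     /* fixed cost to pick a victim page        */

static uint64_t total_cycles  = 0;   /* cycles with Vmm mapping                 */
static uint64_t cycles_no_vmm = 0;   /* cycles if no virtual memory existed     */
static uint64_t swap_ins      = 0;
static uint64_t swap_outs     = 0;

/* One mapped access without a fault: PD lookup, PT lookup, then the data access. */
static void account_mapped_access(void)
{
    total_cycles  += CYCLES_PD_ACCESS + 2 * CYCLES_MEM_ACCESS;
    cycles_no_vmm += CYCLES_MEM_ACCESS;
}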

Output a text file, solely for your own education and to help you debug your code, that shows the state of the memory subsystem at each swap-in and swap-out. This file can become large when the system is thrashing. Make sure you do run some thrashing case. Do not turn in these files.

Detailed Requirements: Simulate Vmm for a byte-addressable, 4-bytes-per-word architecture with 32-bit addresses, using demand paging. A page is 4 kB long and 4 kB aligned. Also write a separate address-generating program to produce your simulation inputs. Load and store requests (in ASCII text form) are read by the simulator's front end (FE) and "executed" by the simulator.
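For the address-generating program, a minimal C sketch is shown below; the frame count, access count, address range, and read/write mix are arbitrary example choices, not requirements.

/* Minimal address generator sketch: emits "p", "r" and "w" tuples on stdout.
 * The frame count, access count, and address range are arbitrary examples. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));
    printf("p 9\n");                                   /* 9 physical page frames */
    for (int i = 0; i < 1000; i++) {
        /* random page 0..63, random byte offset 0..4095 */
        unsigned addr = ((unsigned)(rand() % 64) << 12) | (unsigned)(rand() % 4096);
        if (rand() % 2)
            printf("w %u %d\n", addr, rand() % 100);   /* store some value */
        else
            printf("r %u\n", addr);                    /* load             */
    }
    return 0;
}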

To start a simulation run, the first input tuple, p, specifies the number of page frames available; e.g. the input p 9 indicates that 9 physical page frames, and no more, are available for this run. Whenever the data space needed to execute your simulated program is larger than the number of available physical frames (as in the p 9 case), some resident page will have to be swapped out. This page is called the victim page. Careful: if the victim page is unmodified and already on disk, it can simply be overwritten; there is no need to swap it back to disk. Only user pages are swapped out, not Page Tables. The PD, being implemented as a HW cache, can never be swapped out.
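A minimal sketch of this fault-handling path in C, reusing the constants and counters from the earlier sketch, is shown below; the helper names (find_free_frame, pick_victim, page_in_frame, is_dirty, swap_out, swap_in) are hypothetical placeholders, not part of the assignment.

/* Sketch of the fault path when a requested page is not resident.
 * All helper names are hypothetical placeholders. */
int handle_page_fault(unsigned vpage)
{
    int frame = find_free_frame();
    if (frame < 0) {                         /* no free frame: pick a victim      */
        frame = pick_victim();               /* replacement policy (see below)    */
        total_cycles += CYCLES_VICTIM_SELECT;
        int victim = page_in_frame(frame);   /* page currently in that frame      */
        if (is_dirty(victim)) {              /* only modified pages go to disk    */
            swap_out(victim);
            total_cycles += CYCLES_PER_SWAP;
            swap_outs++;
        }                                    /* clean victim: simply overwrite it */
    }
    swap_in(vpage, frame);                   /* bring the requested page in       */
    total_cycles += CYCLES_PER_SWAP;
    swap_ins++;
    return frame;
}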

Use three levels of address mapping. Each logical 32-bit address is interpreted as a 10+10+12 bit string as follows:

PD: The leftmost 10 bits of a logical address are the index into the Page Directory (PD). The PD is a 4 kB HW cache with space for 1024 pointers to Page Tables (PT). A PD entry is 4 bytes (32 bits) long, sufficient to hold the frame-aligned address of a PT. The remaining 12 bits of a PD entry are usable for administrative detail about the PT; you will need some of these bits.

PT: The next 10 bits of a logical address are the index into the Page Table (PT). A PT entry is 4 bytes (32 bits) long, sufficient to hold the address of a user page. The remaining 12 bits are usable for further detail about the user page.

User Page: Finally, the rightmost 12 bits of any logical address are the byte-offset into a user page. The start address of a user page is pointed to by the appropriate PT entry.
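A minimal C sketch of this 10+10+12 decomposition is shown below; the macro names are illustrative, not prescribed.

/* Split a 32-bit logical address into PD index, PT index, and page offset.
 * Macro names are illustrative, not prescribed. */
#include <stdio.h>
#include <stdint.h>

#define PD_INDEX(a)     (((a) >> 22) & 0x3FFu)   /* leftmost 10 bits  */
#define PT_INDEX(a)     (((a) >> 12) & 0x3FFu)   /* next 10 bits      */
#define PAGE_OFFSET(a)  ((a) & 0xFFFu)           /* rightmost 12 bits */

int main(void)
{
    uint32_t addr = 0x00403A10u;   /* example address */
    /* Prints: PD index 1, PT index 3, offset 0xa10 */
    printf("PD index %u, PT index %u, offset 0x%x\n",
           PD_INDEX(addr), PT_INDEX(addr), PAGE_OFFSET(addr));
    return 0;
}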

Replacement: If a page must be swapped out, use a Random Replacement Policy. Define some fixed, arbitrary, small number of (simulated) machine cycles for selecting the victim page; e.g. 10 cycles would be an acceptable constant. This is a simplification compared with a real demand-paged system, which must identify a victim page, sometimes at great cost.
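A minimal sketch of the random victim selection, assuming resident user frames are tracked in a simple array, is shown below; all names are illustrative and match the pick_victim helper used in the fault-handling sketch above.

/* Random replacement sketch: pick any resident user frame as the victim.
 * Assumes user_frames[0..num_resident-1] holds the frames currently
 * occupied by user pages and that num_resident > 0 when called. */
#include <stdlib.h>

static int user_frames[1024];   /* frame numbers currently holding user pages */
static int num_resident = 0;

static int pick_victim(void)
{
    return user_frames[rand() % num_resident];  /* uniform random choice */
}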

Note that only 20 address bits need to be stored in each of the 1024 PD and PT entries. While logical addresses are 32 bits long, all frames are aligned on 4 kB boundaries, so the rightmost 12 bits of any page address are always 0; there is no need to store them! Instead, the remaining 12 bits of an entry can be used for: the P bit (present); the D bit (dirty, set when written, clear initially; AKA the M bit, for modified); an R bit for read-only pages; an S bit for a shared page; and other data about the page that interests an Operating System. Perhaps a few bits even remain unused.
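One possible packing of such a 32-bit entry is sketched below; the exact bit positions and flag names are a design choice, not mandated by the assignment.

/* One possible layout of a 32-bit PD/PT entry: the upper 20 bits hold the
 * frame-aligned address (low 12 bits implicitly zero), the low 12 bits hold
 * status flags. Bit positions are a design choice, not mandated. */
#include <stdint.h>

#define PTE_FRAME_MASK  0xFFFFF000u   /* 20-bit frame-aligned address   */
#define PTE_PRESENT     0x001u        /* P: page is resident            */
#define PTE_DIRTY       0x002u        /* D/M: written since swap-in     */
#define PTE_READONLY    0x004u        /* R: read-only page              */
#define PTE_SHARED      0x008u        /* S: shared page                 */

static inline uint32_t pte_make(uint32_t frame_addr, uint32_t flags)
{
    return (frame_addr & PTE_FRAME_MASK) | (flags & 0xFFFu);
}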

Summary Output after completion of a simulation:

 

* * * Paging Activity Statistics * * * 

 number of memory accesses       = xx

 number of triples (1 + access)  = xx

 number of swap ins (faults)     = xx

 number of swap outs             = xx

 total number of pages malloced  = xx

 number of pages for Page Tables = xx

 number of page frames for user  = xx

 total memory cycles             = xx

 cycles w/o Vmm                  = xx

 cycles per swap_in              = 5000

 cycles per swap_out             = 5000

 last working set size           = xx

 max working set size ever       = xx

 max physical pages              = xx

 page size                       = 4096

 replacement algorithm           = random

    

Required Input and Resulting Output: The tuple p x at the beginning of the input specifies the maximum number x of physical page frames; the example above showed p 9. Other inputs to the simulator: the tuple w x y defines that the value y will be stored (i.e. written) at address x; in your simulation you may ignore the value y actually being stored. The tuple r x defines that memory address x is loaded (AKA read) into some register. Simulate and track the timing for each memory access.
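A minimal C sketch of a front end that parses these p / r / w tuples is shown below; error handling is simplified, and simulate_access is a hypothetical hook into the rest of the simulator.

/* Front-end sketch: read p/r/w tuples from stdin (error handling simplified). */
#include <stdio.h>

int main(void)
{
    char op;
    unsigned long addr, value;
    int max_frames = 0;

    while (scanf(" %c", &op) == 1) {
        if (op == 'p' || op == 'P') {
            scanf("%d", &max_frames);            /* number of physical frames */
        } else if (op == 'r' || op == 'R') {
            scanf("%lu", &addr);
            /* simulate_access(addr, 0); */      /* load (hypothetical hook)  */
        } else if (op == 'w' || op == 'W') {
            scanf("%lu %lu", &addr, &value);     /* the value may be ignored  */
            /* simulate_access(addr, 1); */      /* store (hypothetical hook) */
        }
    }
    return 0;
}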
