Tuesday, July 13, 2010

Memory Management

The memory manager implements virtual memory and provides a core set of services such as memory-mapped files, copy-on-write memory, large memory support, and underlying support for the cache manager.
About Memory Management
Each process on 32-bit Microsoft Windows has its own virtual address space that enables addressing up to 4 gigabytes of memory. Each process on 64-bit Windows has a virtual address space of 8 terabytes. All threads of a process can access its virtual address space. However, threads cannot access memory that belongs to another process, which protects a process from being corrupted by another process.
For information on the virtual address space and the memory management functions, see the following topics.
• Virtual Address Space
• Memory Pools
• Memory Performance Information
• Virtual Memory Functions
• Heap Functions
• File Mapping
• Large Memory Support
• Global and Local Functions
• Standard C Library Functions
• Comparing Memory Allocation Methods

Virtual Address Space
The virtual addresses that a process uses do not represent the actual physical location of an object in memory. Instead, the system maintains a page map for each process, which is an internal data structure used to translate virtual addresses into corresponding physical addresses. Each time a thread references an address, the system translates the virtual address to a physical address.
For more information about virtual memory, see the following topics:
• Virtual Address Space and Physical Storage
• Working Set
• Page State
• Scope of Allocated Memory
• Data Execution Prevention
• Memory Protection
• Memory Limits for Windows Releases

Virtual Address Space and Physical Storage
The maximum amount of physical memory supported by Microsoft Windows ranges from 2 GB to 1 TB, depending on the version of Windows. For more information, see Memory Limits for Windows Releases. The virtual address space of each process can be smaller or larger than the total physical memory available on the computer. The subset of the virtual address space of a process that resides in physical memory is known as the working set. If the threads of a process attempt to use more physical memory than is currently available, the system pages some of the memory contents to disk. The total amount of virtual address space available to a process is limited by physical memory and the free space on disk available for the paging file.
Physical storage and the virtual address space of each process are organized into pages, which are units of memory whose size depends on the host computer. For example, on x86 computers the host page size is 4 kilobytes.
To maximize its flexibility in managing memory, the system can move pages of physical memory to and from a paging file on disk. When a page is moved in physical memory, the system updates the page maps of the affected processes. When the system needs space in physical memory, it moves the least recently used pages of physical memory to the paging file. Manipulation of physical memory by the system is completely transparent to applications, which operate only in their virtual address spaces.
Working Set
The working set of a process is the set of pages in the virtual address space of the process that are currently resident in physical memory. Memory allocations that are nonpageable, such as Address Windowing Extensions (AWE) or large page allocations, are not included in the working set.
When a process references a page that is not part of its working set, a page fault occurs. The system page fault handler attempts to resolve the page fault and, if it succeeds, the page is added to the working set. (Accessing AWE or large page allocations never causes a page fault, because these allocations are not in the working set.)
A hard page fault must be resolved by reading page contents from the page's backing store, which is either the system paging file or a memory-mapped file created by the process. A soft page fault can be resolved without accessing the backing store. A soft page fault occurs when:
• The page is in the working set of some other process, so it is already resident in memory.
• The page is in transition, because it either has been removed from the working sets of all processes that were using the page and has not yet been repurposed, or it is already resident as a result of a memory manager prefetch operation.
• A process references an allocated virtual page for the first time (sometimes called a demand-zero fault).
Pages can be removed from a process working set as a result of the following actions:
• The process reduces or empties the working set by calling the SetProcessWorkingSetSize, SetProcessWorkingSetSizeEx, or EmptyWorkingSet function.
• The process calls the VirtualUnlock function on a memory range that is not locked.
• The memory manager trims pages from the working set to create more available memory.
• The memory manager must remove a page from the working set to make room for a new page (for example, because the working set is at its maximum size).
If several processes share a page, removing the page from the working set of one process does not affect other processes. After a page is removed from the working sets of all processes that were using it, the page becomes a transition page. Transition pages remain cached in RAM until the page is either referenced again by some process or repurposed (for example, filled with zeros and given to another process). If a transition page has been modified since it was last written to disk (that is, if the page is "dirty"), then the page must be written to its backing store before it can be repurposed. The system may start writing dirty transition pages to their backing store as soon as such pages become available.
Each process has a minimum and maximum working set size that affect the virtual memory paging behavior of the process. To obtain the current size of the working set of a specified process, use the GetProcessMemoryInfo function. To obtain or change the minimum and maximum working set sizes, use the GetProcessWorkingSetSizeEx and SetProcessWorkingSetSizeEx functions.
The process status application programming interface (PSAPI) provides a number of functions that return detailed information about the working set of a process. For details, see Working Set Information.
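
The following is a minimal sketch (C++ against the Win32 and PSAPI headers; link with psapi.lib) of reading the working set counters of the current process with GetProcessMemoryInfo. The counters printed are only a sample of what the PROCESS_MEMORY_COUNTERS structure provides.

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib
#include <stdio.h>

int main()
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };

    // GetCurrentProcess returns a pseudo handle; it does not need CloseHandle.
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        printf("Working set size:      %lu bytes\n", (unsigned long)pmc.WorkingSetSize);
        printf("Peak working set size: %lu bytes\n", (unsigned long)pmc.PeakWorkingSetSize);
        printf("Page fault count:      %lu\n", pmc.PageFaultCount);
    }
    return 0;
}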

Page State
The pages of a process's virtual address space can be in one of the following states.
Free: The page is neither committed nor reserved. The page is not accessible to the process. It is available to be reserved, committed, or simultaneously reserved and committed. Attempting to read from or write to a free page results in an access violation exception.
A process can use the VirtualFree or VirtualFreeEx function to release reserved or committed pages of its address space, returning them to the free state.
Reserved: The page has been reserved for future use. The range of addresses cannot be used by other allocation functions. The page is not accessible and has no physical storage associated with it. It is available to be committed.
A process can use the VirtualAlloc or VirtualAllocEx function to reserve pages of its address space and later to commit the reserved pages. It can use VirtualFree or VirtualFreeEx to decommit committed pages and return them to the reserved state.
Committed: Physical storage is allocated for a reserved page, and access is controlled by one of the memory protection constants. The system initializes and loads each committed page into physical memory only during the first attempt to read or write to that page. When the process terminates, the system releases the storage for committed pages.
A process can use VirtualAlloc or VirtualAllocEx to commit physical pages from a reserved region; these functions can also simultaneously reserve and commit pages.
The GlobalAlloc and LocalAlloc functions allocate committed pages with read/write access.



Scope of Allocated Memory
All memory a process allocates by using the memory allocation functions (HeapAlloc, VirtualAlloc, GlobalAlloc, or LocalAlloc) is accessible only to that process. However, memory allocated by a DLL is allocated in the address space of the process that called the DLL and is not accessible to other processes using the same DLL. To create shared memory, you must use file mapping.
Named file mapping provides an easy way to create a block of shared memory. A process can specify a name when it uses the CreateFileMapping function to create a file-mapping object. Other processes can specify the same name to either the CreateFileMapping or OpenFileMapping function to obtain a handle to the mapping object.
Each process specifies its handle to the file-mapping object in the MapViewOfFile function to map a view of the file into its own address space. The views of all processes for a single file-mapping object are mapped into the same sharable pages of physical storage. However, the virtual addresses of the mapped views can vary from one process to another, unless the MapViewOfFileEx function is used to map the view at a specified address. Although sharable, the pages of physical storage used for a mapped file view are not global; they are not accessible to processes that have not mapped a view of the file.
Any pages committed by mapping a view of a file are released when the last process with a view of the mapping object either terminates or unmaps its view by calling the UnmapViewOfFile function. At this time, the specified file (if any) associated with the mapping object is updated. A specified file can also be forced to update by calling the FlushViewOfFile function.
For more information, see File Mapping. For an example of shared memory in a DLL, see Using Shared Memory in a Dynamic-Link Library.
If multiple processes have write access to shared memory, you must synchronize access to the memory. For more information, see Synchronization.
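
As a sketch of the named file mapping approach described above (C++; the object name "Local\\MySharedBlock" and the 4 KB size are arbitrary choices for illustration), one process might create and map a paging-file-backed shared block like this:

#include <windows.h>

int main()
{
    HANDLE hMapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE,   // back the mapping with the paging file
        NULL,                   // default security
        PAGE_READWRITE,
        0, 4096,                // maximum size: high and low DWORDs
        L"Local\\MySharedBlock");
    if (hMapping == NULL)
        return 1;

    // A cooperating process would call OpenFileMapping with the same name,
    // then MapViewOfFile, to see the same physical pages.
    void* pView = MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (pView != NULL)
    {
        CopyMemory(pView, "hello", 6);   // visible to every mapped view
        UnmapViewOfFile(pView);
    }
    CloseHandle(hMapping);
    return 0;
}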

Data Execution Prevention
Data Execution Prevention (DEP) is a system-level memory protection feature that is built into the operating system starting with Windows XP and Windows Server 2003. DEP enables the system to mark one or more pages of memory as non-executable. Marking memory regions as non-executable means that code cannot be run from that region of memory, which makes it more difficult to exploit buffer overruns.
DEP prevents code from being run from data pages such as the default heap, stacks, and memory pools. If an application attempts to run code from a data page that is protected, a memory access violation exception occurs, and if the exception is not handled, the calling process is terminated.
DEP is not intended to be a comprehensive defense against all exploits; it is intended to be another tool that you can use to secure your application.
How Data Execution Prevention Works
If an application attempts to run code from a protected page, the application receives an exception with the status code STATUS_ACCESS_VIOLATION. If your application must run code from a memory page, it must allocate and set the proper virtual memory protection attributes. The allocated memory must be marked PAGE_EXECUTE, PAGE_EXECUTE_READ, PAGE_EXECUTE_READWRITE, or PAGE_EXECUTE_WRITECOPY when allocating memory. Heap allocations made by calling the malloc and HeapAlloc functions are non-executable.
Applications cannot run code from the default process heap or the stack.
DEP is configured at system boot according to the no-execute page protection policy setting in the boot configuration data. An application can get the current policy setting by calling the GetSystemDEPPolicy function. Depending on the policy setting, an application can change the DEP setting for the current process by calling the SetProcessDEPPolicy function.
Programming Considerations
An application can use the VirtualAlloc function to allocate executable memory with the appropriate memory protection options. It is suggested that an application set, at a minimum, the PAGE_EXECUTE memory protection option. After the executable code is generated, it is recommended that the application set memory protections to disallow write access to the allocated memory. Applications can disallow write access to allocated memory by using the VirtualProtect function. Disallowing write access ensures maximum protection for executable regions of process address space. You should attempt to create applications that use the smallest executable address space possible, which minimizes the amount of memory that is exposed to memory exploitation.
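
A minimal sketch of that recommendation follows, assuming x86/x64 and using a single RET instruction (0xC3) as a placeholder for the "generated" code: allocate read/write memory, emit the code, then use VirtualProtect to make the region executable and non-writable before calling it.

#include <windows.h>

typedef void (*GeneratedFunc)(void);

int main()
{
    SIZE_T size = 4096;
    // Allocate read/write first; do not request execute access until it is needed.
    BYTE* code = (BYTE*)VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (code == NULL)
        return 1;

    code[0] = 0xC3;   // x86/x64 RET: the placeholder "generated" code

    // Make the region executable and remove write access, then flush the
    // instruction cache before running the generated code.
    DWORD oldProtect;
    if (VirtualProtect(code, size, PAGE_EXECUTE_READ, &oldProtect))
    {
        FlushInstructionCache(GetCurrentProcess(), code, size);
        ((GeneratedFunc)code)();
    }

    VirtualFree(code, 0, MEM_RELEASE);
    return 0;
}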
You should also attempt to control the layout of your application's virtual memory and create executable regions. These executable regions should be located in a lower memory space than non-executable regions. By locating executable regions below non-executable regions, you can help prevent a buffer overflow from overflowing into the executable area of memory.
Application Compatibility
Some application functionality is incompatible with DEP. Applications that perform dynamic code generation (such as Just-In-Time code generation) and do not explicitly mark generated code with execute permission may have compatibility issues on computers that are using DEP. Applications written to the Active Template Library (ATL) version 7.1 and earlier can attempt to execute code on pages marked as non-executable, which triggers an NX fault and terminates the application; for more information, see SetProcessDEPPolicy. Most applications that perform actions incompatible with DEP must be updated to function properly.
A small number of executable files and libraries may contain executable code in the data section of an image file. In some cases, applications may place small segments of code (commonly referred to as thunks) in the data sections. However, when the image file is loaded into memory, DEP marks its sections as non-executable unless a section has the executable attribute applied.
Therefore, executable code in data sections should be migrated to a code section, or the data section that contains the executable code should be explicitly marked as executable. The executable attribute, IMAGE_SCN_MEM_EXECUTE, should be added to the Characteristics field of the corresponding section header for sections that contain executable code. For more information about adding attributes to a section, see the documentation included with your linker.

Memory Protection
Memory that belongs to a process is implicitly protected by its private virtual address space. In addition, Windows provides memory protection by using the virtual memory hardware. The implementation of this protection varies with the processor; for example, code pages in the address space of a process can be marked read-only and protected from modification by user-mode threads.
For the complete list of attributes, see Memory Protection Constants.
Copy-on-Write Protection
Copy-on-write protection is an optimization that allows multiple processes to map their virtual address spaces such that they share a physical page until one of the processes modifies the page. This is part of a technique called lazy evaluation, which allows the system to conserve physical memory and time by not performing an operation until absolutely necessary.
For example, suppose two processes load pages from the same DLL into their virtual memory spaces. These virtual memory pages are mapped to the same physical memory pages for both processes. As long as neither process writes to these pages, they can map to and share the same physical pages.

If Process 1 writes to one of these pages, the contents of the physical page are copied to another physical page and the virtual memory map is updated for Process 1. Both processes now have their own instance of the page in physical memory. Therefore, it is not possible for one process to write to a shared physical page and for the other process to see the changes.

Loading Applications and DLLs
When multiple instances of the same Windows-based application are loaded, each instance is run in its own protected virtual address space. However, their instance handles (hInstance) typically have the same value. This value represents the base address of the application in its virtual address space. If each instance can be loaded into its default base address, it can map to and share the same physical pages with the other instances, using copy-on-write protection. The system allows these instances to share the same physical pages until one of them modifies a page. If for some reason one of these instances cannot be loaded in the desired base address, it receives its own physical pages.
DLLs are created with a default base address. Every process that uses a DLL will try to load the DLL within its own address space at the default virtual address for the DLL. If multiple applications can load a DLL at its default virtual address, they can share the same physical pages for the DLL. If for some reason a process cannot load the DLL at the default address, it loads the DLL elsewhere. Copy-on-write protection forces some of the DLL's pages to be copied into different physical pages for this process, because the fix-ups for jump instructions are written within the DLL's pages, and they will be different for this process. If the code section contains many references to the data section, this can cause the entire code section to be copied to new physical pages.


Memory Pools
The memory manager creates the following memory pools that the system uses to allocate memory: nonpaged pool and paged pool. Both memory pools are located in the region of the address space that is reserved for the system and mapped into the virtual address space of each process. The nonpaged pool consists of virtual memory addresses that are guaranteed to reside in physical memory as long as the corresponding kernel objects are allocated. The paged pool consists of virtual memory that can be paged in and out of the system. To improve performance, systems with a single processor have three paged pools, and multiprocessor systems have five paged pools.
The handles for kernel objects are stored in the paged pool, so the number of handles you can create is based on available memory.
The system records the limits and current values for its nonpaged pool, paged pool, and page file usage. For more information, see Memory Performance Information.
Memory Performance Information
Memory performance information is available from the memory manager through the system performance counters and through functions such as GetPerformanceInfo, GetProcessMemoryInfo, and GlobalMemoryStatusEx. Applications such as the Windows Task Manager, the Reliability and Performance Monitor, and the Process Explorer tool use performance counters to display memory information for the system and for individual processes.
This topic associates performance counters with the data returned by memory performance functions and the Windows Task Manager:
• System Memory Performance Information
• Process Memory Performance Information
System Memory Performance Information
The following table associates memory object performance counters with the data returned by the memory performance functions in the MEMORYSTATUSEX, PERFORMANCE_INFORMATION, and PROCESS_MEMORY_COUNTERS_EX structures, and with the corresponding information displayed by Task Manager.
Process Memory Performance Information
The following table associates process object performance counters with the data returned by the memory performance functions in the MEMORYSTATUSEX, PERFORMANCE_INFORMATION, and PROCESS_MEMORY_COUNTERS_EX structures, and with the corresponding information displayed by Task Manager.
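A minimal sketch of retrieving some of these figures programmatically with GlobalMemoryStatusEx and GetPerformanceInfo follows (C++; GetPerformanceInfo requires psapi.h and psapi.lib). The fields printed are only a small sample of the two structures.

#include <windows.h>
#include <psapi.h>   // GetPerformanceInfo; link with psapi.lib
#include <stdio.h>

int main()
{
    MEMORYSTATUSEX msx = { sizeof(msx) };
    if (GlobalMemoryStatusEx(&msx))
    {
        printf("Physical memory in use: %lu%%\n", msx.dwMemoryLoad);
        printf("Total physical memory:  %llu bytes\n", (unsigned long long)msx.ullTotalPhys);
        printf("Available physical:     %llu bytes\n", (unsigned long long)msx.ullAvailPhys);
    }

    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (GetPerformanceInfo(&pi, sizeof(pi)))
    {
        printf("Page size:              %lu bytes\n", (unsigned long)pi.PageSize);
        printf("Committed pages:        %lu\n", (unsigned long)pi.CommitTotal);
        printf("Nonpaged pool pages:    %lu\n", (unsigned long)pi.KernelNonpaged);
    }
    return 0;
}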
Virtual Memory Functions
The virtual memory functions enable a process to manipulate or determine the status of pages in its virtual address space. They can perform the following operations:
• Reserve a range of a process's virtual address space. Reserving address space does not allocate any physical storage, but it prevents other allocation operations from using the specified range. It does not affect the virtual address spaces of other processes. Reserving pages prevents needless consumption of physical storage, while enabling a process to reserve a range of its address space into which a dynamic data structure can grow. The process can allocate physical storage for this space, as needed.
• Commit a range of reserved pages in a process's virtual address space so that physical storage (either in RAM or on disk) is accessible only to the allocating process.
• Specify read/write, read-only, or no access for a range of committed pages. This differs from the standard allocation functions that always allocate pages with read/write access.
• Free a range of reserved pages, making the range of virtual addresses available for subsequent allocation operations by the calling process.
• Decommit a range of committed pages, releasing their physical storage and making it available for subsequent allocation by any process.
• Lock one or more pages of committed memory into physical memory (RAM) so that the system cannot swap the pages out to the paging file.
• Obtain information about a range of pages in the virtual address space of the calling process or a specified process.
• Change the access protection for a specified range of committed pages in the virtual address space of the calling process or a specified process.
For more information, see the following topics.
• Allocating Virtual Memory
• Freeing Virtual Memory
• Working With Pages
• Memory Management Functions

Allocating Virtual Memory
The virtual memory functions manipulate pages of memory. The functions use the size of a page on the current computer to round off specified sizes and addresses.
The VirtualAlloc function performs one of the following operations:
• Reserves one or more free pages.
• Commits one or more reserved pages.
• Reserves and commits one or more free pages.
You can specify the starting address of the pages to be reserved or committed, or you can allow the system to determine the address. The function rounds the specified address to the appropriate page boundary. Reserved pages are not accessible, but committed pages can be allocated with PAGE_READWRITE, PAGE_READONLY, or PAGE_NOACCESS access. When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page. You can use normal pointer references to access memory committed by the VirtualAlloc function.
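A minimal sketch of the reserve-then-commit pattern with VirtualAlloc follows; the 1 MB reservation and 64 KB commit are arbitrary sizes chosen for illustration.

#include <windows.h>

int main()
{
    // Reserve 1 MB of address space; no physical storage is allocated yet.
    SIZE_T reserveSize = 1024 * 1024;
    char* base = (char*)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL)
        return 1;

    // Commit the first 64 KB of the reservation with read/write access.
    SIZE_T commitSize = 64 * 1024;
    if (VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE) == NULL)
        return 1;

    // Committed memory is used through ordinary pointers; each page is
    // brought into physical memory on first access.
    base[0] = 'A';
    base[commitSize - 1] = 'Z';

    VirtualFree(base, 0, MEM_RELEASE);   // release the entire reservation
    return 0;
}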

Freeing Virtual Memory
The VirtualFree function decommits and releases pages according to the following rules:
• Decommits one or more committed pages, changing the state of the pages to reserved. Decommitting pages releases the physical storage associated with the pages, making it available to be allocated by any process. Any block of committed pages can be decommitted.
• Releases a block of one or more reserved pages, changing the state of the pages to free. Releasing a block of pages makes the range of reserved addresses available to be allocated by the process. Reserved pages can be released only by freeing the entire block that was initially reserved by VirtualAlloc.
• Decommits and releases a block of one or more committed pages simultaneously, changing the state of the pages to free. The specified block must include the entire block initially reserved by VirtualAlloc, and all of the pages must be currently committed.
After a memory block is released or decommitted, you can never refer to it again. Any information that may have been in that memory is gone forever. Attempting to read from or write to a free page results in an access violation exception. If you require information, do not decommit or free memory containing that information.
To specify that the data in a memory range is no longer of interest, call VirtualAlloc with MEM_RESET. The pages will not be read from or written to the paging file. However, the memory block can be used again later.
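
A minimal sketch that exercises these rules follows: MEM_RESET to discard page contents, MEM_DECOMMIT on a single page, and MEM_RELEASE on the whole block. The 256 KB size is an arbitrary choice for illustration.

#include <windows.h>

int main()
{
    SIZE_T size = 256 * 1024;
    char* p = (char*)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return 1;

    // Tell the memory manager the contents are no longer of interest; the
    // pages stay committed but will not be written to the paging file.
    VirtualAlloc(p, size, MEM_RESET, PAGE_READWRITE);

    // Decommit one page in the middle: it returns to the reserved state and
    // its physical storage becomes available to any process.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    VirtualFree(p + si.dwPageSize, si.dwPageSize, MEM_DECOMMIT);

    // Release the whole block: the original base address and a size of zero
    // free the entire range reserved by VirtualAlloc, making the pages free.
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}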

Working with Pages
To determine the size of a page on the current computer, use the GetSystemInfo function.
The VirtualQuery and VirtualQueryEx functions return information about a region of consecutive pages beginning at a specified address in the address space of a process. VirtualQuery returns information about memory in the calling process. VirtualQueryEx returns information about memory in a specified process and is used to support debuggers that need information about a process being debugged. The region of pages is bounded by the specified address rounded down to the nearest page boundary. It extends through all subsequent pages with the following attributes in common:
• The state of all pages is the same: either committed, reserved, or free.
• If the initial page is not free, all pages in the region are part of the same initial allocation of pages that were reserved by a call to VirtualAlloc.
• The access protection of all pages is the same (that is, PAGE_READONLY, PAGE_READWRITE, or PAGE_NOACCESS).
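A minimal sketch of using VirtualQuery in the way described above to walk the regions of the calling process's address space:

#include <windows.h>
#include <stdio.h>

int main()
{
    MEMORY_BASIC_INFORMATION mbi;
    char* address = NULL;

    // VirtualQuery fails (returns 0) once the address passes the end of the
    // user-mode address space, which ends the walk.
    while (VirtualQuery(address, &mbi, sizeof(mbi)) != 0)
    {
        const char* state = (mbi.State == MEM_COMMIT)  ? "committed" :
                            (mbi.State == MEM_RESERVE) ? "reserved"  : "free";
        printf("%p  %12llu bytes  %s\n",
               mbi.BaseAddress, (unsigned long long)mbi.RegionSize, state);

        address = (char*)mbi.BaseAddress + mbi.RegionSize;   // next region
    }
    return 0;
}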
The VirtualLock function enables a process to lock one or more pages of committed memory into physical memory (RAM), preventing the system from swapping the pages out to the paging file. It can be used to ensure that critical data is accessible without disk access. Locking pages into memory is dangerous because it restricts the system's ability to manage memory. Excessive use of VirtualLock can degrade system performance by causing executable code to be swapped out to the paging file. The VirtualUnlock function unlocks memory locked by VirtualLock.
The VirtualProtect function enables a process to modify the access protection of any committed page in the address space of a process. For example, a process can allocate read/write pages to store sensitive data, and then it can change the access to read only or no access to protect against accidental overwriting. VirtualProtect is typically used with pages allocated by VirtualAlloc, but it also works with pages committed by any of the other allocation functions. However, VirtualProtect changes the protection of entire pages, and pointers returned by the other functions are not necessarily aligned on page boundaries. The VirtualProtectEx function is similar to VirtualProtect, except it changes the protection of memory in a specified process. Changing the protection is useful to debuggers in accessing the memory of a process being debugged.
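A minimal sketch of the sensitive-data pattern described above: commit read/write pages, store the data, then use VirtualProtect to reduce the access to read-only.

#include <windows.h>

int main()
{
    SIZE_T size = 4096;
    char* secret = (char*)VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (secret == NULL)
        return 1;

    CopyMemory(secret, "sensitive configuration data", 29);

    // Lock the contents down; any later write raises an access violation.
    DWORD oldProtect;
    VirtualProtect(secret, size, PAGE_READONLY, &oldProtect);

    // ... read-only use of the data ...

    VirtualFree(secret, 0, MEM_RELEASE);
    return 0;
}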

Heap Functions
The heap functions enable a process to create a private heap, a block of one or more pages in the address space of the calling process. The process can then use a separate set of functions to manage the memory in that heap. There is no difference between memory allocated from a private heap and that allocated by using the other memory allocation functions. For a complete list of functions, see the table in Memory Management Functions.
The HeapCreate function creates a private heap object from which the calling process can allocate memory blocks by using the HeapAlloc function. HeapCreate specifies both an initial size and a maximum size for the heap. The initial size determines the number of committed, read/write pages initially allocated for the heap. The maximum size determines the total number of reserved pages. These pages create a contiguous block in the virtual address space of a process into which the heap can grow. Additional pages are automatically committed from this reserved space if requests by HeapAlloc exceed the current size of committed pages, assuming that the physical storage for it is available. Once the pages are committed, they are not decommitted until the process is terminated or until the heap is destroyed by calling the HeapDestroy function.
The memory of a private heap object is accessible only to the process that created it. If a dynamic-link library (DLL) creates a private heap, it does so in the address space of the process that called the DLL. It is accessible only to that process.
The HeapAlloc function allocates a specified number of bytes from a private heap and returns a pointer to the allocated block. This pointer can be used in the HeapFree, HeapReAlloc, HeapSize, and HeapValidate functions.
Memory allocated by HeapAlloc is not movable. The address returned by HeapAlloc is valid until the memory block is freed or reallocated; the memory block does not need to be locked. Because the system cannot compact a private heap, it can become fragmented.
Applications that allocate large amounts of memory in various allocation sizes can use the low-fragmentation heap to reduce heap fragmentation.
A possible use for the heap functions is to create a private heap when a process starts up, specifying an initial size sufficient to satisfy the memory requirements of the process. If the call to the HeapCreate function fails, the process can terminate or notify the user of the memory shortage; if it succeeds, however, the process is assured of having the memory it needs.
Memory requested by HeapCreate may or may not be contiguous. Memory allocated within a heap by HeapAlloc is contiguous. You should not write to or read from memory in a heap except that allocated by HeapAlloc, nor should you assume any relationship between two areas of memory allocated by HeapAlloc.
You should not refer in any way to memory that has been freed by HeapFree. After the memory is freed, any information that may have been in it is gone forever. If you require information, do not free memory containing the information. Function calls that return information about memory (such as HeapSize) may not be used with freed memory, as they may return bogus data.
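A minimal sketch of the private-heap workflow described in this section follows; the 64 KB initial and 1 MB maximum sizes are arbitrary values for illustration.

#include <windows.h>
#include <stdio.h>

int main()
{
    // Initial size 64 KB (committed), maximum size 1 MB (reserved).
    HANDLE hHeap = HeapCreate(0, 64 * 1024, 1024 * 1024);
    if (hHeap == NULL)
        return 1;   // the process could terminate or warn of a memory shortage

    char* block = (char*)HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 256);
    if (block != NULL)
    {
        printf("Allocated %lu bytes\n", (unsigned long)HeapSize(hHeap, 0, block));
        HeapFree(hHeap, 0, block);
    }

    HeapDestroy(hHeap);   // frees all pages of the private heap
    return 0;
}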

Low-fragmentation Heap
Heap fragmentation occurs when available memory is broken into small, non-contiguous blocks. When this happens, memory allocation can fail even though there is enough total memory in the heap to satisfy the request, because no single block of memory is large enough.
For applications that have a low memory usage, the standard heap is adequate; allocations will not fail due to heap fragmentation. However, if the application allocates memory frequently and uses a variety of allocation sizes, memory allocation can fail due to heap fragmentation.
Windows XP and Windows Server 2003 introduce the low-fragmentation heap (LFH). This mechanism is built on top of the existing heap, but as the name implies, it reduces fragmentation of the heap. Applications that allocate large amounts of memory in various allocation sizes should use the LFH. Note that the LFH can allocate blocks up to 16 KB. For blocks greater than this, the LFH uses the standard heap.
To use the LFH in your application, call the HeapCreate or GetProcessHeap function to obtain a handle to a standard heap. Then call the HeapSetInformation function to enable the LFH. If the call succeeds, memory is allocated and freed in the LFH when you call the heap API. Otherwise, the memory is allocated in the standard heap. Note that it is not possible to enable the LFH if the heap was created with HEAP_NO_SERIALIZE or if you are using certain gflags options related to the heap.
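A minimal sketch of enabling the LFH as described above; the value 2 passed with the HeapCompatibilityInformation class selects the low-fragmentation heap.

#include <windows.h>

int main()
{
    HANDLE hHeap = HeapCreate(0, 0, 0);   // growable private heap
    if (hHeap == NULL)
        return 1;

    ULONG heapInfo = 2;   // 2 = enable the low-fragmentation heap
    if (HeapSetInformation(hHeap, HeapCompatibilityInformation,
                           &heapInfo, sizeof(heapInfo)))
    {
        // Subsequent HeapAlloc calls on hHeap are served by the LFH
        // for blocks up to 16 KB.
        void* p = HeapAlloc(hHeap, 0, 512);
        HeapFree(hHeap, 0, p);
    }

    HeapDestroy(hHeap);
    return 0;
}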
The LFH avoids fragmentation by managing all allocated blocks in 128 predetermined different block-size ranges. Each of the 128 size ranges is called a bucket. When an application needs to allocate memory from the heap, the LFH chooses the bucket that can allocate the smallest block large enough to contain the requested size. The smallest block that can be allocated is 8 bytes.

Message Reflection for Windows Controls

What Is Message Reflection?

Windows controls frequently send notification messages to their parent windows. For instance, many controls send a control color notification message (WM_CTLCOLOR or one of its variants) to their parent to allow the parent to supply a brush for painting the background of the control.

In Windows and in MFC before version 4.0, the parent window, often a dialog box, is responsible for handling these messages. This means that the code for handling the message needs to be in the parent window's class and that it has to be duplicated in every class that needs to handle that message. In the case above, every dialog box that wanted controls with custom backgrounds would have to handle the control color notification message. It would be much easier to reuse code if a control class could be written that would handle its own background color.

In MFC 4.0, the old mechanism still works — parent windows can handle notification messages. In addition, however, MFC 4.0 facilitates reuse by providing a feature called "message reflection" that allows these notification messages to be handled in either the child control window or the parent window, or in both. In the control background color example, you can now write a control class that sets its own background color by handling the reflected WM_CTLCOLOR message — all without relying on the parent. (Note that since message reflection is implemented by MFC, not by Windows, the parent window class must be derived from CWnd for message reflection to work.)
Older versions of MFC did something similar to message reflection by providing virtual functions for a few messages, such as messages for owner-drawn list boxes (WM_DRAWITEM, and so on). The new message reflection mechanism is generalized and consistent.

Message reflection is backward compatible with code written for versions of MFC before 4.0.
If you have supplied a handler for a specific message, or for a range of messages, in your parent window's class, it will override reflected message handlers for the same message provided you don't call the base class handler function in your own handler. For example, if you handle WM_CTLCOLOR in your dialog box class, your handling will override any reflected message handlers.

If, in your parent window class, you supply a handler for a specific WM_NOTIFY message or a range of WM_NOTIFY messages, your handler will be called only if the child control sending those messages does not have a reflected message handler through ON_NOTIFY_REFLECT(). If you use ON_NOTIFY_REFLECT_EX() in your message map, your message handler may or may not allow the parent window to handle the message. If the handler returns FALSE, the message will be handled by the parent as well, while a call that returns TRUE does not allow the parent to handle it. Note that the reflected message is handled before the notification message.

When a WM_NOTIFY message is sent, the control is offered the first chance to handle it. If any other reflected message is sent, the parent window has the first chance to handle it and the control will receive the reflected message. To do so, it will need a handler function and an appropriate entry in the control's class message map.

The message-map macro for reflected messages is slightly different than for regular notifications: it has _REFLECT appended to its usual name. For instance, to handle a WM_NOTIFY message in the parent, you use the macro ON_NOTIFY in the parent's message map. To handle the reflected message in the child control, use the ON_NOTIFY_REFLECT macro in the child control's message map. In some cases, the parameters are different, as well. Note that ClassWizard can usually add the message-map entries for you and provide skeleton function implementations with correct parameters.
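
For illustration, a hypothetical CMyListCtrl class (the class and handler names are invented for this sketch, and the class is assumed to declare the handler and DECLARE_MESSAGE_MAP in its header) could handle its own LVN_ITEMCHANGED notification with ON_NOTIFY_REFLECT:

BEGIN_MESSAGE_MAP(CMyListCtrl, CListCtrl)
    ON_NOTIFY_REFLECT(LVN_ITEMCHANGED, OnItemChanged)
END_MESSAGE_MAP()

void CMyListCtrl::OnItemChanged(NMHDR* pNMHDR, LRESULT* pResult)
{
    // React to the notification inside the control itself,
    // without any help from the parent dialog.
    *pResult = 0;
}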


Message-Map Entries and Handler Function Prototypes for Reflected Messages
To handle a reflected control notification message, use the message-map macros and function prototypes listed in the table below.

ClassWizard can usually add these message-map entries for you and provide skeleton function implementations. See Defining a Message Handler for a Reflected Message for information about how to define handlers for reflected messages.

To convert from the message name to the reflected macro name, prepend ON_ and append _REFLECT. For example, WM_CTLCOLOR becomes ON_WM_CTLCOLOR_REFLECT.

The three exceptions to the rule above are as follows:
• The macro for WM_COMMAND notifications is ON_CONTROL_REFLECT.
• The macro for WM_NOTIFY reflections is ON_NOTIFY_REFLECT.
• The macro for ON_UPDATE_COMMAND_UI reflections is ON_UPDATE_COMMAND_UI_REFLECT.

In each of the above special cases, you must specify the name of the handler member function. In the other cases, you must use the standard name for your handler function.

The meanings of the parameters and return values of the functions are documented under either the function name or the function name with On prepended. For instance, CtlColor is documented in OnCtlColor. Several reflected message handlers need fewer parameters than the similar handlers in a parent window. Just match the names in the table below with the names of the formal parameters in the documentation.

Handling Reflected Messages: An Example of a Reusable Control

Here is a simple example that creates a reusable control called CYellowEdit. The control works the same as a regular edit control except that it displays black text on a yellow background. It would be easy to add member functions that would allow the CYellowEdit control to display different colors.


1. Create a new dialog box in an existing application.

You must have an application in which to develop the reusable control. If you don't have an existing application to use, create a dialog-based application using AppWizard.

2. With your project loaded into Visual C++, use ClassWizard to create a new class called CYellowEdit based on CEdit.

3. Add three member variables to your CYellowEdit class. The first two will be COLORREF variables to hold the text color and the background color. The third will be a CBrush object that will hold the brush for painting the background. The CBrush object allows you to create the brush once, merely referencing it after that, and to destroy the brush automatically when the CYellowEdit control is destroyed.
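
For reference, the declarations described in this step might look like the following in the CYellowEdit class header (the protected access level and exact placement are assumptions; ClassWizard output may differ):

class CYellowEdit : public CEdit
{
    // ...
protected:
    COLORREF m_clrText;    // text color
    COLORREF m_clrBkgnd;   // background color
    CBrush   m_brBkgnd;    // brush used to paint the background
    // ...
};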

4. Initialize the member variables by writing the constructor as follows:

CYellowEdit::CYellowEdit()
{
    m_clrText = RGB( 0, 0, 0 );
    m_clrBkgnd = RGB( 255, 255, 0 );
    m_brBkgnd.CreateSolidBrush( m_clrBkgnd );
}

5. Using ClassWizard, add a handler for the reflected WM_CTLCOLOR message to your CYellowEdit class. Note that the equal sign in front of the message name in the list of messages you can handle indicates that the message is reflected. This is described in Defining a Message Handler for a Reflected Message.

ClassWizard adds the following message-map macro and skeleton function for you:

ON_WM_CTLCOLOR_REFLECT()

// Note: other code will be in between....

HBRUSH CYellowEdit::CtlColor(CDC* pDC, UINT nCtlColor)
{
    // TODO: Change any attributes of the DC here

    // TODO: Return a non-NULL brush if the
    // parent's handler should not be called
    return NULL;
}

6. Replace the body of the function with the following code. The code specifies the text color, the text background color, and the background color for the rest of the control.

    pDC->SetTextColor( m_clrText );  // text
    pDC->SetBkColor( m_clrBkgnd );   // text bkgnd
    return m_brBkgnd;                // ctl bkgnd


7. Create an edit control in your dialog box, then attach it to a member variable by double-clicking the edit control while holding down the CTRL key. In the Add Member Variable dialog box, finish the variable name and choose "Control" for the category, then "CYellowEdit" for the variable type. Don't forget to set the tab order in the dialog box. Also, be sure to include the header file for the CYellowEdit control in your dialog box's header file.

8. Build and run your application. The edit control will have a yellow background.