Friday, December 17, 2010

Root Cause Analysis

If an unwanted situation consumes resources and tends to recur, it may well be worth figuring out what is really causing it and removing that cause so the situation does not happen again. This is generally referred to as Root Cause Analysis: finding the real cause of the problem and dealing with it, rather than simply continuing to deal with the symptoms.

This raises several questions:

>>How does one determine which situations are candidates for root cause analysis?
>>How does one figure out what the root cause is?
>>Does the removal of the cause entail less resource expenditure than it takes to continue to deal with the symptom?

Determining Candidates
-------------------------------------

In typical chaotic organizational environments it is often quite difficult to find candidates for root cause analysis, because the situations that repeat are either distributed over time, so one doesn't realize they are actually recurring, or they happen to different people, so there is no shared awareness of their recurring nature. When an organization uses an automated problem resolution support system, such as SolutionBuilder, it becomes very easy to see which situations recur and how often. Every time a solution is used its frequency counter is updated, so all one has to do is run reports against the system to see which solutions are being used and with what frequency. The situations that recur most often and consume the greatest amount of resource to rectify are the candidates for root cause analysis.

Finding the Root Cause
------------------------------------
Most situations that arise within an organizational context have multiple approaches to resolution, and these different approaches generally require different levels of resource expenditure to execute. Due to the sense of immediacy that exists in most organizational situations, there is a tendency to opt for the most expedient solution, which generally means treating the symptom rather than the underlying problem that is actually responsible for the situation. Yet in taking the most expeditious approach and dealing with the symptom rather than the cause, what is generally ensured is that the situation will, in time, return and need to be dealt with again.

Consider the specific example of expediting customer orders in an order fulfillment process. The organization has a well defined process for accepting, processing, and shipping customer orders. When a customer calls and complains about not getting their order, the most common response is to expedite it. This means that someone personally tracks down this customer's order, assigns it a #1 priority, and ensures it gets shipped ahead of everything else. What isn't realized, until sometime later if at all, is that in expediting this order one or more other orders were delayed, because the process was disrupted to get this customer's order out the door. What it all comes down to is that expediting orders simply ensures that more orders will have to be expedited later. In systems terms this is a typical "Fixes that Fail" structure which evolves into an "Addiction" structure, where the organization becomes addicted to expediting as the way to deal with customer order complaints.

The appropriate response to this situation is to figure out why the order was in need of expediting in the first place. Yet this is seldom done because the task assigned to the expediter was, "get the order shipped!" and that's as far as the thought processes and investigation are apt to go.

To find root causes there is really only one question that's relevant: "What can we learn from this situation?" Research has repeatedly indicated that unwanted situations within organizations are about 95% related to process problems and only 5% related to personnel problems. Yet most organizations spend far more time looking for culprits than causes, and because of this misdirected effort they seldom gain the benefit they could from understanding the foundation of the unwanted situation. Consider the following two scenarios.

Scenario # 1

The Plant Manager walked into the plant and found oil on the floor. He called the Foreman over and told him to have maintenance clean up the oil. The next day while the Plant Manager was in the same area of the plant he found oil on the floor again and he subsequently raked the Foreman over the coals for not following his directions from the day before. His parting words were to either get the oil cleaned up or he'd find someone that would.

Scenario # 2

The Plant Manager walked into the plant and found oil on the floor. He called the Foreman over and asked him why there was oil on the floor. The Foreman indicated that it was due to a leaky gasket in the pipe joint above. The Plant Manager then asked when the gasket had been replaced, and the Foreman responded that Maintenance had installed four gaskets over the past few weeks and each one seemed to leak. The Foreman also indicated that Maintenance had been talking to Purchasing about the gaskets because it seemed they were all bad. The Plant Manager then went to talk with Purchasing about the situation. The Purchasing Manager indicated that they had in fact received a bad batch of gaskets from the supplier, and that they had been trying for the past two months to get the supplier to make good on the last order of 5,000 gaskets, which all seemed to be bad. The Plant Manager then asked the Purchasing Manager why they had purchased from this supplier if they were so disreputable, and the Purchasing Manager said it was because they were the lowest bidder when quotes were received from various suppliers. The Plant Manager then asked why they went with the lowest bidder, and the Purchasing Manager indicated that this was the direction he had received from the VP of Finance. The Plant Manager then went to talk to the VP of Finance about the situation. When the Plant Manager asked the VP of Finance why Purchasing had been directed to always take the lowest bidder, the VP of Finance said, "Because you indicated that we had to be as cost conscious as possible, and purchasing from the lowest bidder saves us lots of money!" The Plant Manager was horrified when he realized that he was the reason there was oil on the plant floor. Bingo!

You may find scenario # 2 somewhat funny and laugh at the situation. It would be better if it made you weep, because it is all too true, in numerous variations on the same theme: everyone in the organization doing their best to do the right things, and everything still ends up screwed up. The root cause of this whole situation is local optimization with no global thought involved.

Scenario # 2 also provides a good example of how to proceed with root cause analysis. One simply has to keep asking "Why?" until the pattern completes and the cause of the difficulty becomes rather obvious.

To Resolve or Not To Resolve
-------------------------------------
Once the root cause is determined, the next question is whether it costs more to remove the root cause or to simply continue treating the symptoms. This is often not an easy determination. Even though it may be relatively easy to estimate the cost of removing the root cause, it is generally very difficult to assess the cost of treating the symptom. This difficulty arises because the cost of the symptom is generally wrapped up in a number of customer and employee satisfaction factors, in addition to the resource costs associated with just treating the symptom.

Consider a situation where it is determined that it will cost $100,000 to remove the root cause of a problem, yet only 5 minutes for someone to resolve the situation when the customer calls with the problem. Initially one might perceive that the cost of removing the root cause is far larger than the cost of treating the symptom. Yet suppose that this symptom is such that when it arises it so infuriates the customer that they swear they will never buy another product from you, and will go out of their way for the next year to tell everyone they meet what a terrible company you are to do business with. How do you estimate the lost business cost associated with this situation? And if you think this is a bizarre case, it is not, for I was personally on an "I hate Midas Muffler" campaign for over two years because they screwed up the brakes on my car. In those two years I managed to reach several thousand people, because I preached "I hate Midas Muffler" in my TQM classes and continued to use them as an excellent bad example.

Finally
----------------
Is "Root Cause Analysis" really an appropriate phrase? In this apparently endlessly interconnected world, everything seems to influence so many other things. Seeking the "Root Cause" is an endless exercise because no matter how deep you go there's always at least one more cause you can look for. Might "Actionable Cause Analysis" be more appropriate? I think I'm looking for a cause that I can act on that will provide long term relief from the symptoms, without causing more problems that I have to deal with tomorrow.

Using Resources from DLL

Requirements:
----------------------
Suppose we have a DLL named 'FilingResource.DLL' that contains a dialog resource with the ID IDD_NETWORK_PATH_SETUP, and a class CFilingResourceDlg associated with this dialog. We want to use this dialog from the DLL in our program by deriving a class CDerivedClass from CFilingResourceDlg.

Steps to use Resource stored in DLL:
-----------------------------------------------------------

1) Load the DLL using LoadLibrary()
The LoadLibrary function maps the specified executable module into the address space of the calling process. LoadLibrary takes one parameter of LPCTSTR type which points to the null terminated string that names the executable module i.e. the name of the DLL. If the function succeeds, the return value is a handle to the module (DLL in our case).
e.g. HINSTANCE hResourceDLL = ::LoadLibrary(_T("FilingResource.DLL"));

2) Convert the dialog ID, which is an integer, to resource type so that it can be used with resource related functions.
e.g. LPCTSTR lpszTemplateName =
MAKEINTRESOURCE(IDD_NETWORK_PATH_SETUP);

3) Determine the Resource (i.e. Dialog) in the module (i.e. DLL) using FindResource()
The FindResource function determines the location of a resource (Dialog in our case) with the specified type and name in the specified module (DLL in our case). FindResource() takes three parameters: A Handle to the module, Name of the resource and type of the resource. If the function succeeds, the return value is a handle to the specified resource’s info block.
e.g. HRSRC hResource = ::FindResource(hResourceDLL, lpszTemplateName, RT_DIALOG);

4) Load the Resource using LoadResource.
The LoadResource function loads the specified resource into global memory. This function takes two parameters: A handle to the executable module and the resource found using FindResource. If the function succeeds, the return value is a handle to the global memory block containing the data associated with the resource. If it fails, the return value is NULL.
e.g. HGLOBAL hTemplate = LoadResource(hResourceDLL, hResource);

5) Lock the resource in memory using LockResource.
The LockResource function locks the specified resource in memory. This function takes the handle of the resource obtained by LoadResource as a parameter. If the loaded resource is locked, the return value is a pointer to the first byte of the resource; otherwise, it is NULL.
e.g. LPDLGTEMPLATE lpDialogTemplate = (LPDLGTEMPLATE)LockResource(hTemplate);

6) Create object of the class and call InitModalIndirect() and DoModal() of the Dialog.
The empty constructor is called to construct the dialog-box object. Next, InitModalIndirect is called to store the handle to the in-memory dialog-box template. The Windows dialog box is created and displayed later, when the DoModal member function is called.
e.g
CDerivedClass derivedClassObj;
derivedClassObj.InitModalIndirect(lpDialogTemplate);
derivedClassObj.DoModal();
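
Putting the steps together, a minimal sketch of the whole sequence might look like the following. The error checks, the FreeLibrary cleanup call, and the wrapper function name ShowDialogFromDll are additions for illustration; CDerivedClass and IDD_NETWORK_PATH_SETUP come from the scenario above.

// Illustrative sketch: load the DLL, locate the dialog template, show the dialog, unload.
void ShowDialogFromDll()
{
    HINSTANCE hResourceDLL = ::LoadLibrary(_T("FilingResource.DLL"));
    if (hResourceDLL == NULL)
        return; // DLL could not be found or loaded

    LPCTSTR lpszTemplateName = MAKEINTRESOURCE(IDD_NETWORK_PATH_SETUP);
    HRSRC hResource = ::FindResource(hResourceDLL, lpszTemplateName, RT_DIALOG);
    if (hResource != NULL)
    {
        HGLOBAL hTemplate = ::LoadResource(hResourceDLL, hResource);
        if (hTemplate != NULL)
        {
            LPDLGTEMPLATE lpDialogTemplate = (LPDLGTEMPLATE)::LockResource(hTemplate);
            if (lpDialogTemplate != NULL)
            {
                CDerivedClass derivedClassObj;
                derivedClassObj.InitModalIndirect(lpDialogTemplate);
                derivedClassObj.DoModal();
            }
        }
    }

    ::FreeLibrary(hResourceDLL); // unload the DLL once its resources are no longer needed
}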



Thursday, December 16, 2010

C# write xml file using XmlDocument class

step1.

using System.Xml;
//...
//something
//...
//Declaration
XmlDocument xmldoc;
XmlElement xmlelem;
XmlElement xmlelem2;
XmlText xmltext;
//....
//something
//....

xmldoc = new XmlDocument();
XmlDeclaration declaration = xmldoc.CreateXmlDeclaration("1.0", "utf-8", null);
xmldoc.AppendChild(declaration);

xmlelem = xmldoc.CreateElement("Root");
xmlelem.SetAttribute("xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance");
xmlelem.SetAttribute("xmlns:xsd", "http://www.w3.org/2001/XMLSchema");
xmldoc.AppendChild(xmlelem);
//Language
xmlelem2 = xmldoc.CreateElement("Language");
xmltext = xmldoc.CreateTextNode("English");
xmlelem2.AppendChild(xmltext);
xmldoc.ChildNodes.Item(1).AppendChild(xmlelem2);
//Author
xmlelem2 = xmldoc.CreateElement("Author");
xmltext = xmldoc.CreateTextNode("WayToVC");
xmlelem2.AppendChild(xmltext);
xmldoc.ChildNodes.Item(1).AppendChild(xmlelem2);

//saving
try
{
xmldoc.Save("SomeFile.xml"); // writes SomeFile.xml to the application's current working directory
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}

//

You can add further child elements by calling xmldoc.CreateElement() again and appending each new element to its parent.

C# DateTime.UtcNow in ISO 8601 date format

Here is the way to get a DateTime.UtcNow string which represents the same value in an ISO 8601 compliant format.

DateTime.UtcNow.ToString("yyyy-MM-ddTHH\\:mm\\:ss.fffffffzzz");
This gives you a date similar to 2008-09-22T13:57:31.2311892-04:00. Note that the "zzz" specifier always reports the local machine's UTC offset, even for a UTC value, so the round-trip "o" format below is usually the safer choice.

Another way is:
============

DateTime.UtcNow.ToString("o"); //my recommendation
which gives you 2008-09-22T14:01:54.9571247Z


Wednesday, December 15, 2010

Debugging a Dll

To debug a DLL, it must be loaded by a calling application, usually an .exe.
There are a couple of ways to debug a DLL, depending on its type and properties.
The calling application can be one of the following:
1.) An application built in another project in the same Visual Studio solution that contains the class library.

2.) An existing application already deployed on a test or production computer.

3.) An application located on the Web and accessed through a URL.

4.) A Web application that contains a Web page which embeds the DLL.

To debug a DLL, start by debugging the calling application, typically either an EXE or a Web application. There are several ways to debug it.

If you have a project for the calling application, you can open that project and start execution from the Debug menu.

If the calling application is an existing program already deployed on a test or production computer and is already running, you can attach to it. Use this method if the DLL is a control hosted by Internet Explorer, or a control on a Web page.

You can debug it from the Visual Studio Immediate window. In this case, the Immediate window plays the role of the application.

To specify an executable for the debug session

1. In Solution Explorer, select the project that creates the DLL.

2. From the View menu, choose Property Pages.

3. In the Property Pages dialog box, open the Configuration Properties folder and select the Debugging category.

4. In the Command box, specify the path name for the container. For example, C:\Program Files\MyApplication\MYAPP.EXE.

5. In the Command Arguments box, specify any necessary arguments for the executable.


Before you start debugging the calling application, you will usually want to set a breakpoint in the class library. When the breakpoint is hit, you can step through the code, observing the action at each line, until you isolate the problem.




Thursday, August 12, 2010

How to do proper Code Maintenance

It's safe to say that most developers prefer greenfield development. Wikipedia defines greenfield as "a project that lacks any constraints imposed by prior work." It gives us an opportunity to utilize all of the best practices we strive for: unit testing, code reviews, loose coupling, mockable design and the like.

But in the real world, we often have to maintain existing code. That maintenance includes not only bug fixes, but adding new features too. The older the codebase, the more likely it is that there will be other applications that rely on specific functionality -- especially in the case of class libraries. It may be tempting to jump into older code and start ripping apart the "bad stuff," but we must be careful not to break existing functionality. In this article, I'll review some techniques that will allow you to move old code forward, with little to no impact on existing codebases that rely on the old code. I'll also show you how you can improve the old code to make it more testable.

When you create a library for other developers or even just yourself, you're defining an API. The interface to this library is all of the methods and signatures you've defined. Changing these may allow you, as the library developer, to clean up some things, but it's going to add pain points to all of the consumers of your class. A simple change to a method signature could result in a much better library design with better testability, but could cause 10 or 15 other applications to fail to compile until they're updated to support the new signature. If you have a public API that hundreds or even thousands of developers may be using, you've put them in a tough spot. Not only do their applications fail to compile with the new library, they're probably considering the future of your library and whether they want to continue using it after the pain you've caused them.

Maintaining backward compatibility can be a challenge sometimes, but it leads to stability in the way other developers and other code interact with your library.

Overloading
The easiest way to modify existing code with zero impact is to utilize method overloading. Overloading allows you to add additional parameters to an existing call, but you can maintain the old signatures so as not to break existing clients.

Here's a totally contrived example of some utility that grabs the first 10 files in some directory:

public string[] ReadFiles(string directory)
{
return Directory.GetFiles(directory).Take(10).ToArray();
}

At some point in the future, you may want to use this function, but you need 15 or maybe even 20 files read in. The quick solution is to simply require the number of files to be passed in as a parameter:

public string[] ReadFiles(string directory, int maxFiles)
{
return Directory.GetFiles(directory).Take(maxFiles).ToArray();
}


But now any existing code compiled against this new library will break. We need to include an overload that will keep the old code working the same way (returning 10 files):

public string[] ReadFiles(string directory)
{
return ReadFiles(directory, 10);
}


Using overloading is probably the most common way that you can add or enhance functionality while still maintaining backward compatibility.

Remove UI Dependencies
One of the key things I do before modifying any existing code is to make sure there's a unit test for it. This ensures that I don't break the expected functionality of the code when I make my change. If you're like me, you usually find that older code has limited or no unit tests. You should always take advantage of maintenance time to create some unit tests for older code.

From time to time, I've come across library code that makes use of a UI element. It may just be a simple Message Box call or even popping up a Save File dialog, but those elements make writing automated unit tests quite difficult -- if not impossible. When I run into these situations, I like to use a combination of interfaces and method overloading to remove the UI dependency. This makes unit testing much easier and the code becomes more flexible.

Here's an example: Let's say you have a utility class that has a method to close the file you were writing to. However, the file is only opened and written to on an as-needed basis. For this reason, the code that executes to close the file first checks a dirty flag and will only prompt the user to save the file if the dirty flag is true:

public void CloseFile()
{
bool saveFile = true;
if( IsDirty )
{
saveFile = MessageBox.Show("Save Changes?", "Save",
MessageBoxButtons.YesNo) == DialogResult.Yes;
}

if( saveFile )
{
// code to save the file
}
}

The MessageBox in the middle of this code makes it impossible to test this in an automated fashion, using traditional tools like NUnit or MSTest, for instance. You must eliminate the dependency on the MessageBox. First, define an interface to handle the ISaveFilePrompt:

public interface ISaveFilePrompt
{
bool ShouldSaveFile();
}

Next, change the CloseFile method to use this interface instead of the MessageBox:

public void CloseFile(ISaveFilePrompt saveFilePrompt)
{
bool saveFile = true;
if (IsDirty)
{
saveFile = saveFilePrompt.ShouldSaveFile();
}

if (saveFile)
{
// code to save the file
}
}

You now have a method that can easily be unit tested in an automated setting. You can mock out the ISaveFilePrompt object to return whatever you want for the ShouldSaveFile method. But in the process, you've broken existing code that relied on this code to display a MessageBox. Because the point of this article is to try to avoid breaking existing code, you still have some work to do. To complete this refactoring, you need to create an implementation of ISaveFilePrompt that uses a MessageBox:

public class MessageBoxPrompt : ISaveFilePrompt
{
public bool ShouldSaveFile()
{
return MessageBox.Show("Save Changes?", "Save",
MessageBoxButtons.YesNo) == DialogResult.Yes;
}
}

Finally, we'll create a parameter-less version of CloseFile that simply defers execution to our new version and uses the MessageBoxPrompt:

public void CloseFile()
{
CloseFile(new MessageBoxPrompt());
}

We've refactored this code so it can now be tested in an automated fashion while maintaining full backward compatibility and functionality with existing clients. We also have the added benefit of having a CloseFile method that can now be used in a batch mode or other UI-less environment (like a Windows Service).

Eliminate Static Methods
When it comes to unit testing and mocking, static methods pose quite a challenge. A lot of mocking frameworks (like Rhino Mocks and NMock) rely on intercepting method calls via dynamic proxies. This works great for interfaces and virtual class methods, but hits a brick wall with static methods -- in other words, static methods can't be mocked with mocking frameworks that rely on dynamic proxies. While there are some frameworks that use the .NET Profiler API (like Typemock) and can therefore intercept calls anywhere (even static methods), it is generally considered a code smell to create static methods that do anything but simple, procedural code.

But there's a lot of code out there that utilizes static methods. Heck, I've even written a lot of it. Maintenance time is the perfect time to clean up those static methods!

I've most often used static methods for configuration files. I'd create a small class with a few read/write properties that would be used for saving and loading user-defined application settings. I'd often add a static "Load" method where I would provide the filename to load up the class from disk:

[Serializable]
public class MyAppSettings
{
public int BatchSize { get; set; }
public string JobName { get; set; }

public static MyAppSettings Load(string filename)
{
// deserialize object from the file "filename"
}
}


And somewhere in my main code when I needed to load the settings, I'd simply call the Load method:

MyAppSettings settings = MyAppSettings.Load("settings.xml");

The problem with this approach is that it relies on a physical file on disk -- that's strong coupling. And because this is a static method, I can't mock out the call to MyAppSettings.Load using my favorite mocking tool (Rhino Mocks).

When it comes time to write a unit test for the following code, I hit a road block:

public void ProcessBatch()
{
MyAppSettings settings = MyAppSettings.Load("settings.xml");

for(int batchNumber = 0; batchNumber < settings.BatchSize; batchNumber++)
{
// process batch
}
}

Just as was discussed in the earlier section on removing UI dependencies, you'll use an interface to handle the loading of the application settings:

public interface IAppSettingsLoader
{
MyAppSettings LoadSettings();
}

Now update the method to use the IAppSettingsLoader:

public void ProcessBatch(IAppSettingsLoader appSettingsLoader)
{
MyAppSettings settings = appSettingsLoader.LoadSettings();

for(int batchNumber = 0; batchNumber < settings.BatchSize; batchNumber++)
{
// process batch
}
}

And create an IAppSettingsLoader that loads the settings from disk so that calls to "ProcessBatch()" by existing clients will continue to work:

public class FileSettingsLoader : IAppSettingsLoader
{
private readonly string filename;

public FileSettingsLoader(string filename)
{
this.filename = filename;
}

public MyAppSettings LoadSettings()
{
// deserialize object from the file "filename"
}
}

And the ProcessBatch overload that matches the existing signature is updated to use this new class:

public void ProcessBatch()
{
ProcessBatch(new FileSettingsLoader("settings.xml"));
}

The last thing to do is to discourage new code from using the static Load method.

The Microsoft .NET Framework has a built-in attribute called "Obsolete." The Obsolete attribute can be applied to just about anything -- classes, methods, enums, interfaces, delegates and so on. In this case, we'll apply the Obsolete attribute to the static Load method:

[Obsolete("Use FileSettingsLoader to load MyAppSettings from disk")]
public static MyAppSettings Load(string filename)
{
// deserialize object from the file "filename"
}

Now, any code that uses the static Load method will get a warning at compile time that it should instead be using the FileSettingsLoader. You can keep this attribute as is for a couple of versions, and then add another parameter to the attribute that will actually treat this as an error and prevent client code from compiling:

[Obsolete("Use FileSettingsLoader to load MyAppSettings from disk", true)]
public static MyAppSettings Load(string filename)
{
// deserialize object from the file "filename"
}

The Obsolete attribute is a nice way to give consumers a "heads up" that a method shouldn't be used anymore. Once you get to the point where you want to force people to stop using a method (or class), set the "error" parameter to true and they will be forced to change!

Support Old Data Formats
At times, it's not code you must change, but the data your application produces. You should always make an effort to support old data formats as long as it's feasible. The easiest way to accommodate legacy data formats is to provide a data convertor that will read data in the old format and convert it to the new format. I've done this several times with XML serialization. Imagine the following class:

public class ConfigData
{
public string DirectoryName { get; set; }
public int FileCount { get; set; }
public string[] IgnoredExtensions { get; set; }
}

Here's an instance of this ConfigData class when serialized as XML:


<?xml version="1.0"?>
<ConfigData>
  <DirectoryName>C:\Data</DirectoryName>
  <FileCount>20</FileCount>
  <IgnoredExtensions>
    <string>.exe</string>
    <string>.bat</string>
  </IgnoredExtensions>
</ConfigData>



Suppose you need to change the DirectoryName property to an array of strings instead of a single string? This is a breaking change from a data format standpoint. Here's the new class definition:

public class ConfigData
{
public string[] DirectoryNames { get; set; }
public int FileCount { get; set; }
public string[] IgnoredExtensions { get; set; }
}

An XmlSerializer for the new ConfigData type would not be able to deserialize the old data into the new structure. However, by writing a simple data convertor that will automatically convert the old format into the new class structure, you can continue to support the old data:

public class LegacyConfigDataReader
{
public ConfigData ReadFile(string filename)
{
XDocument document;
using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
using (var sr = new StreamReader(fs))
{
document = XDocument.Load(sr);
}
}

return new ConfigData
{
DirectoryNames = new[] { document.Descendants("DirectoryName").ElementAt(0).Value },
FileCount = Int32.Parse(document.Descendants("FileCount").ElementAt(0).Value),
IgnoredExtensions = document.Descendants("IgnoredExtensions").ElementAt(0).Descendants().Select(v => v.Value).ToArray(),
};
}
}

It is assumed that once the new data is serialized to disk, it will have a new file extension or some other way for the application to know the format has been updated. Whenever the application needs to read the old format, the LegacyConfigDataReader can be used. This is an example where legacy code may have to be changed depending on how you implemented your original deserialization code.

And, obviously, this technique will only work with a data format you can easily parse (like XML or some other text-based format) or a custom binary format that you've created yourself. If you had used .NET's built-in binary serialization, it would be just about impossible to support older class layouts.

In this article, we've seen a few techniques that allow you to modify existing code with little or no impact on the clients that depend on it. At the same time, we've also enhanced some of that code to make it more testable.

Maintenance time shouldn't be viewed as a walk through a minefield, but instead as an opportunity to solidify and enhance your existing code.

[original source: http://visualstudiomagazine.com/Articles/2010/05/01/Make-Good-Use-of-Code-Maintenance.aspx?Page=3]

101 Visual Studio 2010 Tips

>> Tip #1 How to not accidentally copy a blank line
TO Text Editor All Lang Gen Apply cut or copy commands to blank lines

>> Tip #2 How to cycle through the Clipboard ring
Ctrl+Shift+V (Edit.CycleClipboardRing)

>> Tip #3 How to use the Undo stack
Undo button on Standard Toolbar

>> Tip #4 New! Automatic highlighting of symbols
Hover over symbol, then Ctrl+Shift+Arrow to navigate

>> Tip #5 How to navigate forward and backward w go-back markers
View.NavigateBackward (Ctrl+Minus), View.NavigateForward (Ctrl+Shift+Minus)

>> Tip #6 New! How to collapse a region with ease
Hover over any part of region and dclick. Or Ctrl+M, Ctrl+M

>> Tip #7 How to reach the navigation bar via keyboard shortcut
Ctrl+F2 (Window.MoveToNavigationBar), type-ahead selection

>> Tip #8 How to use Navigate To
Ctrl+comma

>> Tip #9 How to split a window and create new ones
Window Split, Window New Window

>> Tip #10 How to show line numbers in the editor
Tools Options Text Editor All Languages General - Line Numbers

>> Tip #11 How to enable virtual space
TO Text Editor All Languages General

>> Tip #12 How to view visible white space
Edit Advanced View White Space

>> Tip #13 How to change the color of visual white space glyphs
TO Env Fonts and Colors Text Editor Visual White Space

>> Tip #14 How to increase the editor's ToolTip font size
TO Env Fonts and Colors Show Settings for Editor ToolTip

>> Tip #15 New! How to zoom in/out in Editor
Ctrl+Mouse Wheel / Edit in zoom control

>> Tip #16 How to change text editor font size via keyboard
Macros.Samples.Accessibility.TextEditorFontSize

>> Tip #17 Diff Automatic vs Default in Fonts and Colors

>> Tip #18 How to print boldly
TO Env Fonts and Colors - Show Settings for Printer

>> Tip #19 How to use box/column selection in the editor
Shift+Alt+Arrow, or Mouse+Alt

>> Tip #20 New! How to use Multiline Edit
Shift+Alt+Arrow, type

>> Tip #21 How to format the current document
Ctrl+K, Ctrl+D (Edit.FormatDocument)

>> Tip #22 You can remove unused using statements in C#
Context Menu Organize Usings Remove Unused Usings

>> Tip #23 How to remove a project from Start Page
Right-click project, select Remove from list

>> Tip #24 How to set bookmarks and navigate among them
Edit.ToggleBookmark (Ctrl+K, Ctrl+K )

>> Tip #25 You can bookmark your quick find results
Ctrl+F, then press bookmark all

>> Tip #26 How to increase Intellisense font sizes
TO Env Fonts and Colors - Show Settings For -

>> Tip #27 How to increase Environment font
TO Env Fonts and Colors - Show Settings For Environment Font

>> Tip #28 Toggle Statement Completion tabs via keyboard
All tab: Alt+. and Common Tab: Alt+,

>> Tip #29 New! How to do Pascal / Sub-string matching in Intellisense

>> Tip #30 New! How to enable Suggestion mode in Intellisense
Ctrl+Alt+Space to enable

>> Tip #31 You can insert a snippet by pressing Tab Tab
Type in snippet shortcut word, then hit Tab Tab to insert

>> Tip #32 New! How to browse new code snippets and add new ones
Tools Code Snippet Manager, HTML and Javascript

>> Tip #33 How to insert a code snippet around a block of code in C#
Select code, then Ctrl+K, Ctrl+S. Command: Edit.SurroundWith

>> Tip #34 How to behold the power of incremental search
Ctrl+I - (Edit.IncrementalSearch)

>> Tip #35 Use Ctrl+F3 to search for currently-selected word
Edit.FindNextStatement

>> Tip #36 How not to search for the currently-selected word
Tools Options Environment Find and Replace

>> Tip #37 You can use F3 to search for the last thing you searched for
Edit.FindNext

>> Tip #38 You can customize what files to find in
Find in Files Look in Choose Search Folders

>> Tip #39 You can use a reg key for customizing search results
HKCU\... \10.0\Find, String Find results format = $f$e($l,$c):$t\r\n

>> Tip #40 Use Ctrl+Alt+Down to drop down the file tab channel
Window.ShowEzMDIFileList

>> Tip #41 Use Close All But This on files in the file tab channel
File.CloseAllButThis

>> Tip #42 You can copy a file's full path from the file tab channel
File.CopyFullPath

>> Tip #43 Open a Windows Explorer browser to the active file
File.OpenContainingFolder

>> Tip #44 How to close just the selected files you want
Window Windows

>> Tip #45 How to use the IDE Navigator
Hold Ctrl key, then press tab (or shift+tab)

>> Tip #46 How to navigate all open tool windows
Hold Alt key, then press F7 (or Shift+F7)

>> Tip #47 How to disable the IDE Navigator
Rebind Window.Previous/NextDocumentWindow

>> Tip #48 How to disable statement completion
TO Text Editor All Language Auto List Members

>> Tip #49 How to customize what the tool window push pin does
TO Environment General

>> Tip #50 Show autohiding tool windows via autohide channel
Right-click in the autohide channel to view context menu

>> Tip #51 How to redock a tool window via keyboard
Ctrl+Double Click Tool Window title bar

>> Tip #52 You can maximize a tool window in the editor
Window Tabbed Document

>> Tip #53 New! How to move a file onto a secondary monitor
Click-Drag a file out of File Tab Channel

>> Tip #54 New! How to snap file windows to monitor edges
Windows 7 feature: Win key+Arrow

>> Tip #55 New! How to put file back into File Tab Channel
Ctrl-DoubleClick

>> Tip #56 New! How to reverse the order the file tabs open
TO Doc insert docs to right of existing ones

>> Tip #57 Customize the tool window x button
Tools Options Environment General

>> Tip #58 How to access a toolbar within a tool window
Shift+Alt (note: Alt+Shift will not work)

>> Tip #59 How to quickly access full screen mode
Shift+Alt+Enter (View.FullScreen)

>> Tip #60 How to enter the File window layout mode
Open a file from a command prompt

>> Tip #61 How to use the keyboard to jump to output window panes
Window.NextSubPane. Need to create shortcut

>> Tip #62 Drag and drop code onto the Toolbox's General tab
Either drag and drop code, or use Cut/Copy shortcuts

>> Tip #63 How to use Ctrl+Arrow to move among the Toolbox Tabs

>> Tip #64 Switch between the Icon View and the List view in Toolbox
On Context Menu, uncheck List view

>> Tip #65 You can use Show All to find your hiding Toolbox controls
On Context Menu, check Show All

>> Tip #66 You can show custom tokens in the Task list
TO Environment Task List, add custom token to list

>> Tip #67 How to find what development settings you last reset to
HKCU\Software\Microsoft\VisualStudio\10.0\Profile, LastResetSettingsFile

>> Tip #68 You can create a macro for your import / export settings

>> Tip #69 How to open a file without any UI
Tools.GoToCommandLine

>> Tip #70 How to have fun with the Find Combo Box
Does everything from finding stuff to making coffee

>> Tip #71 How to not show the Start Page on launch
TO Env Startup anything but Show Start Page

>> Tip #72 How to open to the last loaded project
TO Env Startup Load Last Loaded project

>> Tip #73 How to use solution folders to hide projects
Solution Explorer context menu Add New Solution Folder

>> Tip #74 How to create temp or throw away projects
TO Projects and Solutions uncheck Save new projects when created

>> Tip #75 How to hide or show the Project Location is Not Trusted message box
TO Projects and Solutions

>> Tip #76 How to show the Misc Files project in Solution Explorer
TO Env Documents, show Misc project

>> Tip #77 How to type-ahead selection in solution explorer
Just type name of file and focus will jump to file that matches

>> Tip #78 How to add a solution to a solution
File Open Project, choose a solution (not a project) file

>> Tip #79 How to have the Sln Explorer show active file
TO Projects and Solutions General Track Active Item

>> Tip #80 How to use tracepoints to log stuff in your code
Editor context menu Breakpoints Add Tracepoints

>> Tip #81 How to use DataTips to edit a variable's content
Click inside DataTip contents to edit

>> Tip #82 New! How to leave comments in Data Tips
Expand down arrow and type in comment

>> Tip #83 New! How to export Data Tips
Debug Export DataTips

>> Tip #84 New! How to label breakpoints
Right-click on breakpoint in Bp Window, Edit Labels

>> Tip #85 New! How to export breakpoints
Breakpoint Window Export Breakpoints button on toolbar

>> Tip #86 How to select the startup project from the Sln Explorer
Tools Options Projects and Solutions Build and Run

>> Tip #87 How to make statement completion transparent
Hold down Ctrl key.

>> Tip #88 You can use Ctrl+. to show a smart tag

>> Tip #89 Shortcut to go directly to the class view search bar
View.ClassViewGoToSearchCombo

>> Tip #90 How to bring up Code Definition Window in C#
View Code Definition Window, Ctrl+\, D

>> Tip #91 How to bring up the Call Hierarchy dialog
View Call Hierarchy, Ctrl+Alt+K

>> Tip #92 How to use Devenv /nosplash to speed up launch, maybe

>> Tip #93 You can create project/item templates
File Export Template

>> Tip #94 New Project from Existing Code
File New Project from existing code

>> Tip #95 Edit project file within IDE
Unload then select Edit

>> Tip #96 XAML Visualizer
Drop down arrow in DataTip to show XAML visualizer option

>> Tip #97 How to see the caught exception in Watch Window
Add $exception to watch window

>> Tip #98 You can disable the Exception assistant
Tools Options Debugging General

>> Tip #99 New! How to use Historical debugging aka Intellitrace
Up / Down arrows in gutter like DVR controls

>> Tip #100 New! How to open IntelliTrace log
Double-click .iTrace files to open in VS

>> Tip #101 New! How to use Extension Manager
Install VS Tips extension to get Tips in Start Page

Tuesday, July 13, 2010

Memory Management

The memory manager implements virtual memory and provides a core set of services such as memory-mapped files, copy-on-write memory, large memory support, and underlying support for the cache manager.
About Memory Management
Each process on 32-bit Microsoft Windows has its own virtual address space that enables addressing up to 4 gigabytes of memory. Each process on 64-bit Windows has a virtual address space of 8 terabytes. All threads of a process can access its virtual address space. However, threads cannot access memory that belongs to another process, which protects a process from being corrupted by another process.
For information on the virtual address space and the memory management functions, see the following topics.
• Virtual Address Space
• Memory Pools
• Memory Performance Information
• Virtual Memory Functions
• Heap Functions
• File Mapping
• Large Memory Support
• Global and Local Functions
• Standard C Library Functions
• Comparing Memory Allocation Methods

Virtual Address Space
The virtual addresses that a process uses do not represent the actual physical location of an object in memory. Instead, the system maintains a page map for each process, which is an internal data structure used to translate virtual addresses into corresponding physical addresses. Each time a thread references an address, the system translates the virtual address to a physical address.
For more information about virtual memory, see the following topics:
• Virtual Address Space and Physical Storage
• Working Set
• Page State
• Scope of Allocated Memory
• Data Execution Prevention
• Memory Protection
• Memory Limits for Windows Releases

Virtual Address Space and Physical Storage
The maximum amount of physical memory supported by Microsoft Windows ranges from 2 GB to 1 TB, depending on the version of Windows. For more information, see Memory Limits for Windows Releases. The virtual address space of each process can be smaller or larger than the total physical memory available on the computer. The subset of the virtual address space of a process that resides in physical memory is known as the working set. If the threads of a process attempt to use more physical memory than is currently available, the system pages some of the memory contents to disk. The total amount of virtual address space available to a process is limited by physical memory and the free space on disk available for the paging file.
Physical storage and the virtual address space of each process are organized into pages, units of memory whose size depends on the host computer. For example, on x86 computers the host page size is 4 kilobytes.
To maximize its flexibility in managing memory, the system can move pages of physical memory to and from a paging file on disk. When a page is moved in physical memory, the system updates the page maps of the affected processes. When the system needs space in physical memory, it moves the least recently used pages of physical memory to the paging file. Manipulation of physical memory by the system is completely transparent to applications, which operate only in their virtual address spaces.
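
As a small illustration, the host page size mentioned above can be queried at run time with GetSystemInfo; the function name PrintPageSize below is just an illustrative choice.

#include <windows.h>
#include <stdio.h>

// Illustrative helper: report the page size and allocation granularity of this machine.
void PrintPageSize()
{
    SYSTEM_INFO si;
    ::GetSystemInfo(&si);

    // dwPageSize is the page size used by the virtual memory functions;
    // dwAllocationGranularity is the granularity at which address ranges are reserved.
    printf("Page size              : %lu bytes\n", si.dwPageSize);
    printf("Allocation granularity : %lu bytes\n", si.dwAllocationGranularity);
}
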
Working Set
The working set of a process is the set of pages in the virtual address space of the process that are currently resident in physical memory. Memory allocations that are nonpageable such as Address Windowing Extensions (AWE) or large page allocations are not included in the working set.
When a process references a page that is not part of its working set, a page fault occurs. The system page fault handler attempts to resolve the page fault and, if it succeeds, the page is added to the working set. (Accessing AWE or large page allocations never causes a page fault, because these allocations are not in the working set.)
A hard page fault must be resolved by reading page contents from the page's backing store, which is either the system paging file or a memory-mapped file created by the process. A soft page fault can be resolved without accessing the backing store. A soft page fault occurs when:
• The page is in the working set of some other process, so it is already resident in memory.
• The page is in transition, because it either has been removed from the working sets of all processes that were using the page and has not yet been repurposed, or it is already resident as a result of a memory manager prefetch operation.
• A process references an allocated virtual page for the first time (sometimes called a demand-zero fault).
Pages can be removed from a process working set as a result of the following actions:
• The process reduces or empties the working set by calling the SetProcessWorkingSetSize, SetProcessWorkingSetSizeEx or EmptyWorkingSet function.
• The process calls the VirtualUnlock function on a memory range that is not locked.
• The memory manager trims pages from the working set to create more available memory.
• The memory manager must remove a page from the working set to make room for a new page (for example, because the working set is at its maximum size).
If several processes share a page, removing the page from the working set of one process does not affect other processes. After a page is removed from the working sets of all processes that were using it, the page becomes a transition page. Transition pages remain cached in RAM until the page is either referenced again by some process or repurposed (for example, filled with zeros and given to another process). If a transition page has been modified since it was last written to disk (that is, if the page is "dirty"), then the page must be written to its backing store before it can be repurposed. The system may start writing dirty transition pages to their backing store as soon as such pages become available.
Each process has a minimum and maximum working set size that affect the virtual memory paging behavior of the process. To obtain the current size of the working set of a specified process, use the GetProcessMemoryInfo function. To obtain or change the minimum and maximum working set sizes, use the GetProcessWorkingSetSizeEx and SetProcessWorkingSetSizeEx functions.
The process status application programming interface (PSAPI) provides a number of functions that return detailed information about the working set of a process. For details, see Working Set Information.
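
As a minimal sketch of the PSAPI functions mentioned above, the working set size and page-fault count of the current process can be read with GetProcessMemoryInfo; the function name PrintWorkingSetInfo is illustrative, and the program must be linked with Psapi.lib.

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with Psapi.lib
#include <stdio.h>

// Illustrative helper: print working set statistics for the calling process.
void PrintWorkingSetInfo()
{
    PROCESS_MEMORY_COUNTERS pmc = { 0 };
    pmc.cb = sizeof(pmc);

    if (::GetProcessMemoryInfo(::GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        printf("Working set size : %Iu bytes\n", pmc.WorkingSetSize);
        printf("Peak working set : %Iu bytes\n", pmc.PeakWorkingSetSize);
        printf("Page faults      : %lu\n", pmc.PageFaultCount);
    }
}
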

Page State
The pages of a process's virtual address space can be in one of the following states.
Free: The page is neither committed nor reserved. The page is not accessible to the process. It is available to be reserved, committed, or simultaneously reserved and committed. Attempting to read from or write to a free page results in an access violation exception. A process can use the VirtualFree or VirtualFreeEx function to release reserved or committed pages of its address space, returning them to the free state.

Reserved: The page has been reserved for future use. The range of addresses cannot be used by other allocation functions. The page is not accessible and has no physical storage associated with it. It is available to be committed. A process can use the VirtualAlloc or VirtualAllocEx function to reserve pages of its address space and later to commit the reserved pages. It can use VirtualFree or VirtualFreeEx to decommit committed pages and return them to the reserved state.

Committed: Physical storage is allocated for a reserved page, and access is controlled by one of the memory protection constants. The system initializes and loads each committed page into physical memory only during the first attempt to read or write to that page. When the process terminates, the system releases the storage for committed pages. A process can use VirtualAlloc or VirtualAllocEx to commit physical pages from a reserved region, or to simultaneously reserve and commit pages. The GlobalAlloc and LocalAlloc functions allocate committed pages with read/write access.
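
A minimal sketch of moving a region through these states with VirtualAlloc and VirtualFree might look like this; the 64 KB size and the function name are arbitrary choices for illustration.

#include <windows.h>

// Illustrative helper: reserve, commit, decommit, and release a small region.
void ReserveCommitAndFree()
{
    const SIZE_T size = 64 * 1024; // arbitrary 64 KB region

    // Reserve a range of addresses; no physical storage is used yet.
    LPVOID pReserved = ::VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (pReserved == NULL)
        return;

    // Commit the reserved pages so they can actually be read and written.
    LPVOID pCommitted = ::VirtualAlloc(pReserved, size, MEM_COMMIT, PAGE_READWRITE);
    if (pCommitted != NULL)
    {
        ::ZeroMemory(pCommitted, size); // the pages are now backed by physical storage

        // Decommit the pages, returning them to the reserved state.
        ::VirtualFree(pCommitted, size, MEM_DECOMMIT);
    }

    // Release the reservation, returning the pages to the free state.
    ::VirtualFree(pReserved, 0, MEM_RELEASE);
}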



Scope of Allocated Memory
All memory a process allocates by using the memory allocation functions ( HeapAlloc, VirtualAlloc, GlobalAlloc, or LocalAlloc) is accessible only to the process. However, memory allocated by a DLL is allocated in the address space of the process that called the DLL and is not accessible to other processes using the same DLL. To create shared memory, you must use file mapping.
Named file mapping provides an easy way to create a block of shared memory. A process can specify a name when it uses the CreateFileMapping function to create a file-mapping object. Other processes can specify the same name to either the CreateFileMapping or OpenFileMapping function to obtain a handle to the mapping object.
Each process specifies its handle to the file-mapping object in the MapViewOfFile function to map a view of the file into its own address space. The views of all processes for a single file-mapping object are mapped into the same sharable pages of physical storage. However, the virtual addresses of the mapped views can vary from one process to another, unless the MapViewOfFileEx function is used to map the view at a specified address. Although sharable, the pages of physical storage used for a mapped file view are not global; they are not accessible to processes that have not mapped a view of the file.
Any pages committed by mapping a view of a file are released when the last process with a view of the mapping object either terminates or unmaps its view by calling the UnmapViewOfFile function. At this time, the specified file (if any) associated with the mapping object is updated. A specified file can also be forced to update by calling the FlushViewOfFile function.
For more information, see File Mapping. For an example of shared memory in a DLL, see Using Shared Memory in a Dynamic-Link Library.
If multiple processes have write access to shared memory, you must synchronize access to the memory. For more information, see Synchronization.
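
For example, a minimal sketch of creating a named block of shared memory backed by the paging file might look like the following; the object name "Local\MySharedBlock", the 4 KB size, and the function name are just illustrative choices.

#include <windows.h>

// Illustrative helper: create a named shared-memory block and map a view of it.
void CreateSharedBlock()
{
    const DWORD size = 4096; // one page of shared memory

    // Create a named file-mapping object backed by the system paging file.
    HANDLE hMapping = ::CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                          PAGE_READWRITE, 0, size,
                                          TEXT("Local\\MySharedBlock"));
    if (hMapping == NULL)
        return;

    // Map a view of the shared memory into this process's address space.
    LPVOID pView = ::MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (pView != NULL)
    {
        // Another process can call OpenFileMapping with the same name and then
        // MapViewOfFile to see the same physical pages.
        ::CopyMemory(pView, "hello", 6);

        ::UnmapViewOfFile(pView);
    }

    ::CloseHandle(hMapping);
}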

Data Execution Prevention
Data Execution Prevention (DEP) is a system-level memory protection feature that is built into the operating system starting with Windows XP and Windows Server 2003. DEP enables the system to mark one or more pages of memory as non-executable. Marking memory regions as non-executable means that code cannot be run from that region of memory, which makes it harder to exploit buffer overruns.
DEP prevents code from being run from data pages such as the default heap, stacks, and memory pools. If an application attempts to run code from a data page that is protected, a memory access violation exception occurs, and if the exception is not handled, the calling process is terminated.
DEP is not intended to be a comprehensive defense against all exploits; it is intended to be another tool that you can use to secure your application.
How Data Execution Prevention Works
If an application attempts to run code from a protected page, the application receives an exception with the status code STATUS_ACCESS_VIOLATION. If your application must run code from a memory page, it must allocate and set the proper virtual memory protection attributes. The allocated memory must be marked PAGE_EXECUTE, PAGE_EXECUTE_READ, PAGE_EXECUTE_READWRITE, or PAGE_EXECUTE_WRITECOPY when allocating memory. Heap allocations made by calling the malloc and HeapAlloc functions are non-executable.
Applications cannot run code from the default process heap or the stack.
DEP is configured at system boot according to the no-execute page protection policy setting in the boot configuration data. An application can get the current policy setting by calling the GetSystemDEPPolicy function. Depending on the policy setting, an application can change the DEP setting for the current process by calling the SetProcessDEPPolicy function.
Programming Considerations
An application can use the VirtualAlloc function to allocate executable memory with the appropriate memory protection options. It is suggested that an application set, at a minimum, the PAGE_EXECUTE memory protection option. After the executable code is generated, it is recommended that the application set memory protections to disallow write access to the allocated memory. Applications can disallow write access to allocated memory by using the VirtualProtect function. Disallowing write access ensures maximum protection for executable regions of process address space. You should attempt to create applications that use the smallest executable address space possible, which minimizes the amount of memory that is exposed to memory exploitation.
You should also attempt to control the layout of your application's virtual memory and create executable regions. These executable regions should be located in a lower memory space than non-executable regions. By locating executable regions below non-executable regions, you can help prevent a buffer overflow from overflowing into the executable area of memory.
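
A minimal sketch of the pattern described above -- allocate writable memory, generate code into it, then use VirtualProtect to remove write access -- might look like this; the region size and function name are illustrative.

#include <windows.h>

// Illustrative helper: allocate a writable region, then make it execute-only-read.
void AllocateExecutableRegion()
{
    const SIZE_T size = 4096; // arbitrary size for the generated code

    // Allocate a committed, writable page for the generated code.
    LPVOID pCode = ::VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (pCode == NULL)
        return;

    // ... write the generated machine code into pCode here ...

    // Once the code is written, disallow further writes and allow execution.
    DWORD oldProtect = 0;
    if (::VirtualProtect(pCode, size, PAGE_EXECUTE_READ, &oldProtect))
    {
        // The region can now be executed but no longer written to.
    }

    ::VirtualFree(pCode, 0, MEM_RELEASE);
}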
Application Compatibility
Some application functionality is incompatible with DEP. Applications that perform dynamic code generation (such as Just-In-Time code generation) and do not explicitly mark generated code with execute permission may have compatibility issues on computers that are using DEP. Applications written to the Active Template Library (ATL) version 7.1 and earlier can attempt to execute code on pages marked as non-executable, which triggers an NX fault and terminates the application; for more information, see SetProcessDEPPolicy. Most applications that perform actions incompatible with DEP must be updated to function properly.
A small number of executable files and libraries may contain executable code in the data section of an image file. In some cases, applications may place small segments of code (commonly referred to as thunks) in the data sections. However, DEP marks sections of the image file that are loaded in memory as non-executable unless the section has the executable attribute applied.
Therefore, executable code in data sections should be migrated to a code section, or the data section that contains the executable code should be explicitly marked as executable. The executable attribute, IMAGE_SCN_MEM_EXECUTE, should be added to the Characteristics field of the corresponding section header for sections that contain executable code. For more information about adding attributes to a section, see the documentation included with your linker.

Memory Protection
Memory that belongs to a process is implicitly protected by its private virtual address space. In addition, Windows provides memory protection by using the virtual memory hardware. The implementation of this protection varies with the processor, for example, code pages in the address space of a process can be marked read-only and protected from modification by user-mode threads.
For the complete list of attributes, see Memory Protection Constants.
Copy-on-Write Protection
Copy-on-write protection is an optimization that allows multiple processes to map their virtual address spaces such that they share a physical page until one of the processes modifies the page. This is part of a technique called lazy evaluation, which allows the system to conserve physical memory and time by not performing an operation until absolutely necessary.
For example, suppose two processes load pages from the same DLL into their virtual memory spaces. These virtual memory pages are mapped to the same physical memory pages for both processes. As long as neither process writes to these pages, they can map to and share the same physical pages.

If Process 1 writes to one of these pages, the contents of the physical page are copied to another physical page and the virtual memory map is updated for Process 1. Both processes now have their own instance of the page in physical memory. Therefore, it is not possible for one process to write to a shared physical page and for the other process to see the changes.

Loading Applications and DLLs
When multiple instances of the same Windows-based application are loaded, each instance is run in its own protected virtual address space. However, their instance handles (hInstance) typically have the same value. This value represents the base address of the application in its virtual address space. If each instance can be loaded into its default base address, it can map to and share the same physical pages with the other instances, using copy-on-write protection. The system allows these instances to share the same physical pages until one of them modifies a page. If for some reason one of these instances cannot be loaded in the desired base address, it receives its own physical pages.
DLLs are created with a default base address. Every process that uses a DLL will try to load the DLL within its own address space at the default virtual address for the DLL. If multiple applications can load a DLL at its default virtual address, they can share the same physical pages for the DLL. If for some reason a process cannot load the DLL at the default address, it loads the DLL elsewhere. Copy-on-write protection then forces some of the DLL's pages to be copied into different physical pages for this process, because the fix-ups for jump instructions are written within the DLL's pages, and they will be different for this process. If the code section contains many references to the data section, this can cause the entire code section to be copied to new physical pages.


Memory Pools
The memory manager creates the following memory pools that the system uses to allocate memory: nonpaged pool and paged pool. Both memory pools are located in the region of the address space that is reserved for the system and mapped into the virtual address space of each process. The nonpaged pool consists of virtual memory addresses that are guaranteed to reside in physical memory as long as the corresponding kernel objects are allocated. The paged pool consists of virtual memory that can be paged in and out of the system. To improve performance, systems with a single processor have three paged pools, and multiprocessor systems have five paged pools.
The handles for kernel objects are stored in the paged pool, so the number of handles you can create is based on available memory.
The system records the limits and current values for its nonpaged pool, paged pool, and page file usage. For more information, see Memory Performance Information.
Memory Performance Information
Memory performance information is available from the memory manager through the system performance counters and through functions such as GetPerformanceInfo, GetProcessMemoryInfo, and GlobalMemoryStatusEx. Applications such as the Windows Task Manager, the Reliability and Performance Monitor, and the Process Explorer tool use performance counters to display memory information for the system and for individual processes.
This topic associates performance counters with the data returned by memory performance functions and the Windows Task Manager:
• System Memory Performance Information
• Process Memory Performance Information
System Memory Performance Information
The following table associates memory object performance counters with the data returned by the memory performance functions in the MEMORYSTATUSEX, PERFORMANCE_INFORMATION, and PROCESS_MEMORY_COUNTERS_EX structures, and with the corresponding information displayed by Task Manager.
Process Memory Performance Information
The following table associates process object performance counters with the data returned by the memory performance functions in the MEMORYSTATUSEX, PERFORMANCE_INFORMATION, and PROCESS_MEMORY_COUNTERS_EX structures, and with the corresponding information displayed by Task Manager.
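As a rough sketch of how these functions are called, the helper below prints a few of the counters mentioned above; the function name and the handful of fields chosen are illustrative, not a complete mapping.

#include <windows.h>
#include <psapi.h>   // GetPerformanceInfo, GetProcessMemoryInfo
#include <stdio.h>

#pragma comment(lib, "psapi.lib")

// Query system-wide figures with GlobalMemoryStatusEx and GetPerformanceInfo,
// and per-process figures with GetProcessMemoryInfo.
void PrintMemoryInfo()
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    if (GlobalMemoryStatusEx(&ms))
        printf("Physical memory in use: %lu%%\n", ms.dwMemoryLoad);

    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (GetPerformanceInfo(&pi, sizeof(pi)))
        printf("Commit total: %Iu pages of %Iu bytes\n",
               pi.CommitTotal, pi.PageSize);

    PROCESS_MEMORY_COUNTERS pmc = { 0 };
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        printf("Working set size: %Iu bytes\n", pmc.WorkingSetSize);
}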
Virtual Memory Functions
The virtual memory functions enable a process to manipulate or determine the status of pages in its virtual address space. They can perform the following operations:
• Reserve a range of a process's virtual address space. Reserving address space does not allocate any physical storage, but it prevents other allocation operations from using the specified range. It does not affect the virtual address spaces of other processes. Reserving pages prevents needless consumption of physical storage, while enabling a process to reserve a range of its address space into which a dynamic data structure can grow. The process can allocate physical storage for this space, as needed.
• Commit a range of reserved pages in a process's virtual address space so that physical storage (either in RAM or on disk) is accessible only to the allocating process.
• Specify read/write, read-only, or no access for a range of committed pages. This differs from the standard allocation functions that always allocate pages with read/write access.
• Free a range of reserved pages, making the range of virtual addresses available for subsequent allocation operations by the calling process.
• Decommit a range of committed pages, releasing their physical storage and making it available for subsequent allocation by any process.
• Lock one or more pages of committed memory into physical memory (RAM) so that the system cannot swap the pages out to the paging file.
• Obtain information about a range of pages in the virtual address space of the calling process or a specified process.
• Change the access protection for a specified range of committed pages in the virtual address space of the calling process or a specified process.
For more information, see the following topics.
• Allocating Virtual Memory
• Freeing Virtual Memory
• Working With Pages
• Memory Management Functions

Allocating Virtual Memory
The virtual memory functions manipulate pages of memory. The functions use the size of a page on the current computer to round off specified sizes and addresses.
The VirtualAlloc function performs one of the following operations:
• Reserves one or more free pages.
• Commits one or more reserved pages.
• Reserves and commits one or more free pages.
You can specify the starting address of the pages to be reserved or committed, or you can allow the system to determine the address. The function rounds the specified address to the appropriate page boundary. Reserved pages are not accessible, but committed pages can be allocated with PAGE_READWRITE, PAGE_READONLY, or PAGE_NOACCESS access. When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page. You can use normal pointer references to access memory committed by the VirtualAlloc function.
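A minimal sketch of the reserve-then-commit pattern described above; the sizes and the helper name are arbitrary.

#include <windows.h>

// Reserve a large range up front, then commit pages only as the data
// structure actually grows.
void VirtualAllocExample()
{
    const SIZE_T reserveSize = 1024 * 1024;   // 1 MB of address space
    const SIZE_T commitSize  = 64 * 1024;     // commit the first 64 KB

    // Reserve address space only; no physical storage is charged yet.
    char* base = (char*)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL)
        return;

    // Commit the first part of the reservation with read/write access.
    if (VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE))
    {
        base[0] = 'A';   // committed pages can be used through normal pointers
    }

    // Release the entire reservation (this also decommits committed pages).
    VirtualFree(base, 0, MEM_RELEASE);
}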

Freeing Virtual Memory
The VirtualFree function decommits and releases pages according to the following rules:
• Decommits one or more committed pages, changing the state of the pages to reserved. Decommitting pages releases the physical storage associated with the pages, making it available to be allocated by any process. Any block of committed pages can be decommitted.
• Releases a block of one or more reserved pages, changing the state of the pages to free. Releasing a block of pages makes the range of reserved addresses available to be allocated by the process. Reserved pages can be released only by freeing the entire block that was initially reserved by VirtualAlloc.
• Decommits and releases a block of one or more committed pages simultaneously, changing the state of the pages to free. The specified block must include the entire block initially reserved by VirtualAlloc, and all of the pages must be currently committed.
After a memory block is released or decommitted, you can never refer to it again. Any information that may have been in that memory is gone forever. Attempting to read from or write to a free page results in an access violation exception. If you require information, do not decommit or free memory containing that information.
To specify that the data in a memory range is no longer of interest, call VirtualAlloc with MEM_RESET. The pages will not be read from or written to the paging file. However, the memory block can be used again later.
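A small sketch of these rules, plus MEM_RESET; the sizes and the helper name are illustrative.

#include <windows.h>

// Demonstrates decommitting part of a block, resetting a page, and releasing
// the whole reservation.
void FreeingExample()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    const SIZE_T pageSize = si.dwPageSize;

    // Reserve and commit four pages.
    char* p = (char*)VirtualAlloc(NULL, 4 * pageSize,
                                  MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return;

    // Decommit the last page only: it returns to the reserved state.
    VirtualFree(p + 3 * pageSize, pageSize, MEM_DECOMMIT);

    // Tell the system the data in the first page is no longer of interest;
    // the page stays committed but will not be written to the paging file.
    VirtualAlloc(p, pageSize, MEM_RESET, PAGE_READWRITE);

    // Release the whole block: the size must be 0 and the address must be
    // the base address returned by the original reservation.
    VirtualFree(p, 0, MEM_RELEASE);
}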

Working with Pages
To determine the size of a page on the current computer, use the GetSystemInfo function.
The VirtualQuery and VirtualQueryEx functions return information about a region of consecutive pages beginning at a specified address in the address space of a process. VirtualQuery returns information about memory in the calling process. VirtualQueryEx returns information about memory in a specified process and is used to support debuggers that need information about a process being debugged. The region of pages is bounded by the specified address rounded down to the nearest page boundary. It extends through all subsequent pages with the following attributes in common:
• The state of all pages is the same: either committed, reserved, or free.
• If the initial page is not free, all pages in the region are part of the same initial allocation of pages that were reserved by a call to VirtualAlloc.
• The access protection of all pages is the same (that is, PAGE_READONLY, PAGE_READWRITE, or PAGE_NOACCESS).
The VirtualLock function enables a process to lock one or more pages of committed memory into physical memory (RAM), preventing the system from swapping the pages out to the paging file. It can be used to ensure that critical data is accessible without disk access. Locking pages into memory is dangerous because it restricts the system's ability to manage memory. Excessive use of VirtualLock can degrade system performance by causing executable code to be swapped out to the paging file. The VirtualUnlock function unlocks memory locked by VirtualLock.
The VirtualProtect function enables a process to modify the access protection of any committed page in the address space of a process. For example, a process can allocate read/write pages to store sensitive data, and then it can change the access to read only or no access to protect against accidental overwriting. VirtualProtect is typically used with pages allocated by VirtualAlloc, but it also works with pages committed by any of the other allocation functions. However, VirtualProtect changes the protection of entire pages, and pointers returned by the other functions are not necessarily aligned on page boundaries. The VirtualProtectEx function is similar to VirtualProtect, except it changes the protection of memory in a specified process. Changing the protection is useful to debuggers in accessing the memory of a process being debugged.
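A brief sketch that ties GetSystemInfo, VirtualProtect, and VirtualQuery together; the helper name and the fields printed are illustrative.

#include <windows.h>
#include <stdio.h>

// Allocate a page, write to it, tighten its protection, then inspect it.
void QueryAndProtectExample()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);          // page size on the current computer

    char* p = (char*)VirtualAlloc(NULL, si.dwPageSize,
                                  MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return;

    p[0] = 'x';                  // write the "sensitive" data

    // Make the page read-only to guard against accidental overwrites.
    DWORD oldProtect;
    VirtualProtect(p, si.dwPageSize, PAGE_READONLY, &oldProtect);

    // Ask the memory manager what it now knows about the region.
    MEMORY_BASIC_INFORMATION mbi;
    if (VirtualQuery(p, &mbi, sizeof(mbi)))
        printf("State: 0x%lx  Protect: 0x%lx  RegionSize: %Iu\n",
               mbi.State, mbi.Protect, mbi.RegionSize);

    VirtualFree(p, 0, MEM_RELEASE);
}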

Heap Functions
The heap functions enable a process to create a private heap, a block of one or more pages in the address space of the calling process. The process can then use a separate set of functions to manage the memory in that heap. There is no difference between memory allocated from a private heap and that allocated by using the other memory allocation functions. For a complete list of functions, see the table in Memory Management Functions.
The HeapCreate function creates a private heap object from which the calling process can allocate memory blocks by using the HeapAlloc function. HeapCreate specifies both an initial size and a maximum size for the heap. The initial size determines the number of committed, read/write pages initially allocated for the heap. The maximum size determines the total number of reserved pages. These pages create a contiguous block in the virtual address space of a process into which the heap can grow. Additional pages are automatically committed from this reserved space if requests by HeapAlloc exceed the current size of committed pages, assuming that the physical storage for it is available. Once the pages are committed, they are not decommitted until the process is terminated or until the heap is destroyed by calling the HeapDestroy function.
The memory of a private heap object is accessible only to the process that created it. If a dynamic-link library (DLL) creates a private heap, it does so in the address space of the process that called the DLL. It is accessible only to that process.
The HeapAlloc function allocates a specified number of bytes from a private heap and returns a pointer to the allocated block. This pointer can be used in the HeapFree, HeapReAlloc, HeapSize, and HeapValidate functions.
Memory allocated by HeapAlloc is not movable. The address returned by HeapAlloc is valid until the memory block is freed or reallocated; the memory block does not need to be locked. Because the system cannot compact a private heap, it can become fragmented.
Applications that allocate large amounts of memory in various allocation sizes can use the low-fragmentation heap to reduce heap fragmentation.
A possible use for the heap functions is to create a private heap when a process starts up, specifying an initial size sufficient to satisfy the memory requirements of the process. If the call to the HeapCreate function fails, the process can terminate or notify the user of the memory shortage; if it succeeds, however, the process is assured of having the memory it needs.
Memory requested by HeapCreate may or may not be contiguous. Memory allocated within a heap by HeapAlloc is contiguous. You should not write to or read from memory in a heap except that allocated by HeapAlloc, nor should you assume any relationship between two areas of memory allocated by HeapAlloc.
You should not refer in any way to memory that has been freed by HeapFree. After the memory is freed, any information that may have been in it is gone forever. If you require information, do not free memory containing the information. Function calls that return information about memory (such as HeapSize) may not be used with freed memory, as they may return bogus data.
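A short sketch of a private-heap lifecycle under these rules; the sizes and the helper name are arbitrary.

#include <windows.h>
#include <string.h>

// Create a private heap, allocate and grow a block, then clean up.
// A maximum size of 0 makes the heap growable.
void PrivateHeapExample()
{
    HANDLE hHeap = HeapCreate(0, 0x10000 /* initial size */, 0 /* growable */);
    if (hHeap == NULL)
        return;

    char* buf = (char*)HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 128);
    if (buf != NULL)
    {
        strcpy_s(buf, 128, "hello");

        // HeapReAlloc may move the block; always use the returned pointer.
        char* bigger = (char*)HeapReAlloc(hHeap, 0, buf, 256);
        if (bigger != NULL)
            buf = bigger;

        HeapFree(hHeap, 0, buf);
    }

    // Destroying the heap releases all of its pages at once.
    HeapDestroy(hHeap);
}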

Low-fragmentation Heap
Heap fragmentation occurs when available memory is broken into small, non-contiguous blocks. When this happens, memory allocation can fail even though there is enough total memory in the heap to satisfy the request, because no single block of memory is large enough.
For applications that have a low memory usage, the standard heap is adequate; allocations will not fail due to heap fragmentation. However, if the application allocates memory frequently and uses a variety of allocation sizes, memory allocation can fail due to heap fragmentation.
Windows XP and Windows Server 2003 introduce the low-fragmentation heap (LFH). This mechanism is built on top of the existing heap, but as the name implies, it reduces fragmentation of the heap. Applications that allocate large amounts of memory in various allocation sizes should use the LFH. Note that the LFH can allocate blocks up to 16 KB. For blocks greater than this, the LFH uses the standard heap.
To use the LFH in your application, call the HeapCreate or GetProcessHeap function to obtain a handle to a standard heap. Then call the HeapSetInformation function to enable the LFH. If the call succeeds, memory is allocated and freed in the LFH when you call the heap API. Otherwise, the memory is allocated in the standard heap. Note that it is not possible to enable the LFH if the heap was created with HEAP_NO_SERIALIZE or if you are using certain gflags options related to the heap.
The LFH avoids fragmentation by managing all allocated blocks in 128 predetermined different block-size ranges. Each of the 128 size ranges is called a bucket. When an application needs to allocate memory from the heap, the LFH chooses the bucket that can allocate the smallest block large enough to contain the requested size. The smallest block that can be allocated is 8 bytes.
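A minimal sketch of enabling the LFH on the process heap as described above; error handling beyond the return value is omitted.

#include <windows.h>

// HeapSetInformation with HeapCompatibilityInformation and a value of 2
// selects the low-fragmentation heap for the given heap handle.
BOOL EnableLowFragmentationHeap()
{
    HANDLE hHeap = GetProcessHeap();
    ULONG  heapInfo = 2;   // 2 = low-fragmentation heap

    return HeapSetInformation(hHeap, HeapCompatibilityInformation,
                              &heapInfo, sizeof(heapInfo));
}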

Message Reflection for Windows Controls

What Is Message Reflection?

Windows controls frequently send notification messages to their parent windows. For instance, many controls send a control color notification message (WM_CTLCOLOR or one of its variants) to their parent to allow the parent to supply a brush for painting the background of the control.

In Windows and in MFC before version 4.0, the parent window, often a dialog box, is responsible for handling these messages. This means that the code for handling the message needs to be in the parent window's class and that it has to be duplicated in every class that needs to handle that message. In the case above, every dialog box that wanted controls with custom backgrounds would have to handle the control color notification message. It would be much easier to reuse code if a control class could be written that would handle its own background color.

In MFC 4.0, the old mechanism still works — parent windows can handle notification messages. In addition, however, MFC 4.0 facilitates reuse by providing a feature called "message reflection" that allows these notification messages to be handled in either the child control window or the parent window, or in both. In the control background color example, you can now write a control class that sets its own background color by handling the reflected WM_CTLCOLOR message — all without relying on the parent. (Note that since message reflection is implemented by MFC, not by Windows, the parent window class must be derived from CWnd for message reflection to work.)
Older versions of MFC did something similar to message reflection by providing virtual functions for a few messages, such as messages for owner-drawn list boxes (WM_DRAWITEM, and so on). The new message reflection mechanism is generalized and consistent.

Message reflection is backward compatible with code written for versions of MFC before 4.0.
If you have supplied a handler for a specific message, or for a range of messages, in your parent window's class, it will override reflected message handlers for the same message provided you don't call the base class handler function in your own handler. For example, if you handle WM_CTLCOLOR in your dialog box class, your handling will override any reflected message handlers.

If, in your parent window class, you supply a handler for a specific WM_NOTIFY message or a range of WM_NOTIFY messages, your handler will be called only if the child control sending those messages does not have a reflected message handler through ON_NOTIFY_REFLECT(). If you use ON_NOTIFY_REFLECT_EX() in your message map, your message handler may or may not allow the parent window to handle the message. If the handler returns FALSE, the message will be handled by the parent as well, while a call that returns TRUE does not allow the parent to handle it. Note that the reflected message is handled before the notification message.

When a WM_NOTIFY message is sent, the control is offered the first chance to handle it. If any other reflected message is sent, the parent window has the first chance to handle it and the control will receive the reflected message. To do so, it will need a handler function and an appropriate entry in the control's class message map.

The message-map macro for reflected messages is slightly different than for regular notifications: it has _REFLECT appended to its usual name. For instance, to handle a WM_NOTIFY message in the parent, you use the macro ON_NOTIFY in the parent's message map. To handle the reflected message in the child control, use the ON_NOTIFY_REFLECT macro in the child control's message map. In some cases, the parameters are different, as well. Note that ClassWizard can usually add the message-map entries for you and provide skeleton function implementations with correct parameters.


Message-Map Entries and Handler Function Prototypes for Reflected Messages
To handle a reflected control notification message, use the message-map macros and function prototypes listed in the table below.

ClassWizard can usually add these message-map entries for you and provide skeleton function implementations. See Defining a Message Handler for a Reflected Message for information about how to define handlers for reflected messages.

To convert from the message name to the reflected macro name, prepend ON_ and append _REFLECT. For example, WM_CTLCOLOR becomes ON_WM_CTLCOLOR_REFLECT.

The three exceptions to the rule above are as follows:
• The macro for WM_COMMAND notifications is ON_CONTROL_REFLECT.
• The macro for WM_NOTIFY reflections is ON_NOTIFY_REFLECT.
• The macro for ON_UPDATE_COMMAND_UI reflections is ON_UPDATE_COMMAND_UI_REFLECT.

In each of the above special cases, you must specify the name of the handler member function. In the other cases, you must use the standard name for your handler function.

The meanings of the parameters and return values of the functions are documented under either the function name or the function name with On prepended. For instance, CtlColor is documented in OnCtlColor. Several reflected message handlers need fewer parameters than the similar handlers in a parent window. Just match the names in the table below with the names of the formal parameters in the documentation.

Handling Reflected Messages: An Example of a Reusable Control

Here is a simple example that creates a reusable control called CYellowEdit. The control works the same as a regular edit control, except that it displays black text on a yellow background. It would be easy to add member functions that let the CYellowEdit control display different colors.


1. Create a new dialog box in an existing application.

You must have an application in which to develop the reusable control. If you don't have an existing application to use, create a dialog-based application using AppWizard.

2. With your project loaded into Visual C++, use ClassWizard to create a new class called CYellowEdit based on CEdit.

3. Add three member variables to your CYellowEdit class. The first two will be COLORREF variables to hold the text color and the background color. The third will be a CBrush object that will hold the brush for painting the background. The CBrush object allows you to create the brush once, merely referencing it after that, and to destroy the brush automatically when the CYellowEdit control is destroyed.
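For reference, after this step the class declaration might look roughly like the following sketch (ClassWizard generates the surrounding boilerplate; the member names match the constructor in step 4).

// CYellowEdit.h -- sketch of the class after step 3
class CYellowEdit : public CEdit
{
public:
    CYellowEdit();

protected:
    COLORREF m_clrText;    // text color
    COLORREF m_clrBkgnd;   // background color
    CBrush   m_brBkgnd;    // brush used to paint the background

    DECLARE_MESSAGE_MAP()
};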

4. Initialize the member variables by writing the constructor as follows:

CYellowEdit::CYellowEdit()
{
    m_clrText = RGB( 0, 0, 0 );        // black text
    m_clrBkgnd = RGB( 255, 255, 0 );   // yellow background
    m_brBkgnd.CreateSolidBrush( m_clrBkgnd );
}

5. Using ClassWizard, add a handler for the reflected WM_CTLCOLOR message to your CYellowEdit class. Note that the equal sign in front of the message name in the list of messages you can handle indicates that the message is reflected. This is described in Defining a Message Handler for a Reflected Message.

ClassWizard adds the following message-map macro and skeleton function for you:

ON_WM_CTLCOLOR_REFLECT()

// Note: other code will be in between....

HBRUSH CYellowEdit::CtlColor(CDC* pDC, UINT nCtlColor)
{
    // TODO: Change any attributes of the DC here

    // TODO: Return a non-NULL brush if the
    // parent's handler should not be called
    return NULL;
}

6. Replace the body of the function with the following code. The code specifies the text color, the text background color, and the background color for the rest of the control.

    pDC->SetTextColor( m_clrText );   // text
    pDC->SetBkColor( m_clrBkgnd );    // text bkgnd
    return m_brBkgnd;                 // ctl bkgnd


7. Create an edit control in your dialog box, then attach it to a member variable by double-clicking the edit control while holding down the CTRL key. In the Add Member Variable dialog box, finish the variable name and choose "Control" for the category, then "CYellowEdit" for the variable type. Don't forget to set the tab order in the dialog box. Also, be sure to include the header file for the CYellowEdit control in your dialog box's header file.

8. Build and run your application. The edit control will have a yellow background.

Monday, March 8, 2010

Writing a Bridge DLL between ATL COM and TCL

I recently had a requirement in which a TCL script had to load a COM DLL and invoke its interfaces. As a first-time user of TCL, I learned that the load command can load ordinary DLLs (not COM DLLs) into TCL and call their exported functions. The question was how to call the interfaces of a COM DLL. Drawing on what I knew, I chose to write a bridge DLL between the COM DLL and the TCL script.

I illustrate this with an example.

Let's say I have a COM class CMyClass with an interface IMyClass that has a method MyMethod:

HRESULT MyMethod([in]int A,[in]int B, [out] int *iRetVal);

Creating a Bridge dll

Create a new Win32 dynamic link library project. I am naming it 'TclComBridgeDll'.

Add a C++ source file and a header file named TclComBridgeDll.cpp and TclComBridgeDll.h.

In stdafx.h, add:

#include "generic\tcl.h"

The tcl.h header is available in the generic folder of the TCL source, which can be downloaded from http://prdownloads.sourceforge.net/tcl/tcl858-src.zip or http://www.tcl.tk/software/tcltk/download.html.

Add the following in TclComBridgeDll.h

#ifdef TCLCOMBRIDGEDLL_EXPORTS
#define TCLCOMBRIDGEDLL_API __declspec(dllexport)
#else
#define TCLCOMBRIDGEDLL_API __declspec(dllimport)
#endif
//To call MyMethod method in the COM dll interface
int CallMyMethod(ClientData data, Tcl_Interp *interp, int objc, Tcl_Obj *CONST objv[]);
extern "C"
{
TCLCOMBRIDGEDLL_API int Tclsupport_Init(Tcl_Interp *interp);
}

Now add the following lines to the implementation file, TclComBridgeDll.cpp.

#include "stdafx.h"
#include "TclComBridgeDll.h"
#import "Debug\MyAtlComDll.dll" rename_namespace("namespacename")
using namespace namespacename;

int CallMyMethod(ClientData data, Tcl_Interp *interp, int objc, Tcl_Obj *CONST objv[])
{
    // Check the number of arguments (command name plus two operands)
    if (objc != 3)
    {
        Tcl_WrongNumArgs(interp, 1, objv, "arg arg");
        return TCL_ERROR;
    }

    // Convert the two arguments to integers
    int length1, length2;
    char *v1 = Tcl_GetStringFromObj(objv[1], &length1);
    char *v2 = Tcl_GetStringFromObj(objv[2], &length2);
    int nA = atoi(v1);
    int nB = atoi(v2);

    int result = 0;

    // Create the COM object and call the interface method
    ::CoInitialize(NULL);
    IMyClass *pImycls = NULL;
    HRESULT hr = ::CoCreateInstance(__uuidof(MyClass), NULL,
                                    CLSCTX_ALL, __uuidof(IMyClass),
                                    (void**)(&pImycls));
    if (SUCCEEDED(hr))
    {
        int nTot = 0;
        pImycls->MyMethod(nA, nB, &nTot);
        pImycls->Release();
        result = nTot;
    }
    ::CoUninitialize();

    // Hand the result back to the TCL interpreter
    Tcl_SetObjResult(interp, Tcl_NewIntObj(result));
    return TCL_OK;
}
// Note the casing on the _Init function name
TCLCOMBRIDGEDLL_API int Tclsupport_Init(Tcl_Interp *interp)
{
    // Link with the stubs library to make the extension as portable as possible
    if (Tcl_InitStubs(interp, "8.1", 0) == NULL)
    {
        return TCL_ERROR;
    }

    // Declare which package and version is provided by this C code
    if (Tcl_PkgProvide(interp, "CallMyMethod", "1.0") != TCL_OK)
    {
        return TCL_ERROR;
    }

    // Create a command
    Tcl_CreateObjCommand(interp, "CallMyMethod", CallMyMethod,
                         (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);
    return TCL_OK;
}

Add tclstub86.lib to your linker input. It is available in your downloaded tcl858-src.zip.

Build the solution, and it will build TclComBridgeDll.dll

Now your bridge DLL is ready to be loaded from a TCL script. Because the initialization function is named Tclsupport_Init, pass the package name explicitly to the load command, then call the new command:

load TclComBridgeDll.dll Tclsupport
CallMyMethod 10 20

Tuesday, January 5, 2010

Enabling Default Reply

Recently I came across a situation in which I needed pop-up message boxes to be answered with their default button automatically, without waiting for the user. A few solutions came to mind, such as finding the top-level window and sending it window messages, and so on...

On my way to achieving this, I found an article about enabling the default reply in the Microsoft Developer Network documentation for WinEmbedded5. I applied the same concept on my system while the application was running.
Why would you need this kind of mechanism?
Consider a line-drawing application. On the mouse's left-button-down event you record the starting point (pX, pY) and render in the client area; on left-button-up you record the end point (qX, qY) and call the line-drawing method. Now imagine that while you are rendering in the client area, a message box pops up, such as a net send message box or an Outlook mail alert, essentially a system modal dialog. You have to respond to it, and you lose the focus and your drawing coordinates. The steps below are helpful when you are doing such operations.

User32 provides the functionality to intercept every MessageBox function and automatically choose the default button. Not all message box windows are produced through the standard MessageBox function; therefore, this feature might not operate on all windows.

To instruct NTUSER to intercept messages, an additional key and values are required in the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control registry key. You might need to create the registry key and values on your system, as follows.

To set up your system to automatically reply to a message box without displaying it

1. In the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control registry key, create a new key named Error Message Instrument.
2. Under the Error Message Instrument key, add the values of type REG_DWORD from the following table.

Value               Data         Description
EnableDefaultReply  0x00000001   Enables the Enable Default Reply feature.
EnableLogging       0x00000001   Enables event logging.


3. Under the Error Message Instrument registry key, add one of the LogSeverity values of type REG_DWORD from the following table.

Value        Data                                                    Description
LogSeverity  0x00000000 (EMI_SEVERITY_ALL)                           All message box events are logged.
LogSeverity  0x00000001 (EMI_SEVERITY_USER)                          Message box events with the dwStyle parameter defined are logged, including MB_USERICON, MB_ICONASTERISK, MB_ICONQUESTION, MB_ICONEXCLAMATION, and MB_ICONHAND.
LogSeverity  0x00000002 (EMI_SEVERITY_INFORMATION)                   Errors, warnings, questions, and information are logged. Message box events with no dwStyle parameter or dwStyle = MB_ICONUSER are not logged.
LogSeverity  0x00000003 (EMI_SEVERITY_QUESTION)                      Errors, warnings, and questions are logged. Information, events with no style, and user-defined severity levels are not logged.
LogSeverity  0x00000004 (EMI_SEVERITY_WARNING)                       Only errors and warnings are logged.
LogSeverity  0x00000005 (EMI_SEVERITY_ERROR, EMI_SEVERITY_MAX_VALUE) Only errors are logged.


To log message information to the event log

1. In the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\Application registry key, create a new key named Error Instrument.
2. Under the Error Instrument key, add the registry values from the following table.

Value             Type           Data
TypesSupported    REG_DWORD      0x00000007
EventMessageFile  REG_EXPAND_SZ  %SystemRoot%\System32\User32.dll

You must reboot your target system for the changes to take effect.
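If you prefer to make the first set of changes from code rather than by hand, a hypothetical helper along these lines could create the Error Message Instrument key and its values (writing under HKEY_LOCAL_MACHINE requires administrative rights; the function name is mine, and only the first procedure is covered).

#include <windows.h>

// Create the "Error Message Instrument" key and the values described above.
BOOL EnableDefaultReply()
{
    HKEY hKey;
    LONG rc = RegCreateKeyEx(HKEY_LOCAL_MACHINE,
                             TEXT("System\\CurrentControlSet\\Control\\Error Message Instrument"),
                             0, NULL, REG_OPTION_NON_VOLATILE, KEY_SET_VALUE,
                             NULL, &hKey, NULL);
    if (rc != ERROR_SUCCESS)
        return FALSE;

    DWORD one = 1;        // enable the feature and event logging
    DWORD severity = 0;   // EMI_SEVERITY_ALL: log every message box event

    RegSetValueEx(hKey, TEXT("EnableDefaultReply"), 0, REG_DWORD,
                  (const BYTE*)&one, sizeof(one));
    RegSetValueEx(hKey, TEXT("EnableLogging"), 0, REG_DWORD,
                  (const BYTE*)&one, sizeof(one));
    RegSetValueEx(hKey, TEXT("LogSeverity"), 0, REG_DWORD,
                  (const BYTE*)&severity, sizeof(severity));

    RegCloseKey(hKey);
    return TRUE;          // remember: a reboot is still required
}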

Note: Make sure to revert these changes once they are no longer needed by your application; otherwise you will lose the MessageBox alerts.

Hope you enjoyed reading this :)
yours,
Sreedhar