Thursday, July 30, 2009

Isn't that Impossible?

Credits: Dharmesh Mehta
The contents of this article are the original work of Dharmesh M. Mehta, taken verbatim from his blog posting at . I liked the way Dharmesh captured, in a surrealistic way, the common arguments people make for not implementing security, and hence I am posting it here too.
Permissions from original author: Pending

Not every organization and its people know about software security issues, nor do they respect them.

In most of my workshops conducted with developers for secure coding, I often hear the proclamation, "Isn't that Impossible..." and then the drama starts...

Many developers do not understand how the web works
• “Users can’t change the value of a drop down”
• “That option is greyed out”
• “We don’t even link to that page”

Many developers doubt attacker motivation
• “You are using specialized tools; our users don’t use those”
• “Why would anyone put a string that long into that field?”
• “It’s just an internal application” (in an enterprise with 80k employees and a flat network)
• “This application has a small user community; we know who is authenticated to it” (huh?)
• “You have been doing this a long time, nobody else would be able to find that in a reasonable time frame!”

Many developers do not understand the difference between network and application security
• “That application is behind 3 firewalls!”
• “We’re using SSL”
• “That system isn’t even exposed to the outside”

Many developers do not understand a vulnerability class
• “That’s just an error message” (usually related to SQL Injection)
• “You can’t even fit a valid SQL statement in 10 characters”

Many developers cite incorrect or inadequate architectural mitigations
• “You can’t execute code from the stack, it is read-only on all Intel processors”
• “Our WAF protects against XSS attacks” (well, clearly it didn’t protect against the one I’m showing you)
Many developers cite questionable tradeoffs
• “Calculating a hash value will be far too expensive” (meanwhile, they’re issuing dozens of Ajax requests every time a user clicks a link)

There are dozens more. The point is that developer education for security is one of the largest gaps in most SDLCs. How can you expect your developers to write secure code when you don’t teach them this stuff? You can only treat the symptoms for so long; eventually you have to attack the root cause.

Friday, July 24, 2009

PDB files.

Most developers realize that PDB files are something that help us debug, but that's about it. Don't feel bad if you don't know what's going on with PDB files because while there is documentation out there, it's scattered around and much of it is for compiler and debugger writers. While it's extremely cool and interesting to write compilers and debuggers, that's probably not a normal developer's job.

What I want to do here is to put in one place what everyone doing development on a Microsoft operating system has to know when it comes to PDB files. This information also applies to both native and managed developers, though I will mention a trick specific to managed developers. I'll start by talking about PDB file storage as well as the contents. Since the debugger uses the PDB files, I'll discuss exactly how the debugger finds the right PDB file for your binary. Finally, I'll talk about how the debugger looks for the source files when debugging and show you a favorite trick related to how the debugger finds source code.

Before we jump in, I need to define two important terms. A build you do on your development machine is a private build. A build done on a build machine is a public build. This is an important distinction because debugging binaries you build locally is easy; it is the public builds that cause problems.

The most important thing all developers need to know: PDB files are as important as source code! Yes, that's red and bold on purpose. Too often, nobody can find the PDB files for the build running on a production server. Without the matching PDB files you have made your debugging challenge nearly impossible. With a huge amount of effort, you can disassemble the binary and find the problems without the right PDB files, but it will save you a lot of effort if you have the right PDB files in the first place.

John Cunningham, the development manager for all things diagnostics on Visual Studio, said at the 2008 PDC, "Love, hold, and protect your PDBs." At a minimum, every development shop must set up a Symbol Server. Briefly, a Symbol Server stores the PDBs and binaries for all your public builds. That way, no matter which build someone reports a crash or problem against, you have the exact matching PDB file for that public build available to the debugger. Both Visual Studio and WinDBG know how to access Symbol Servers, and if the binary is from a public build, the debugger will get the matching PDB file automatically.

Most of you reading this will also need to do one preparatory step before putting your PDB files in the Symbol Server. That step is to run the Source Server tools across your public PDB files, which is called source indexing. The indexing embeds the version control commands to pull the exact source file used in that particular public build. Thus, when you are debugging that public build you never have to worry about finding the source file for that build. If you're a one or two person team, you can sometimes live without the Source Server step.

The rest of this entry will assume you have set up Symbol Server and Source Server indexing. One good piece of news for those of you who will be using TFS 2010: out of the box, the build server will include the build tasks for source indexing and Symbol Server copying as part of your build.

One complaint against setting up a Symbol Server is that the team's software is too big and complex. There's no way your software is bigger and more complex than everything Microsoft does. They source index and store every single build of every product they ship in a Symbol Server. That means everything from Windows, to Office, to SQL, to games and everything in between is stored in one central location.

My guess is that Building 34 in Redmond is nothing but SAN drives to hold all of those files and everyone in that building is there to support those SANs. It's so amazing to be able to debug anything inside Microsoft and you never have to worry about symbols or source (provided you have appropriate rights to that source tree).

With the key infrastructure discussion out of the way, let me turn to what's in a PDB and how the debugger finds them. The actual file format of a PDB file is a closely guarded secret but Microsoft provides APIs to return the data for debuggers.

A native C++ PDB file contains quite a bit of information:
a) Public, private, and static function addresses
b) Global variable names and addresses
c) Parameter and local variable names and offsets where to find them on the stack
d) Type data consisting of class, structure, and data definitions
e) Frame Pointer Omission (FPO) data, which is the key to native stack walking on x86
f) Source file names and their lines

A .NET PDB contains only two pieces of information: the source file names with their line information, and the local variable names. All the other information is already in the .NET metadata, so there is no need to duplicate it in a PDB file.

When you load a module into the process address space, the debugger uses two pieces of information to find the matching PDB file. The first is obviously the name of the file. If you load ZZZ.DLL, the debugger looks for ZZZ.PDB. The extremely important part is how the debugger knows this is the exact matching PDB file for this binary. That's done through a GUID that's embedded in both the PDB file and the binary. If the GUID does not match, you certainly won't debug the module at the source code level.

The .NET compiler, and for native code the linker, puts this GUID into the binary and the PDB. Since the act of compiling creates this GUID, stop and think about this for a moment. If you have yesterday's build and did not save the PDB file, will you ever be able to debug that binary again? No! This is why it is so critical to save your PDB files for every build. Because I know you're thinking it, I'll go ahead and answer the question already forming in your mind: no, there's no way to change the GUID.

However, you can look at the GUID value in your binary using DUMPBIN, a command line tool that comes with Visual Studio. DUMPBIN can list all the pieces of your Portable Executable (PE) files. To run it, open the Visual Studio 2008 Command Prompt from the Programs menu, as you will need the PATH environment variable set in order to find DUMPBIN.EXE.

There are numerous command line options to DUMPBIN, but the one that shows us the build GUID is /HEADERS. The important piece to us is the Debug Directories output:
Debug Directories

  Time      Type  Size      RVA       Pointer
  --------  ----  --------  --------  --------
  4A03CA66  cv    4A        000025C4  7C4

Format: RSDS, {4B46C704-B6DE-44B2-B8F5-A200A7E541B0}, 1, C:\junk\stuff\HelloWorld\obj\Debug\HelloWorld.pdb
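If you want to pull the same GUID, age, and PDB path out programmatically, the CodeView record that DUMPBIN labels RSDS has a simple, publicly documented layout. Here is a minimal Python sketch (a hypothetical helper written for this post, not any Microsoft tool) that decodes such a record once you have located it via the PE debug directory:

```python
import struct
import uuid

def parse_rsds(blob: bytes):
    """Decode an RSDS (CodeView 7.0) debug record: a 4-byte 'RSDS'
    signature, a 16-byte GUID in Microsoft's mixed-endian layout,
    a 4-byte age counter, then a NUL-terminated PDB path."""
    if blob[:4] != b"RSDS":
        raise ValueError("not an RSDS CodeView record")
    guid = uuid.UUID(bytes_le=blob[4:20])        # same GUID that DUMPBIN prints
    (age,) = struct.unpack_from("<I", blob, 20)
    path = blob[24:blob.index(b"\x00", 24)].decode("ascii", "replace")
    return guid, age, path
```

Locating the record in the first place means walking the PE headers down to the debug data directory; tools like DUMPBIN or the DbgHelp API do that part for you.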

With the knowledge of how the debugger determines the correctly matching PDB file, I want to talk about where the debugger looks for PDB files. You can watch the search order yourself by looking at the Symbol File column in the Visual Studio Modules window while debugging. The first place searched is the directory the binary was loaded from. If the PDB file is not there, the second place the debugger looks is the hard-coded build directory embedded in the Debug Directories of the PE file. If you look at the output above, you see the full path C:\JUNK\STUFF\HELLOWORLD\OBJ\DEBUG\HELLOWORLD.PDB. (The MSBUILD tasks for building .NET applications actually build to the OBJ\ directory and copy the output to the DEBUG or RELEASE directory only on a successful build.) If the PDB file is not in the first two locations, and a Symbol Server is set up on the machine, the debugger looks in the Symbol Server cache directory. Finally, if the debugger does not find the PDB file in the Symbol Server cache directory, it looks in the Symbol Server itself. This search order is why your local builds and public build parts never conflict.
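The four-step search order just described can be modeled in a few lines. This is purely an illustrative sketch (the function and its parameters are invented for this post, not any debugger's actual API):

```python
from pathlib import Path

def find_pdb(binary_path, embedded_pdb_path, cache_dir, server_fetch):
    """Toy model of the debugger's PDB search order described above."""
    candidates = [
        Path(binary_path).with_suffix(".pdb"),           # 1. directory the binary loaded from
        Path(embedded_pdb_path),                         # 2. build path embedded in the PE file
        Path(cache_dir) / Path(embedded_pdb_path).name,  # 3. local Symbol Server cache
    ]
    for candidate in candidates:
        if candidate.exists():
            return candidate
    return server_fetch(Path(embedded_pdb_path).name)    # 4. ask the Symbol Server itself
```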

How the debugger searches for PDB files works just fine for nearly all the applications you'll develop. Where PDB file loading gets a little more interesting are those .NET applications that require you to put assemblies in the Global Assembly Cache (GAC). I'm specifically looking at you SharePoint and the cruelty you inflict on web parts, but there are others. For private builds on your local machine, life is easy because the debugger will find the PDB file in the build directory as I described above. The pain starts when you need to debug or test a private build on another machine.

On the other machine, what I've seen numerous developers do after using
GACUTIL to put the assembly into the GAC is open a command window and dig around in C:\WINDOWS\ASSEMBLY\ to find the physical location of the assembly on disk. While it is subject to change in the future, an assembly compiled for Any CPU ends up in a directory whose path is built from the assembly name (Example), the version number, and the public key token value (682bc775ff82796a). Once you've deduced the actual directory, you can copy the PDB file to that directory and the debugger will load it.

If you're feeling a little queasy right now about digging through the GAC like this, you should be; it is unsupported and fragile. There's a better way that almost no one seems to know about:
DEVPATH. The idea is that you set a couple of .NET settings and the runtime will treat a directory you specify as part of the GAC, so you just toss the assembly and its PDB file into that directory and debugging becomes far easier. Only set up DEVPATH on development machines, because files stored in the specified directory are not version checked as they are in the real GAC.

To use DEVPATH, first create a directory that has read access for all accounts and at least write access for your development account. This directory can be anywhere on the machine. The second step is to set a system-wide environment variable named DEVPATH whose value is the directory you created. The documentation on DEVPATH doesn't make this clear, but set the DEVPATH environment variable before you do the next step.
To tell the .NET runtime that you have DEVPATH set up requires you to add the following to your APP.CONFIG, WEB.CONFIG, or MACHINE.CONFIG as appropriate for your application:
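The configuration snippet itself did not survive in this copy of the article; based on the standard .NET runtime configuration schema, the element being referred to should be the developmentMode switch:

```xml
<configuration>
  <runtime>
    <!-- Tells the runtime to probe the DEVPATH directory before the real GAC. -->
    <developmentMode developerInstallation="true"/>
  </runtime>
</configuration>
```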

Once you turn on development mode, if your application dies at startup with a COMException whose error message is the completely non-intuitive "Invalid value for registry," it means either the DEVPATH environment variable is missing for the process or the path you set does not exist. Also, be extremely vigilant if you do want to use DEVPATH in MACHINE.CONFIG, because every process on the machine is affected. Causing all .NET applications to fail on a machine won't win you many friends around the office.

The final item every developer needs to know about PDB files is how the source file information is stored in a PDB file. For public builds that have had the source indexing tools run on them, what's stored is the version control command to get that source file into the source cache you set. For private builds, what's stored is the full path to the source files that the compiler used to make the binary. In other words, if you use a source file MYCODE.CPP in C:\FOO, what's embedded in the PDB file is C:\FOO\MYCODE.CPP.

Ideally, all public builds are automatically source indexed immediately and stored in your Symbol Server, so you don't even have to think any more about where the source code is. However, some teams don't do the source indexing across the PDB files until they have done smoke tests or other blessings to see if the build is good enough for others to use. That's a perfectly reasonable approach, but if you do have to debug the build before it's source indexed, you had better pull that source code to the exact same drive and directory structure the build machine used, or you may have some trouble debugging at the source code level. While both the Visual Studio debugger and WinDBG have options for setting the source search directories, I've found them hard to get right.

For smaller projects, it's no problem because there's always plenty of room for your source code. Where life is more difficult is on bigger projects. What are you going to do if you have 30 MB of source code and you have only 20 MB of disk space left on your C: drive? Wouldn't it be nice to have a way to control the path stored in the PDB file?
While we can't edit PDB files, there's an easy trick for controlling the paths put inside them:
SUBST.EXE. What SUBST does is associate a path with a drive letter. If you pull your source code down to C:\DEV and execute "SUBST R: C:\DEV", the R: drive will show at its top level the same files and directories you would see if you typed "DIR C:\DEV". You'll also see the R: drive in Explorer as a new drive. You can achieve the same drive-to-path effect by mapping a drive to a shared directory in Explorer.

What you'll do on the build machine is set a startup item that executes your particular SUBST command. When the build system account logs in, it will have the new drive letter available and that's where you'll do your builds. With complete control over the drive and root embedded in the PDB file, all you need to do to set up the source code on a test machine is to pull it down wherever you want and do a SUBST execution using the same drive letter the build machine used. Now there's no more thinking about source matching again in the debugger.

While not all of the information about PDB files discussed in this entry is new, I hope that having it in one place makes it easier to understand what's going on and to debug your applications faster. Debugging faster means shipping faster, and that's always high on the good-things scale.

This information is from a Wintellect article by John Robbins.

Thursday, July 9, 2009

Active Directory Federation Services (ADFS)

Federated identity is a standards-based technology; IBM, Sun, and VeriSign all have stakes in it. ADFS is simply Microsoft's solution for federation management.
ADFS is part of the R2 release of Windows Server 2003. You cannot purchase or download ADFS separately.

So exactly what is ADFS?
ADFS is a service (actually a series of web services) that provides a secure single-sign-on (SSO) experience which allows a user to access multiple web applications spread across different enterprises and networks.

ADFS fills a much needed gap in the following scenarios:
Extranet Applications
Many organizations host at least one application used by business partners or other outside users. So we stand up this application and its supporting infrastructure in our DMZ, right? In most cases this involves at least an IIS server, a SQL Server, and something to authenticate and authorize users. If we plan on having more than just a handful of users and need advanced user management, then chances are we're going to put an Active Directory forest in our DMZ. Ok, great. Now we just secure it all, create user accounts for our business partners, and we're off and running. But wait... All of a sudden our internal users need access to the extranet application as well. So now what? We could create and manage a second user account for each internal user needing access (not to mention reset users' passwords when they forget their DMZ account information). Our second option is to open up several ports and create a trust relationship between the two domains. This would give intranet AD users access to the extranet application, however opening the required ports decreases our security and makes our internal AD environment more vulnerable to attack. This is where ADFS comes in.

ADFS gives us the ability to set up what are known as federation servers in our internal network and DMZ. The federation servers then securely (via certificates and SSL) allow our internal AD users to acquire a "token" which in turn gives them access to the extranet application. The internal user never has to enter any credentials, just as if the application were sitting on the internal network, and all of this is done without exposing internal account information to the DMZ.

B2B Extranet Applications
Now let's take ADFS to the next level. Remember the business partner we want to provide access to our extranet application? Remember how we put AD in the DMZ so we can set up user accounts for our partner? What if our application supports different levels of security? And what if we want to give each user from the business partner unique access to our application? Well, it's all still fairly simple: we just set up and manage a user account for each of those users in our DMZ domain, right? But what if our business partner decides they want all 1,000 employees accessing our extranet app? Now we have an account management nightmare. This is where ADFS comes to the rescue again. With ADFS we can provide federated access to accounts from our partner's Active Directory domain. We set up a federation server in our DMZ and our business partner does the same (once again encrypting communication over SSL). We then grant application access to what we call "claims groups" which map to real groups within our partner's domain. Our partner then simply places their domain's already created and managed user accounts into their own group within AD, and suddenly those users are browsing our extranet application, with SSO I might add. Please note that credentials, SIDs, and all other AD account information are NEVER passed between federation servers (or organizations). Federation servers simply provide "tokens" to user accounts when they need to access the application on the other side.

Final Thoughts
ADFS is an exciting new technology that many vendors and companies are beginning to buy into. With that said, keep in mind that it is a "new" technology as well, so be sure to watch for future standard and protocol changes. You should also be aware that ADFS can be very complicated and confusing to set up the first time, however the process can be simplified.

Please do NOT set up a production ADFS deployment (especially in a B2B scenario) unless you have extensively tested it and are comfortable with the security of your configuration. After all, you are providing access to one of your applications, extranet or not. I would also suggest some sort of signed legal agreement between the two organizations in a B2B scenario. If you would like to see a followup post on how to set up ADFS, please leave comments or email me informing me of your interest. Here are some followup links to get you started:

Wednesday, June 17, 2009


Every new technology brings its own mechanisms to mitigate security threats. This post discusses how Silverlight deals with cross-site scripting.

What is Cross Site Scripting?
Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications which allow code injection by malicious web users into the web pages viewed by other users. Examples of such code include HTML code and client-side scripts. An exploited cross-site scripting vulnerability can be used by attackers to bypass access controls such as the same origin policy. Vulnerabilities of this kind have been exploited to craft powerful phishing attacks and browser exploits.
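As a minimal illustration of the bug class (shown in Python for brevity; the function here is invented for this post), the core defense is to encode untrusted input before it is written into a page:

```python
import html

def render_greeting(user_input: str) -> str:
    """Build a greeting HTML fragment from untrusted input."""
    # Interpolating raw input would let <script> tags through -- the classic XSS bug.
    # html.escape turns markup characters into harmless entities instead.
    return "<p>Hello, %s!</p>" % html.escape(user_input)
```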

To help avoid cross-site scripting (XSS) and related cross-domain attacks, the Silverlight runtime enforces restrictions in the framework APIs. Any cross-domain request requires that the server has explicitly granted Silverlight clients permission to access its resources. A cross-domain access means the Silverlight client is making network calls to a domain that is not the same as the domain from which the client itself was downloaded. The restrictions are the same as those Flash-based clients experience.
To allow Flash-based clients to access its resources, a server places a policy file named crossdomain.xml at the root of the domain and declares the access permissions in that file.
Silverlight uses the same logic to allow its APIs to access cross-domain resources. It supports the Flash policy file, and it also supports a file specific to Silverlight clients named clientaccesspolicy.xml. This is also an XML file with a published format, though different from the Flash format. The Silverlight runtime first tries to download clientaccesspolicy.xml and, if it is found, all access permissions are granted based on this file. If this file is not available, the runtime tries to download the Flash policy file. If neither is found, access is denied. These files are not downloaded for same-domain access.
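For reference, a permissive clientaccesspolicy.xml looks like the following. (This example grants every domain access to the whole site; a real deployment should list only the domains and paths it intends to expose.)

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```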

Sunday, June 7, 2009

Google Wave

What is Google Wave? It is a new communication service that Google unveiled at Google I/O this week. It is a product, platform and protocol for communication and collaboration designed for today’s world. Is that too much technical jargon? Let’s make it simple and take it in chewable, bite-size pieces…

It is like reinventing email, which was designed 40 years ago, i.e. many years before the internet, wikis, blogs, Twitter, forums, discussion boards, etc. existed. The world has evolved, but we are still hooked to the “store-and-forward” architecture of email systems, which mimics snail mail. In spite of the technological advances, we are living in a highly segmented world, with information living on islands: email, blogs, photo blogs, micro-blogs like Twitter, web collaboration, net meetings, IM and so on.

In Google Wave you create a wave (it can be an email or IM conversation, a document for collaboration, something to publish on a blog, or just a game to play) and add people to it. Everyone on your wave can use richly formatted text, photos, gadgets, and even feeds from other sources on the web. You can insert a reply or edit the wave directly. Google Wave is an HTML 5 app, built using the Google Web Toolkit. It includes a rich text editor and other desktop functions like drag-and-drop. It has concurrent rich-text editing, where you see on your screen instantly what your fellow collaborators are typing in your wave. This means Google Wave integrates email, IM and collaborative document creation into a single experience. The most important feature is that you can also use “playback” to rewind the wave and see how it evolved. My elder son was very excited to see that. He said, “If I am playing chess with my friends using Wave, I will be able to rewind and replay it to see every move. WoHoooooo..”

Google Wave can also be considered a platform with a rich set of open APIs that allow developers to embed waves in other web services, and to build new extensions that work inside waves. The Google Wave protocol is designed for open federation, such that anyone’s Wave services can interoperate with each other and with the Google Wave service. To encourage adoption of the protocol, Google intends to open source the code behind Google Wave.

Vic Gundotra of Microsoft fame is now leading this effort as VP of Engineering at Google. Lars and Jens Rasmussen (brothers), who came to Google with the acquisition of Where 2 Technologies in 2004, have been driving this effort at Google for more than 18 months. They also have a credible history and star reputation at Google as the creators of Google Maps.

The underlying assumption is that a large-scale disruptive innovation can dislodge the existing leaders and give others an opportunity to take leading positions. Hence this attempt to create an online world where people can seamlessly communicate and collaborate across various information exchange scenarios including email, IM, blogs, wikis and multiple languages (including translation). With this bold move, Google is trying to overcome the challenges of integration by hosting the conversation object on the server, allowing multiple channels of interaction and breaking many barriers in the process. The service seems to combine Gmail and Google Docs into an interesting free-form workspace that could be used to write documents collaboratively, plan events, play games or discuss recent news. Google has announced this as an open source project and is publishing all the standards at . The ripples of this Google Wave have the potential of impacting the technology world for decades to come.

Some helpful links:
Main Site:
Federation Protocol:
Web Toolkit:

Thursday, April 2, 2009

What is Microsoft Forefront?
Microsoft Forefront is relatively new and just beginning to get real traction in the network security market.

The first thing to understand is that there is no single “Forefront” product. Instead, Forefront is a collection of Microsoft security products, referred to as the “Forefront Security Suite.”

There are three collections of security products included in the Forefront Security Suite. These include:

  • Forefront Edge — Forefront Edge products include the Forefront Threat Management Gateway (the next version of ISA Server) and the Forefront Intelligent Application Gateway 2007 (IAG 2007). The next version of IAG will be part of the Forefront Security Suite and the product will be renamed Forefront Unified Access Gateway (UAG).
  • Forefront Server Security — There are three products that comprise the Forefront Server Security collection. These are Forefront Security for Exchange, Forefront Security for SharePoint and Forefront Security for Office Communications Server.
  • Forefront Client Security — There is one product in this collection — Microsoft Forefront Client Security (FCS).

In the future, there is likely to be another member of the Forefront family of security products, code-named “Stirling”. Stirling is a comprehensive configuration, management and reporting console that allows you to configure, manage and report on the activities of all the members of the Forefront family of security products. In addition, Stirling will allow you to create proactive response policies, so that information gathered from one member of the Forefront Security Suite can trigger a response by other members of the suite. Stirling will enable you to create incident response policies so that corrective actions take place immediately, instead of waiting for you to receive an alert and implement a response manually. The first version of Stirling will probably support only a subset of Forefront products, with the long-term goal being support for all members of the Forefront Security Suite.

Microsoft Forefront Family Products
Forefront family products include security servers that perform a wide range of security functions. Members of the Forefront family include:

  • Forefront Threat Management Gateway (TMG). TMG is the next version of ISA Server. In contrast to the .1 upgrade we saw from ISA 2004 to ISA 2006, TMG is a major rewrite and feature-enhanced version of the ISA firewall. Major investments have been made to improve anti-malware and anti-virus scanning for Internet downloads, and TMG will include site filtering based on category. There are many more features planned for the RTM release of TMG. In addition, TMG runs only on 64-bit Windows Server 2008, so you should expect to see major improvements in performance and stability that only a 64-bit platform can provide.
  • Forefront Intelligent Application Gateway 2007 (IAG 2007). The Forefront IAG 2007 is an SSL VPN gateway. IAG 2007 can be used to publish Web servers in traditional reverse Web Proxy fashion, or you can create customized portals that provide users one click access to applications hosted on the corporate network. IAG 2007 portals provide access to both Web and non-Web based applications. Non-Web based applications take advantage of IAG 2007 port and socket forwarding features, so that even complex protocols like Outlook/Exchange MAPI connections will work over an SSL connection. And for users who need full network layer access, IAG 2007 includes the “Network Connector” feature that enables users to establish a full network layer tunnel over an SSL connection. IAG 2007 also includes easy to configure and powerful endpoint detection and information wiping on client computers.
  • Forefront Server Security for Exchange (FSE), SharePoint (FSS) and Office Communications Server. These three products provide anti-virus and anti-malware protection for Exchange, SharePoint and OCS. They can scan e-mail or libraries for existing malware, and can be configured to prevent users from uploading malware. Up to 5 anti-virus engines can be used at the same time, with policies configured to use a user-defined mix of engines, depending on the level of confidence and performance you desire. In addition, these products allow you to configure content filtering rules, so that you can block specific file types or documents containing forbidden strings. Each product has comprehensive logging and reporting features, and they are all easy to configure, manage and update. At this time the OCS product is in beta testing and its full feature set is in flux, but we can expect it to provide anti-virus and anti-malware protection similar to the other products in the Forefront Server Security suite.
  • Forefront Client Security (FCS). Forefront Client Security is an enterprise-grade desktop and server anti-virus and anti-malware platform. It includes both client and server components. You can use Forefront Client Security to deploy the anti-malware agent to all machines, or selected machines, on the network using Group Policy or any other software distribution mechanism you like. Forefront Client Security scans client and server systems for viruses and malware, and also performs security state assessments that are reported to the Forefront Client Security console. It can scale from a single-server solution to one that includes separate servers for the 6 different Forefront Client Security server roles. Using the Forefront Client Security enterprise management console, it can be configured to support up to 100,000 users.
  • Forefront “Stirling”. Forefront “Stirling” is a single product that delivers unified security management and reporting with comprehensive, coordinated protection across an organization’s IT infrastructure. The Stirling console will allow you to configure, manage, and receive reporting information from all members of the Forefront Security Suite. In addition to unified management, you will be able to configure Stirling policies that enable the creation of proactive incident-response policies. Stirling will be able to gather security information from all Forefront products it manages and monitors, and then use that information to trigger incident-response policies that fire off automatically, without requiring administrator intervention. In addition to integrating Forefront products, Stirling will also leverage Windows Server 2008 Network Access Protection to isolate compromised machines from the network.

Microsoft Forefront is a collection of Microsoft security products aimed at protecting the network edge, key server applications including Exchange, SharePoint and OCS, and client and server systems with host-based anti-virus and anti-malware protection. At this time these products work separately, and configuration, management and reporting are handled through different consoles. In the future, with the release of Forefront Stirling, a single console will expose configuration, management and reporting functionality through a single interface.

Friday, March 27, 2009

Silverlight Web Part for Sharepoint

In this post, we are going to see how to integrate a web part with Silverlight content on a SharePoint site.

For that, we need to combine all the required JavaScript and XAML files (used to display the Silverlight content) into a single assembly, without any dependent files. It makes sense to embed the XAML and JavaScript files as resources and reference them in code using the WebResource.axd handler mechanism for extracting embedded resources.

1. Create a web part project and create or add the required JavaScript and XAML files to the project, for example Silverlight.js, Scene.js and Scene.xaml.

2. Set the Build Action property to “Embedded Resource” in the Properties window for each JavaScript and XAML file.
This includes the files as resources in the assembly.

3. Add the assembly-level attribute System.Web.UI.WebResource to grant permission for these resources to be served by WebResource.axd and to associate a MIME type with the response.

[assembly: WebResource("Arun.Silverlight.js", "text/javascript" )]
[assembly: WebResource("Arun.Scene.js", "text/javascript")]
[assembly: WebResource("Arun.Scene.xaml", "text/xml")]

Now the JavaScript and XAML files are compiled into the assembly as embedded resources.

4. Now we can use the RegisterClientScriptResource() method of the Page.ClientScript (ClientScriptManager) class to render the page with references to the embedded files.

this.Page.ClientScript.RegisterClientScriptResource(GetType(), "Arun.Silverlight.js");
this.Page.ClientScript.RegisterClientScriptResource(GetType(), "Arun.Scene.js");

Include the above lines in the OnPreRender method to register the JavaScript files for the web part.
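To put those calls in context, a minimal sketch of the override might look like this (the skeleton is my own; the resource names are carried over from the assembly-level attributes above):

```csharp
// Sketch only: registers the embedded scripts when the web part pre-renders.
// The resource names must match the assembly-level WebResource attributes.
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    this.Page.ClientScript.RegisterClientScriptResource(GetType(), "Arun.Silverlight.js");
    this.Page.ClientScript.RegisterClientScriptResource(GetType(), "Arun.Scene.js");
}
```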

5. Add the following lines to the RenderWebPart method to host the <div> tag and load the Silverlight content into the web part:

string strLoad = "Silverlight.createDelegate(scene, scene.handleLoad)";



The method Page.ClientScript.GetWebResourceUrl(GetType(), "Arun.Scene.xaml") is used to retrieve the URL of the XAML file from WebResource.axd.
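The markup-writing portion of step 5 appears to have been lost when the post was published. As a hedged sketch (the element IDs, control dimensions, and the exact Silverlight.createObjectEx parameters are assumptions based on the Silverlight 1.0 JavaScript API; only strLoad and the resource name come from the post), RenderWebPart might look something like:

```csharp
// Hypothetical sketch of RenderWebPart: emits a host <div> and a script
// block that creates the Silverlight control from the embedded XAML resource.
protected override void RenderWebPart(HtmlTextWriter output)
{
    string xamlUrl = Page.ClientScript.GetWebResourceUrl(GetType(), "Arun.Scene.xaml");
    string strLoad = "Silverlight.createDelegate(scene, scene.handleLoad)";

    output.Write("<div id=\"SilverlightControlHost\"></div>");
    output.Write("<script type=\"text/javascript\">");
    output.Write("Silverlight.createObjectEx({");
    output.Write("source: '" + xamlUrl + "',");
    output.Write("parentElement: document.getElementById('SilverlightControlHost'),");
    output.Write("id: 'SilverlightControl',");
    output.Write("properties: { width: '400', height: '300', version: '1.0' },");
    output.Write("events: { onLoad: " + strLoad + " }");
    output.Write("});</script>");
}
```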

Tuesday, March 24, 2009

Code Contracts

Code Contracts provide a language-agnostic way to express coding assumptions in .NET programs. The contracts take the form of preconditions, postconditions, and object invariants. Contracts act as checked documentation of your external and internal APIs, and are used to improve testing via runtime checking, to enable static contract verification, and to generate documentation.

Code Contracts bring the advantages of design-by-contract programming to all .NET programming languages.

The benefits of writing contracts are:

Improved testability
  • Each contract acts as an oracle, giving a test run a pass/fail indication.
  • Automatic testing tools, such as Pex, can take advantage of contracts to generate more meaningful unit tests by filtering out meaningless test arguments that don't satisfy the preconditions.

Static verification tools can take advantage of contracts to reduce false positives and produce more meaningful errors.

API documentation
Our API documentation often lacks useful information. The same contracts used for runtime testing and static verification can also be used to generate better API documentation, such as which parameters need to be non-null, etc.

Code Contracts consist of a set of static library methods for writing preconditions, postconditions, and object invariants, as well as two tools from Microsoft:

  • ccrewrite, for generating runtime checking from the contracts
  • cccheck, a static checker that verifies contracts at compile-time.

The plan from Microsoft is to add further tools for:

  • Automatic API documentation generation
  • IntelliSense integration

The use of a library has the advantage that all .NET languages can immediately take advantage of contracts. There is no need to write a special parser or compiler. Furthermore, the respective language compilers naturally check the contracts for well-formedness (type checking and name resolution) and produce a compiled form of the contracts as MSIL. Authoring contracts in Visual Studio allows programmers to take advantage of the standard IntelliSense provided by the language services. Previous approaches based on .NET attributes fall far short, as they neither provide an expressive enough medium nor take advantage of compile-time checks.

Contracts are expressed using static method calls at method entries. Tools take care to interpret these declarative contracts in the right places. These methods are found in the System.Diagnostics.Contracts namespace.
• Contract.Requires takes a boolean condition and expresses a precondition of the method. A precondition must be true on entry to the method. It is the caller's responsibility to make sure the precondition is met.
• Contract.Ensures takes a boolean condition and expresses a postcondition of the method. A postcondition must be true at all normal exit points of the method. It is the implementation's responsibility that the postcondition is met.
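As a minimal sketch of the two calls in use (the method itself is my own illustration, not taken from the Code Contracts documentation):

```csharp
using System.Diagnostics.Contracts;

public static class StringUtil
{
    // Returns s truncated to at most maxLength characters.
    public static string Truncate(string s, int maxLength)
    {
        Contract.Requires(s != null);          // precondition: caller's responsibility
        Contract.Requires(maxLength >= 0);     // precondition: caller's responsibility
        Contract.Ensures(Contract.Result<string>().Length <= maxLength); // postcondition
        return s.Length <= maxLength ? s : s.Substring(0, maxLength);
    }
}
```

Note that these calls do nothing by themselves; it is ccrewrite that injects the actual runtime checks into the compiled assembly.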

Watch this space for more developments in this area.


Thursday, March 5, 2009

Cloud Computing through the lens of SOA

Cloud computing is a style of computing that defines the way IT functions are going to be delivered or acquired in the future. This can essentially be attributed to the emergence of revolutionary technologies such as virtualization, service-oriented architecture and the Web.

I will attempt to explain the influence of these technologies on the formation of the “Cloud”. Let me do this by first highlighting some key attributes that characterize the “Cloud”, followed by a description of how these emerging technologies combine to address them.

ABC's of the “Cloud”:
Adapt: being scalable and elastic to meet fluctuating resource demands
Black-boxed: delivery of capabilities “as a service”; the focus is on the results, not on the components, and delivered service levels are critical
Commune: anyone, anywhere, anytime access
The amalgamation of virtualization, service-oriented architectures based on open standards, and the pervasive nature of the Internet has made IT services generally available at global scale. Now let’s look at how these technologies function to meet the ABCs of the cloud:

Virtualization technologies such as Hyper-V, VMware and Citrix have de-coupled software from the hardware, making it possible to run multiple software instances on a single piece of hardware. The technology allows IT administrators to seamlessly ramp computational capabilities such as processor, storage and RAM up or down in a matter of hours or even minutes. Virtualization has enabled efficient use of shared resources, and the de-coupling has increased the economies of scale of computing.

"Functionality delivered through a platform-independent contract" is the key design principle behind service-oriented applications. This lets service consumers consume information while remaining de-coupled from the technical implementation of the provider, focusing only on the results. Also, being self-contained, services can be designed and managed at a unit level, allowing for more granular control of service levels.

The global pervasiveness and the open standards of the Internet have made it the de facto mode of delivering IT services on the public cloud. One may argue that in the case of private clouds, say within an enterprise, the use of private networks may enable cloud-style environments to deliver capabilities without ever using Internet technologies. However, from a universal-accessibility standpoint, access over such channels is confined to within the enterprise boundaries, limiting the openness otherwise observed on the cloud.

Wednesday, February 18, 2009

Tuesday, February 10, 2009

BizTalk 2006 Code Samples

This news is a little old now, but there is a whole heap of BizTalk Server code samples that were released on MSDN back in June 2006.
Take a look at this list:

Publishing and Consuming Web Services with SOAP Headers: This sample demonstrates how to publish a BizTalk orchestration as a Web service with a SOAP header and how to consume the SOAP header from a Web service request message.

BAM and HAT Correlation: This sample demonstrates how to use the enhanced BAM features, and how to customize BAM and HAT integration. This sample also includes a Windows Forms application customizing BAM and HAT integration for the sample BizTalk solution.

Consuming Web Services with Array Parameters: This sample demonstrates how to consume Web services with array parameters.

Extending the BizTalk Server Administration Console: This sample demonstrates how to use the Microsoft Management Console (MMC) 2.0 Software Development Kit (SDK) to extend the functionality of the BizTalk Server Administration console with your own custom menu items, node items, new data items and views, or different views of existing data.

Viewing Failed Tracking Data: This sample uses Windows Forms to provide a simple interface to view and resubmit failed messages.

Inserting XML Nodes from Business Rules: This sample demonstrates how to insert nodes into an XML document and set their values from a business rule by using the XmlHelper class.

Using the Mass Copy Functoid: This sample demonstrates the use of the Mass Copy functoid to map a source hierarchy to a destination hierarchy without mapping each individual element by hand.

Using Role Links: This sample demonstrates how to use role links and parties.

Split File Pipeline: This sample uses the FILE adapter to accept an input file containing multiple lines of text into a receive location.

Using Enterprise Library 2.0 with BizTalk Server: This sample demonstrates how to use Enterprise Library 2.0 with BizTalk Server.

Consuming Web Services: This sample demonstrates how to consume Web services in a messaging-only scenario, and without using the Add Web Reference option.

Console Adapter: This sample consists of a C# console application that instantiates and hosts an instance of the receive adapter. The adapter is a Visual Studio 2005 class library that invokes the BizTalk Server 2006 APIs.

Delivery Notification: This sample demonstrates how acknowledgments work and how to use delivery notification.

Using Long-Running Transactions in Orchestrations: This sample demonstrates how to use long-running transactions in orchestrations.

Using the Looping Functoid: This sample transforms catalog data from one format to another by using the Looping functoid.

Mapping to a Repeating Structure: This sample demonstrates how to map multiple recurring records in an inbound message to their corresponding records in the outbound message in the BizTalk Mapper.

Parallel Convoy: This sample demonstrates how to design the parallel convoy pattern in BizTalk Orchestration Designer.

Policy Chaining: This sample demonstrates how to invoke a policy from another policy by calling the Execute method of the Policy class exposed directly by the Microsoft.RuleEngine assembly.

Recoverable Interchange Processing Using Pipelines: This sample demonstrates how to implement recoverable interchange processing.

Using the Table Looping Functoid: This sample demonstrates the use of the Table Looping functoid in gated and non-gated configurations.

Using the Value Mapping and Value Mapping (Flattening) Functoids: This sample demonstrates the use of the Value Mapping and Value Mapping (Flattening) functoids to transform data between different message formats.

Direct Binding to an Orchestration: This sample processes fictitious loan requests using orchestrations with ports that are directly bound to another orchestration.

Direct Binding to the MessageBox Database in Orchestrations: This sample processes fictitious loan requests using orchestrations with ports that are directly bound to the MessageBox database.

Using a Custom .NET Type for a Message in Orchestrations: This sample processes fictitious customer satisfaction survey responses from clients who spend time at different resort properties. Clients assign an overall satisfaction rating and can optionally enter a contact address and request a personal response. A request for a personal response generates a new message that is forwarded to a customer service application for tracking and follow-up.

Writing Orchestration Information as XML Using the ExplorerOM API: The sample performs two tasks. First, it writes configuration information for all orchestrations defined for a BizTalk server into a user-specified XML file. It then optionally transforms the XML data into a simple HTML report. This is accomplished through a console application.

Correlating Messages with Orchestration Instances: This sample receives a purchase order (PO) message from a fictitious customer and processes the purchase order message using correlation.

SSO as Configuration Store: This sample provides an implementation of a sample class and a walkthrough that demonstrates how to use the SSO administrative utility and the SSOApplicationConfig command-line tool.

Atomic Transactions with COM+ Serviced Components in Orchestrations: This sample demonstrates how atomic transactions work in orchestrations.

Exception Handling in Orchestrations: This sample demonstrates how to handle exceptions in an orchestration.

Implementing Scatter and Gather Pattern: This sample demonstrates how to implement the Scatter and Gather pattern using BizTalk Orchestration Designer.

Using the SQL Adapter with Atomic Transactions in Orchestrations: This sample shows how to use the SQL adapter with atomic transactions to keep databases consistent.

Wednesday, February 4, 2009

Custom Alerts in SharePoint 2007

In SharePoint 2007 we have a great feature called Alerts: basically, it sends an email when something in a list or library (or view) is changed. I’m sure I don’t need to tell anyone about them, but when it comes to actually applying them, it would be ideal to be able to customise the alerts for your own application.

So not only might you want to change the presentation of the email that you send as an alert, but you may also want to set certain custom conditions for when an alert is triggered.
The alert template XML file is located in the 12 Hive at C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML\alerttemplates.xml; if you open the file you will see all the different alerts for each type of list/library.

Either make a backup of the original file or create your own copy (we will register the alert file later) and rename it, e.g. CustomAlertTemplates.xml.
Copy the GenericList node, paste it below the other nodes, and rename it.

If you expand this node you will see the child nodes EventTypes, Format (Digest & Immediate), Properties, and Filters.

If we look at the Format node first, there are two types of formatting available, Digest and Immediate. Each contains a large amount of XSL/HTML that controls the output HTML of the alert email. The Digest node controls the daily/weekly summary alerts, and the Immediate node controls the alerts sent immediately (obviously!).
Change some of the HTML in the Immediate node so you can test whether the alert is using your template.

The next step is to register and test your new alert type. For this you use STSADM from the command prompt to register the new alert file for a particular Site Collection.

stsadm -o updatealerttemplates -url http://yoursite/sites/sitecollname -filename "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML\CustomAlertTemplates.xml"

Now to set a particular alert to use your new specific template, you can set the AlertTemplate for the list programmatically.

SPList spList = spWeb.Lists[listName];
SPAlertTemplate newTemplate = new SPAlertTemplate();
newTemplate.Name = "SPAlertTemplateType.MyCustomAlertType";
spList.AlertTemplate = newTemplate;
spList.Update();
Or you can create an individual alert programmatically…
SPAlert spAlert = spUser.Alerts.Add();
spAlert.Title = alertName;
spAlert.EventType = SPEventType.Modify;
spAlert.AlertFrequency = SPAlertFrequency.Immediate;
spAlert.AlertType = SPAlertType.List;
spAlert.List = spWeb.Lists[listName];
spAlert.Filter = QueryBuilder(spUser.Name);
SPAlertTemplate newTemplate = new SPAlertTemplate();
newTemplate.Name = "SPAlertTemplateType.MyCustomAlertType";
spAlert.AlertTemplate = newTemplate;
spAlert.Update();

Once this has been registered, recycle the web application's application pool, or reset IIS, then test the new alert. You should find the email alert now includes your new HTML.
So that’s how to change the HTML of an alert, in the next post I’ll create a new custom filtering option that will appear through the UI.

TIP: By default the timer job that runs the alert jobs runs every 5 minutes. And if you’re debugging that can be a painfully slow process, unless you enjoy heaps of coffee breaks! Anyway I decided I didn’t need that much coffee, so I changed the alert timer job to run every minute instead of every five.

SPJobDefinitionCollection spJobs = spWeb.Site.WebApplication.JobDefinitions;
foreach (SPJobDefinition job in spJobs)
{
    if (job.Id.ToString() == TaskGuid)
    {
        SPMinuteSchedule newSchedule = new SPMinuteSchedule();
        newSchedule.BeginSecond = 0;
        newSchedule.EndSecond = 59;
        newSchedule.Interval = minutes;
        job.Schedule = newSchedule;
        job.Update();
    }
}

Sunday, February 1, 2009

Web Content Management

Web Content Management, or WCM for short, is one of the more interesting topics in MOSS. This post aims to provide an overview of WCM: what WCM is, why it is special and, more importantly, how it is useful for you.

So what is WCM in simple terms?
WCM is a rich content authoring and management platform. It provides a set of controls and publishing features that allow site owners to host content-centric sites. It takes care of site branding, publishing, content authoring, workflows etc. WCM forms part of the Enterprise Content Management solution, which in turn forms part of MOSS 2007. It also leverages Office Word and InfoPath.

In short, it is a very scalable solution that separates content from presentation, relieving the burden on the IT department. To better understand the solution that WCM provides, we need to understand the problem first.

Managing a content-centric web site is by no means a simple task. In most organizations, it is the IT team that has access to add new pages, maintain the pages and keep the site running smoothly. The content contributor has the overhead of approaching the IT staff for each and every change. This translates to a longer process and a higher cost of operation. Often the content has to be edited a few times before it is correctly published.

This is where WCM comes into the picture. WCM provides a platform. It defines the site branding and sets the templates, look and feel, authoring rules, publishing rules, workflows, various levels of security etc.

The content contributor can now focus on his content alone and leave the development hassles aside. He simply submits his data. The platform will in turn validate the data, start the workflows and approval cycles, and finally publish the content without the support of IT staff. The final content will be published in accordance with the look and feel of the rest of the site.
WCM incorporates all the features of Microsoft Content Management Server 2002 (MCMS). Microsoft has discontinued CMS as a separate product; instead, it provides the enhanced version (WCM) along with MOSS 2007.

Some of the important features of the WCM are listed below.

  • Workflows
  • Search functionality
  • RSS facilities
  • Built in Caching mechanism
  • Supports multiple devices
  • Better Versioning mechanism
  • More events captured
  • Pluggable Authentication
  • Reusable Content
  • Web based management

Having said all this, does it really make sense?
A content-heavy web site will have frequent changes. New pages will be added by various contributors. Managing the new pages, recording the version history, validating data and format etc. is a mammoth task, and creating an application to handle the same would cost a fortune. WCM automates most of the processes and brings the focus to what matters the most: the content. This way, the contributor is able to focus more on the data and publish the content in a quick, efficient manner.
The process does not require support from the IT department, as the contributor can manage the content online himself, saving a lot of effort as well as money.
In short, WCM saves time and money. And that makes a lot of sense.

Sunday, January 18, 2009

Identify the Subtle Bug.....

A friend of mine pointed me to this.

This code has a subtle bug. What is it?
Hint: it has nothing to do with encryption.

using(RijndaelManaged enc=new RijndaelManaged(){Key=key,IV=iv,Mode=CipherMode.CBC })

So, to outline what happens and why this doesn’t do what is expected, let’s review what this code is shorthand for.

First: the using block is actually shorthand for a particular try … finally pattern. Roughly, this code:

using (SomeDisposableType item = new SomeDisposableType()){}

Is equivalent to:

SomeDisposableType item = new SomeDisposableType();
try
{
    // body of the using block
}
finally
{
    if (item != null) item.Dispose();
}

Depending on how IDisposable is implemented, there could be an implicit cast to the interface involved as well, so you’d see ((IDisposable)item).Dispose(); in the finally block instead. That detail doesn’t matter for the current discussion, however.

The new C# feature of object initializers is another form of syntactic sugar: really this:

SomeTypeWithSetters item = new SomeTypeWithSetters();
item.Prop1 = "SomeValue";

By writing it this way:

SomeTypeWithSetters item = new SomeTypeWithSetters() { Prop1 = "SomeValue" };

So when you put them together (as in the original example) you would expect the code would be equivalent to this:

RijndaelManaged enc = new RijndaelManaged();
try
{
    enc.Key = key;
    enc.IV = iv;
    enc.Mode = CipherMode.CBC;
    // body of the using block
}
finally
{
    if (enc != null) enc.Dispose();
}

This is NOT the case, however (and hence the bug)! Due to the nested-statement rules in the C# spec, the compiler instead evaluates the code as an initializer block followed by a completely separate using block, not a unioned language construct:

RijndaelManaged enc = new RijndaelManaged();
enc.Key = key;      // runs BEFORE the try/finally is entered
enc.IV = iv;
enc.Mode = CipherMode.CBC;
try
{
    // body of the using block
}
finally
{
    if (enc != null) enc.Dispose();
}

Note: technically the compiler will actually emit two variables pointing to the same object. For clarity I’ve skipped that as it’s frankly not important to the example.

So if something goes hinky in the initializer block, the Dispose() method is NEVER called by your using block, as execution has yet to enter it. The consequences range from inefficient use of critical or expensive resources to an outright resource leak; in varying degrees, all sorts of badness may happen to your application.

After that: Hilarity Ensues followed by an immediate Epic Fail.

While I agree that this should be handled by a C# language specification change regarding how the using construct works with nested statements, this is how it works today. While not a bug per se in the compiler, it should be considered a hole in the spec itself. Maybe we’ll see this as a change in C# 4.0?
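Until then, a simple workaround (a sketch reusing the variable names from the example) is to drop the object initializer and set the properties inside the using body, where they are covered by the generated try/finally:

```csharp
// If a setter throws here, Dispose() still runs, because we are
// already inside the try/finally that the using statement generates.
using (RijndaelManaged enc = new RijndaelManaged())
{
    enc.Key = key;
    enc.IV = iv;
    enc.Mode = CipherMode.CBC;
    // ... use enc ...
}
```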

Original Credits : Jimmy Zimmerman

Thursday, January 15, 2009

Cloud Computing and Economy

Cloud computing is a style of computing which packages computing resources such as processing power, storage and connectivity as a service, and delivers them to the consumer in a scale-free, cost-efficient and timely manner over the web. Applications get into production much quicker than with the traditional models by which applications are provisioned. This entails a shift in the way applications will be built, executed and managed in the future.

In an attempt to understand the financial implications of the new cloud-based model for deploying and running web applications, compared with the traditional client-server web application model, I will put it in the context of a hypothetical scenario that highlights the differences one would observe in the two cases.

A startup company that intends to have a web presence decides to build a self-service web application which shall receive orders from its end customers. From an architectural perspective, they decide to build a simple data-driven web application that is easily available over the Internet to their customers. Let us assume that the application is designed as a traditional two-tiered client-server architecture.

So what is it that is required to build an application which is available over the Internet?

An attempt to mark out some of the key asks is made in the list below, classified under various costing heads.

Capital Expenditure
1. Construct a physical brick-and-mortar facility to host the servers, including the cabling and UPS/generators to keep the servers always on
2. Procure server-grade hardware for the client and server setup. If you have availability requirements, then you would need at minimum two servers to bring in some redundancy, plus redundant components such as NICs, UPSs and switches
3. Software licenses required to build highly available web applications: Windows Server OSs, NLB, firewalls and security solutions such as ISA
4. Additional hardware and software costs for setting up an available DNS server to resolve client name-resolution requests
5. Provision a static IP from your ISP
6. Database software licenses
7. Operations and management software licenses such as MOM, plus backup facilities
8. Purchase a development system, assuming that you would want to have your development environment separate from the production site
9. At the minimum, a Windows XP license for each developer
10. Purchase the Visual studio licenses to develop the web application
11. Purchase the developer edition db license for the persistent storage

Operational Cost
1. Registering your DNS addresses with ICANN
2. Per-unit power charges for keeping the production systems always ‘ON’, including power consumed by the hardware and air-conditioning
3. Salaries to maintain and manage the infrastructure

Non-Operational Costs:
1. Carbon tax for companies running their own data centers

Opportunity Loss:
1. Sub-optimally utilized hardware
2. Longer time to market, mainly due to the time spent on procuring and provisioning the resources

Now compare this to an application which adopts a cloud-based architecture.

The costs which would be incurred include:

Capital Expenditure:
1. Purchase a development system, assuming that you would want to have your development environment separate from the production site
2. At the minimum Win XP license for developers
3. Purchase the Visual studio licenses to develop the web application

Operational Cost:
1. Per unit charge to use the cloud OS services which will execute the web application
2. Per unit charge to use the cloud db services

As can be seen, the business has been able to considerably reduce its capital expenditure on IT, resulting in tremendous savings. These savings allow firms to invest in their core business areas that lead to revenue generation. Moreover, in these times of economic recession, credit for businesses is not easily available; hence any savings that businesses can achieve will give them that much extra to run the business.

In addition to having direct financial implications in terms of cost, the cloud platform also helps enhance the time to market of software applications.

Time is Money
It’s an old cliché we all know and understand, but to what extent do we see IT able to support businesses in applying this principle? Businesses have lost out on opportunities simply because the systems which they have built over the past decade or so have now become inept or unresponsive in catering to the growing dynamics of the business. Their architectures do not allow them to adapt to dynamically changing requirements, or even to be elastic enough to cater to fluctuating user demand. Some factors affecting an application’s time to market:
1. Time is spent on procuring or provisioning hardware or software while deploying a new application.
2. Time is spent on procuring or provisioning additional hardware if existing applications have to handle any growth in business, such as during mergers/acquisitions, or seasonal or market fluctuations.
The evolution of the Web, SOA and virtualization technologies has now amalgamated to herald this new style of computing. The Cloud inherits the intrinsic traits of these three technologies, which allow enterprises adopting this new style of computing to build applications which are available everywhere, and to become agile and elastic to meet fluctuating user demand. It not only extends existing on-premise/hosted applications, but also gives opportunities to realize existing architectural patterns more easily, or even to discover new patterns by which applications get developed, provisioned and delivered. All this in a relatively shorter span of time as compared to the traditional approach of constructing and commissioning applications.