Sunday, September 25, 2011

Java API in .Net (JavaInterop)

In my previous blog I said something about using a Java library in .Net, and my own case demanded exactly that. Many times we find a decent, clean library to use, but it's a Java library; sometimes we find a Java library that is impossible to ignore. In that case we hope for a way to use the Java API in .Net. I roamed the net, tried what looked like a fit, and that fit happened to fit my case: IKVM.NET by Jeroen Frijters.

http://www.ikvm.net/index.html

IKVM.NET is an open source Java implementation for .Net. It comes with the OpenJDK class libraries compiled to .Net and various tools to interoperate between Java and .Net. The whole JVM is implemented in .Net, and it is very easy to convert Java jar files to .Net assemblies, and .Net assemblies to Java jars (interface and stub classes).

Taking on my case of converting Java jar files to .Net assemblies: IKVM ships a command line utility called ikvmc.exe. To generate a .Net assembly for a jar file we first specify the command line switch [-target:library] to produce a dll. Without this switch, ikvmc.exe will generate an exe for the jar file if it encounters a main method in a class. On the command prompt we change directory to the IKVM binaries folder and run:

C:\IKVM>ikvmc.exe -target:library hapi-base-1.2.jar

The jar file is in the same directory as the IKVM binaries, and this generates hapi-base-1.2.dll. ikvmc.exe will print many NoClassDefFound warnings, because we haven't specified the dependency dlls. IKVM comes with the OpenJDK implementation in the form of dlls, so we have to reference the OpenJDK dlls our jar file depends upon to resolve the warnings.

C:\IKVM>ikvmc.exe -target:library hapi-base-1.2.jar -reference:IKVM.OpenJDK.Core.dll -reference:IKVM.OpenJDK.Util.dll

In this example I have referenced only some of the OpenJDK dlls, but we have to reference every dll our jar file depends upon. Upon completion of the command, hapi-base-1.2.dll is generated. Now on to the next jar file (hapi-structures-v21-1.2.jar):

C:\IKVM>ikvmc.exe -target:library hapi-structures-v21-1.2.jar -reference:IKVM.OpenJDK.Core.dll -reference:IKVM.OpenJDK.Util.dll

As this new jar file has a dependency on the previous jar, we also have to reference the previous jar's dll for the porting to succeed:

C:\IKVM>ikvmc.exe -target:library hapi-structures-v21-1.2.jar -reference:hapi-base-1.2.dll -reference:IKVM.OpenJDK.Core.dll -reference:IKVM.OpenJDK.Util.dll

And so on: we can create dlls from jar files, resolving dependencies with [-reference:<dllname>]. These dlls are then ready to be used in your .Net project. We have to add a reference to at least IKVM.Runtime.dll, plus whichever OpenJDK dlls our converted jar dlls depend upon. Keep on porting.

Friday, September 23, 2011

Health Level 7 (HL7) Development - C#

Getting into the development of an HL7 system, there are not many free libraries on the net to start with. HL7, the standard way for the various components of a modern medical infrastructure to talk to each other, is a rather old format to work with (except for the new XML-based HL7 3.0). This adds to the complexity of HL7 development.

An HL7 message is flat text whose format is specified by the standard. The message is divided into segments that hold the medical record information of various types (e.g. patient details, insurance details, patient visit). These segments are separated by carriage returns.

MSH|^~\&|DDTEK LAB|ELAB-1|DDTEK OE|BLDG14|200502150930||ORU^R01^ORU_R01|CTRL-9876|P|2.4 <CR>
PID|||010-11-1111||Estherhaus^Eva^E^^^^L|Smith|19720520|F|||256 Sherwood Forest <CR>
OBR|1|948642^DDTEK OE|917363^DDTEK LAB|1554-5^GLUCOSE|||200502150730||||||||| <CR>

Segments are divided into fields separated by the pipe (|), and fields are further divided into components and subcomponents separated by the caret (^). There have been many versions of the HL7 standard, with several updates and new additions; the most important are 2.1, 2.3, 2.4, 2.5 and 2.6 (the 2.x line), of which 2.5 is the most widely used and stable version. Version 3.0 of HL7 is a move from the flat file base to a new XML-based format, but it is not yet that widely accepted in common use.
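Since Hapi (discussed below) is itself Java, a quick Java sketch makes the delimiter structure concrete: splitting a PID segment taken from the sample message above, first on the pipe, then on the caret. This is illustrative parsing only, not how a real HL7 library works; a real parser also honours the encoding characters declared in the MSH segment.

```java
public class Hl7SegmentDemo {
    public static void main(String[] args) {
        // a PID segment from the sample message above
        String segment = "PID|||010-11-1111||Estherhaus^Eva^E^^^^L|Smith|19720520|F";

        // fields are separated by the pipe; limit -1 keeps trailing empty fields
        String[] fields = segment.split("\\|", -1);
        System.out.println(fields[0]);            // segment name: PID
        System.out.println(fields[3]);            // patient id: 010-11-1111

        // components within a field are separated by the caret
        String[] nameComponents = fields[5].split("\\^", -1);
        System.out.println(nameComponents[0]);    // family name: Estherhaus
        System.out.println(nameComponents[1]);    // given name: Eva
    }
}
```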

Now coming to HL7 development from the viewpoint of a .Net developer, how many options do we really have? Gaining good knowledge of what to implement is easy, as there are many resources available to get familiar with the nitty-gritty of the HL7 standard, but having the tools to implement it is another matter. There is support from Microsoft in the form of the BizTalk Accelerator for HL7 ( http://www.microsoft.com/biztalk/en/us/accelerator-hl7.aspx ), but, as you guessed, it's not free. A second option is to write a new library to deal with HL7, but that is not a trivial task: the standard is divided into various versions, and industry implements all of them to some degree. Another option is to write a narrow, demand-driven API targeting a single module or feature of HL7, but that's not a very good solution either; what happens when that specific API has to expand to accommodate multiple versions of HL7? Why reinvent the wheel; there should be something on the web that targets HL7 development in .Net.

Two top-notch free solutions for HL7 development are Hapi and NHapi. Hapi is an extensive open source Java library that deals with almost all the versions and angles of HL7 development. NHapi is a .Net implementation of the Hapi project.

Hapi:-  http://hl7api.sourceforge.net/

NHapi:- http://nhapi.sourceforge.net/home.php

So we do have a .Net open source library for HL7 development. NHapi supports versions 2.1 to 2.5, which will suffice for most HL7-related development. The APIs are simple and easy to use. NHapi creates an object model for the HL7 message being parsed and can convert it into standard HL7 XML for easy use or for transformation via XSLT. Using the HL7 message object model we can do a variety of things related to HL7 parsing and creation.

The downside is that NHapi is not as extensive as Hapi (the Java library). HL7 message validation is missing, apart from some low-level checks (e.g. MSH segment validation, primitive validation). There is no HL7 communication module. It supports HL7 only up to version 2.5, whereas Hapi supports up to 2.6. Most important of all, there has been no activity on NHapi for some time, which makes future dependence on it a little scary (although it is open source and anybody is invited to contribute and extend it).

Hapi is at the top of the list when it comes to HL7 libraries; Hapi 1.2 was released in June and the forums are abuzz with discussions. Support for 2.6 is included, and future support for HL7 3.0 and more is anticipated. HL7 message validation is very mature, and there is good support for HL7 message communication.

So the question is how we .Net developers can leverage the Hapi project and use it in our .Net applications. The answer lies in using Java in .Net. Is it possible? Yes: we can convert the Java libraries into .Net libraries and use the full feature set of the Hapi project.

The Java interop tool developed by Jeroen Frijters (whose work contributed to bringing J# to its demise) is the best and easiest bet for using the Hapi Java API in .Net. I have done it and it works great. I will soon be writing about how to convert Java jar files to .Net assemblies.

Friday, July 15, 2011

Managed Extensibility Framework ( MEF 4.0) :-

Writing an extensible application is always a challenge, considering that it must accommodate future functionality that is not known at the time the application architecture is designed. An extensible application basically consists of a core framework that remains unchanged and works as the application's base engine, extending the application at runtime.

One very good example of an extensible application can be seen in the design of the SharpDevelop IDE. I have previously blogged about SharpDevelop and its design. The application consists of a base engine, SharpDevelop.Core, which is responsible for loading all the add-ins and extending the application at runtime.

SharpDevelop uses add-in based extensibility. Microsoft has always provided extensibility frameworks to design our applications upon. An add-in framework such as the add-in pipeline lets us create a host application and expose its object model to be extended by application add-ins. There is a well-defined architecture for this kind of development, consisting of Host, HostAdapter, AddInAdapter and AddIn. You can look at my previous blogs on add-in pipeline development for details of the architecture. One good example of an application that follows the add-in pipeline model is VSTA (Visual Studio Tools for Applications), which uses the add-in framework provided in System.AddIn and its sister namespaces. Details on VSTA can also be found in my previous blog entries.

The new child on the block in the extensibility arena from Microsoft is MEF (the Managed Extensibility Framework). It is projected as an extensibility framework and also has dependency injection (DI) capabilities like the Unity framework. While Unity is a DI framework, MEF works as an extensibility framework with DI capabilities alongside. For details of MEF programming you can check my previous blog entries.

I have had my share of experience developing applications with MEF. Although MEF is great where its DI capabilities and functionality are concerned, the lack of a standard publish/subscribe mechanism and of modularity features within the framework itself is somewhat concerning. We have to use the MEF extensions provided in the PRISM library to fully achieve these extensibility features. PRISM with MEF is great when we have to develop an extensible UI-based application: it provides a standard publish/subscribe mechanism that is loosely coupled within the application and uses weak references for events, and it provides modularity in the form of IModule. There are other great features in PRISM, but if these features were available directly in MEF it would greatly ease the development of applications that are not UI based.

Hopefully I will soon write in some detail about the MEF extensions within PRISM 4.0.

Friday, March 4, 2011

C# :- Imaging ( AForge.net )

If you are doing some image processing, there is an interesting and cool open source project called AForge.net. It contains many routines and filters for image processing. I started playing with glyph recognition; a glyph recognition engine is provided on top of the AForge framework by the GRATF (Glyph Recognition And Tracking Framework) project. GRATF mainly deals with the glyph recognition algorithm and glyph tracking, utilizing the AForge.Net image processing library.

To get started with glyph recognition, download the GRATF source code from its repository at Google Code:

http://code.google.com/p/gratf/downloads/list 

After downloading the source you can start exploring the code. The download includes an application called Glyph Recognition Studio. It lets you specify the particular glyphs to be recognized and also supports 2D augmentation, which places a chosen image over a recognized glyph. The glyph recognition algorithm is described at: http://www.aforgenet.com/articles/glyph_recognition/.

A glyph to be recognized is created on white paper: an outermost white border, then a black border, with the glyph pattern inside.

Glyph recognition makes use of various image filters from the AForge framework. It all starts with the bitmap to be processed, the managed bitmap captured from the source. The bitmap is converted into an unmanaged image using the UnmanagedImage class from the AForge.Imaging namespace. UnmanagedImage takes BitmapData as a constructor parameter and wraps the bitmap data as an unmanaged image. With an unmanaged image we do not have to lock the image bits before processing pixels, as we do with a managed bitmap, so the overhead of locking and unlocking the bitmap is avoided.

BitmapData bitmapData = image.LockBits( new Rectangle( 0, 0, image.Width, image.Height ),
    ImageLockMode.ReadOnly, image.PixelFormat );

UnmanagedImage unmanagedImg = new UnmanagedImage( bitmapData );

With our unmanaged image, the first step is to apply a grayscale filter. The grayscale filter processes the image pixels and keeps only their intensity, in a gray range from black to white (0 to 255 for an 8 bpp image) with varied gray in between. As we are only concerned with the intensity of the pixels, we discard the colour information by applying the Grayscale filter of the AForge framework.

UnmanagedImage grayImage = UnmanagedImage.Create( image.Width, image.Height, PixelFormat.Format8bppIndexed );
Grayscale.CommonAlgorithms.BT709.Apply( image, grayImage );
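Independent of AForge, the BT709 conversion above is just a weighted sum of the colour channels per pixel. A small illustrative sketch (written in Java; the weights are the standard BT.709 luma coefficients, which I assume match what the AForge filter uses):

```java
public class GrayscaleDemo {
    // standard BT.709 luma coefficients (assumed to match AForge's BT709 filter)
    static int toGray(int r, int g, int b) {
        return (int) Math.round(0.2125 * r + 0.7154 * g + 0.0721 * b);
    }

    public static void main(String[] args) {
        System.out.println(toGray(255, 255, 255)); // white stays 255
        System.out.println(toGray(0, 0, 0));       // black stays 0
        System.out.println(toGray(255, 0, 0));     // pure red maps to a dark gray: 54
    }
}
```

Note how green dominates the weighting: the eye is most sensitive to green, so a bright green pixel stays bright after grayscaling while an equally bright blue pixel becomes much darker.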

After we have reduced the image to gray intensity pixels, we apply the DifferenceEdgeDetector filter to the grayscale image to detect its edges.

DifferenceEdgeDetector edgeDetector = new DifferenceEdgeDetector();
UnmanagedImage edgesImage = edgeDetector.Apply( grayImage );

Then we apply the Threshold filter to convert the gray-range pixels into pure black or white, using the threshold value taken as a constructor parameter by the filter.

Threshold thresholdFilter = new Threshold( 40 );
thresholdFilter.ApplyInPlace( edgesImage );

We use the BlobCounter class to get all the detected blobs in the image above a particular height and width.

BlobCounter blobCounter = new BlobCounter();
blobCounter.MinHeight = 48;
blobCounter.MinWidth = 48;

blobCounter.ProcessImage( edgesImage );
Blob[] blobs = blobCounter.GetObjectsInformation();

We iterate over all the detected blobs and use SimpleShapeChecker to check whether each blob has a particular shape.

SimpleShapeChecker shapeChecker = new SimpleShapeChecker();

List<IntPoint> edgePoints = blobCounter.GetBlobsEdgePoints( blobs[i] );
List<IntPoint> corners = null;

if ( shapeChecker.IsQuadrilateral( edgePoints, out corners ) )
{
    // the blob is a quadrilateral candidate - process it further
}

After a blob passes the IsQuadrilateral check of SimpleShapeChecker, we calculate the average brightness of the pixels on both sides of the quadrilateral's edges. If the average brightness difference exceeds a predefined value, we process further to get the glyph value: we apply the QuadrilateralTransformation filter on the corners of the quadrilateral and get a rectangular image of the glyph.

Once we have the rectangular image of the glyph, we can iterate over all its pixels using the UnmanagedImage pixel byte pointer, calculate the intensity of the pixels for each cell of the image, and then check the result against the glyph database for recognition.

http://www.aforgenet.com/projects/gratf/

http://www.aforgenet.com/projects/gratf/code_samples.html

Wednesday, March 2, 2011

Silverlight Video Streaming and IIS Smooth streaming

Previously, I wrote a blog about WCF streaming (here). I have received many responses to that entry, and now I will further explore streaming with Silverlight streaming and IIS Smooth Streaming.

Silverlight streaming makes use of Microsoft Expression Encoder to package media content with a Silverlight media template. Microsoft Expression Encoder is an application that comes with the Expression suite from Microsoft. It lets you adjust the bitrate and resolution of your media content and package it with a Silverlight media template to play it in. Expression Encoder also lets you publish the media content to the Silverlight services and the Azure cloud.

We can download Expression Encoder from here: download encoder. Hosting media content with the Silverlight services has been discontinued while it was in its beta phase; now our option is to host media content on Azure.

These are some links to get started with Silverlight streaming and Azure media hosting using Microsoft Expression Encoder.

Silverlight Services hosting (Discontinued)

Hosting videos on Azure

Apart from hosting our media on the Azure cloud, we can host the media on our own web server and take advantage of the IIS Smooth Streaming feature. Here are some links to get you started with configuring and using IIS Smooth Streaming.

Getting started :- IIS Smooth streaming

IIS Smooth streaming sample application

Installing and configuring IIS Smooth streaming

Smooth Streaming fundamental

There are many possibilities for building a media content delivery system on top of these streaming choices. Happy streaming…..

Saturday, February 19, 2011

C# :- GetHashCode() : HashTable and Dictionary

In C#, when it comes to data structures, Hashtable and Dictionary give us the ability to search for an element based on its data rather than an index. Arrays are stored contiguously in memory, and accessing an item by index has constant asymptotic runtime O(1), because we only need to know the index of the value to be retrieved. Searching an unsorted array has linear runtime O(n), and a sorted array O(log n). Arrays are useful when we can access values directly by ordinal position, but when dealing with complex data we rarely use an index-based approach for accessing and searching items.

We are more concerned with accessing and searching items based on real information in the data set, which can be anything from enrolment numbers and credit card numbers to complex alphanumeric sequences. Here come Hashtable and Dictionary: these data structures allow us to use our complex data as the indexer. To do so they use a hash function to compress our complex indexer into a reasonable space with optimal asymptotic runtime.

The hash function simply compresses the indexer we are using, so that an optimal set of indexers can be derived from the original ones. If we used our original complex indexers directly, we would have to accommodate every possible indexer value, a very large set of slots maintained for a relatively small set of values. So the hash function is a mapping from the original set of indexers to a compressed, optimal set of indexers.
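As a sketch of that compression (illustrative Java, with a hypothetical 16-slot table): reducing a key modulo the table size maps a huge key space onto a handful of slots, which is also why collisions are inevitable.

```java
public class HashCompressionDemo {
    static final int SLOTS = 16;

    // compress an arbitrarily large key into the small index space [0, SLOTS)
    static int compress(long key) {
        return (int) (key % SLOTS);
    }

    public static void main(String[] args) {
        // two different large "enrolment numbers" that differ by 16...
        System.out.println(compress(20110219L)); // slot 11
        System.out.println(compress(20110235L)); // slot 11 again: a collision
    }
}
```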

But the hash values generated by a hash function are not unique, so multiple original indexer values can end up with the same compressed, hashed indexer value. This is known as a hash collision. Collision avoidance primarily depends on the hash function's algorithm, and there are different collision resolution strategies, among them linear probing and quadratic probing.

In linear probing, we compute the hash and place the value at the hashed indexer; if we find a value already allocated to that indexer (the same hashed indexer could have been generated previously by the hash function), we simply move on to the next indexer (i + 1, …, i + n) until we find an empty slot. In quadratic probing we use the same method, but when an occupied indexer is encountered we move on to (i + 1²); if it is also occupied we look at (i - 1²), and we keep looking in the pattern (i + 2²), (i - 2²), …, (i + n²), (i - n²) until we find an empty slot.
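The linear probing walk can be sketched in a few lines (illustrative Java; a real implementation also handles resizing, deletion and a full table):

```java
public class LinearProbingDemo {
    static final int SLOTS = 8;
    static final Long[] table = new Long[SLOTS];

    // insert a key, probing i, i+1, i+2, ... until an empty slot is found
    static int insert(long key) {
        int i = (int) (key % SLOTS);
        while (table[i] != null) {
            i = (i + 1) % SLOTS;   // linear probe: try the next slot
        }
        table[i] = key;
        return i;
    }

    public static void main(String[] args) {
        System.out.println(insert(10)); // 10 % 8 = 2 -> slot 2
        System.out.println(insert(18)); // 18 % 8 = 2, slot 2 taken -> slot 3
        System.out.println(insert(26)); // 26 % 8 = 2, slots 2,3 taken -> slot 4
    }
}
```

Notice how colliding keys cluster together; this clustering is exactly what quadratic probing's spread-out pattern is designed to reduce.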

Hashtable and Dictionary use different collision resolution techniques, called rehashing and chaining. In .Net, every type inherits the GetHashCode() method from the Object class, which is responsible for producing an integer hash code for a value, whether it is a string or any other type. Note that hash codes are not guaranteed to be unique; they are only designed to distribute values well. This hash code is used as the indexer to allocate a slot for incoming values.

Hashtable also exposes a LoadFactor property, the ratio of the total number of items in the hashtable to the total number of available slots. The optimal load factor, as stated by .Net, is 0.72. So whenever we push a new item into the hashtable, .Net checks that the load factor is not exceeded; slot positions are calculated based on the table size, so when an insertion would exceed it, the hashtable is rehashed to maintain the load factor.

Dictionary is strongly typed, as it uses generics, in comparison to Hashtable, and it uses a different collision resolution technique known as chaining: a separate data structure is used to hold the conflicting mappings.

As we saw, linear and quadratic probing follow a pattern to probe for the next available slot, but in chaining a data structure (a linked list) is maintained at the conflicting hashed indexer to hold all the values that land there; the same hashed indexer maps to different values held in that structure. Because the load factor keeps the number of items per slot bounded, accessing and searching a Hashtable or Dictionary does not depend on the total number of items: the asymptotic runtime is O(1), compared with O(n) for an unsorted array, which is fast.
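Chaining can be sketched just as briefly (illustrative Java): each slot holds a list, colliding keys simply share that slot's list, and a lookup only has to scan one short chain.

```java
import java.util.ArrayList;
import java.util.List;

public class ChainingDemo {
    static final int SLOTS = 8;
    static final List<List<Long>> buckets = new ArrayList<>();
    static {
        for (int i = 0; i < SLOTS; i++) buckets.add(new ArrayList<>());
    }

    // colliding keys are appended to the same slot's list instead of probing elsewhere
    static void insert(long key) {
        buckets.get((int) (key % SLOTS)).add(key);
    }

    static boolean contains(long key) {
        return buckets.get((int) (key % SLOTS)).contains(key);
    }

    public static void main(String[] args) {
        insert(10);
        insert(18);                                 // 18 % 8 == 10 % 8: same bucket
        System.out.println(buckets.get(2).size());  // both keys chained in slot 2
        System.out.println(contains(18));           // lookup scans only that chain
    }
}
```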

http://msdn.microsoft.com/en-in/vcsharp/default.aspx?pull=/library/en-us/dv_vstechart/html/datastructures_guide.asp

http://msdn.microsoft.com/en-us/library/ms379571(VS.80).aspx

Monday, January 31, 2011

Microsoft Help Viewer 1.0 or MS Help 3.0 :- VS 2010

Along with Visual Studio 2010, we have a new help system from Microsoft. Previously there was the Document Explorer based help system known as MS Help 2.x; now we have a new browser-based help system known as Microsoft Help Viewer 1.0, or MS Help 3.x. Starting with VS 2010 we get our help documentation in the browser instead of the Document Explorer window. Integrating our own library documentation with MS Help 3.0 has also changed from the MS Help 2.0 process, where we had to register our collection with the help system.

Targeting MS Help 3.0 requires generating help files in the Help 3.0 format (i.e. .msha, .mshi, .mshc). To generate help documentation from source code comments there is an excellent tool called Sandcastle Help File Builder. Sandcastle supports various help documentation formats, MS Help 2.0 and MS Help 3.0 among them, and is available at CodePlex: http://shfb.codeplex.com/ . We can feed a project or solution file to Sandcastle to generate help documentation from the source code comments in the MS Help 2.0 or 3.0 format.

After our help files are generated, we have .mshc (help container), .mshi (index) and .msha (product manifest) files. If you already have help documentation in the MS Help 2.0 format, you can use the mshcmigrate tool to convert the MS Help 2.0 files to the new MS Help 3.0 files instead of using Sandcastle. The mshcmigrate tool can be found at: http://mshcmigrate.helpmvp.com/home#TOC-Download

Now that we have our help files in place, the only thing left is to integrate our help documentation with MS Help 3.0. The new Microsoft Help System, or MS Help Viewer 1.0, is not a standalone product; it is installed only with VS 2010 or the Windows 7 SDK. The new help system comprises two services, the Help Library Manager and the Help Library Agent. Help Library Manager is the application we use to install our help documentation.

To launch the help library manager application we go to All Programs –> Microsoft Visual Studio 2010 –> Visual Studio Tools-> Help Library Settings.

This will launch the Help Library Manager, which will ask for the location of the local help content store if one is not already configured. The next step is to navigate to our .msha file through the Help Library Manager and select it for installation. The Help Library Manager gives us options for installing, removing and updating help content.

In the new help system, the Help Library Agent is the service responsible for opening help document URLs in the browser. As the new help is browser based, there is a new URL protocol, ms-help://, for opening Help 3.0 document pages, and it is associated with the Help Library Agent. So, having used the Help Library Manager to install our library help into the local help store, we can access our help documentation through the Visual Studio help menu, which shows all the installed help content, or we can use the H3Viewer utility to browse it. H3Viewer can be found at: http://mshcmigrate.helpmvp.com/viewer

If we are installing our custom Help 3.0 files through an installer, we can invoke the Help Library Manager executable with command line parameters. To install the help files silently with the Help Library Manager, without showing the installation options dialog, we have to package our help content into a cab file and digitally sign it. We can use the makecab tool to create the cab file and the signtool utility, with a digital certificate, to sign it.

http://helpware.net/mshelp3/intro.htm

http://visualstudiogallery.msdn.microsoft.com/6a1211d5-1be9-4768-92a8-aa8af2c8cba4/