So we had an application that parsed the OOXML in a Word doc, located all bindings and content controls, and grabbed their metadata (found in the WebExtensions/Bindings elements). We had a problem with the IDs that should tie these two sections together matching inconsistently. In some docs they all matched, in some docs none would, and in most docs only some would. The ones that would not match were typically negative numbers.

So after Binging the problem and beating ourselves to death over it, we found the post at http://stackoverflow.com/questions/2693542/open-xml-document-contentcontrols-problem-with-signed-ids. This one saved our lives.

The problem is that in the WebExtensions/Binding element, the ID is stored as a 64-bit integer. In the sdt tag binding, it is the 32-bit representation of that same ID. We followed the code samples at the link above and we now have a 100% match on all IDs.
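To make the matching concrete, here is a minimal sketch of that normalization (the helper name is hypothetical, and this is not the exact code from the StackOverflow post): parse the wide ID from the WebExtensions binding and cast it down to a signed 32-bit value before comparing it to the sdt ID.

using System.Globalization;

static class BindingIdHelper
{
    // The WebExtensions/Binding ID is stored as a 64-bit value, while the
    // w:sdt element carries the signed 32-bit representation of the same ID.
    // Truncating the wide value to Int32 lets the two sides be compared directly,
    // including the IDs that show up as negative numbers on the sdt side.
    public static int Normalize(string rawId)
    {
        long wide = long.Parse(rawId, CultureInfo.InvariantCulture);
        return unchecked((int)wide);
    }
}

With something like that in place, Normalize(webExtensionId) can be compared directly against the parsed sdt ID.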

I felt compelled to write this series after I downloaded the sample Office apps from MSDN (all of them) and descended into the 7th level of hell trying to get them to work. The goal was to implement a Word task pane app that allowed insertion of bindings, pushed bindings/tags to a SPS 2013 list, and accessed the SPS taxonomy to allow tagging of bindings within a Word document. Seems simple enough. I had a pile of samples and a mess of MSDN and TechNet links; I was ready to go, or so I thought.

There is actually a good deal of documentation on Apps for Office, some good MSDN samples, and some great videos. You can certainly go through these and get a pretty good base level of knowledge on Apps for Office. When you take it to a complex solution, though, you quickly find some serious gaps, which hopefully will be plugged and render this post pointless. This is especially true when your intention is to self-host your apps and use a local app catalog for the manifest. This series is about the gaps I encountered, in the hope that it helps someone else out.

As I state throughout this series, I am really in the midst of ramping up in this area, and even over the course of the project the documentation has been changing. So please chime in if you have any additional info to share, or questions.

Clarification on terminology

So before I go too far, I want to clarify a couple of things about the terms I use. There are two parts to a deployment. First, you are pushing your manifest to a catalog. Second, you are deploying the actual app functionality somewhere. When I refer to deployment in this series, I am referring to the deployment of the functionality, not the manifest. I will specifically call out the manifest deployment when that is what I am referring to.

Hope you like client side code

If not, you are probably going to have a bad day. Office apps first and foremost are client side focused. They use JavaScript and jQuery, pulling in Office.js, jQuery, and the Microsoft Ajax libraries just to start with. If you do not understand these technologies and the concepts around coding them, you need to. There are some very subtle (and not so subtle) paradigm shifts you need to be in the know on. A good example: cross-site calls. On the server side, your app can call services and references all across your enterprise and possibly even the web. On the client, that kind of cross-site call is blocked by the browser in most cases, precisely because of the cross-site scripting abuse it would otherwise enable. This is for good reason. Imagine JavaScript you would write on your website just to make a simple call to a SPS farm's web service API to get some info for the current user. That would be a client side call, made from the browser, using that individual user's security context, from that user's box. It would not be a stretch to write code that goes all over the internal sites, and maybe some real bad external ones, pulling down inappropriate information, modifying documents in SPS, etc., and it would all look like the end user did it.

Additionally, client side code is blocked from doing most modifications to the local machine. It runs in its own special little area and is tightly controlled in what it can do, to protect people from malicious code. This plays a large role in the architecture you decide to use for your application and how you code it.

Not just client side code

So the good news for server side coders: as I mentioned, the Office app is a web site. That means you can code it as an ASP.NET site and enjoy all the wonders of server side code as well. The solution we wrote did in fact use a good deal of C# code to make some calls and save some information. Also, for the types of calls we needed to make, a server side web service call to SharePoint was needed. So you do not need to have server code, but you can have it if there is a need. Another good reason may be to run confidential business logic; that way, you are not exposing it through client side code.

Understand the model

There are some key things to understand. An Office task pane app runs within a task pane in the Word client. It is essentially a little browser window of its own. It is given SOME access to the document through the Office.js library it pulls in. That is a key concept: the app accesses the DOCUMENT, not the Word client. Bearing in mind that the Word client is running on the OS under the security context of the end user, there is a serious potential security risk in allowing a Word task pane app to manipulate the Word client. If you open the door for that, you run the risk of allowing a task pane app to make malicious calls to the local OS and file system. For this reason, a lot of things you would like to do that involve changing Word state are going to be blocked. Get to know the JavaScript API for Office task pane apps to better understand it. As you can see, the calls focus on the document, not the Word client application.

In the sample I was building, another key was that your app catalog is in SharePoint, but your app is not. Meaning the SharePoint SP.js library and its supporting libraries are not there for you to use. If you want to work with SPS, you will need to make some web service calls. As I noted before, these likely will run into the cross-site restrictions, so now you need to work around that. For our solution, we knew we would need to make a series of calls to the asmx and svc APIs in SPS 2013, across multiple web applications. For some of these calls we could use a JSON call to get around the cross-site issue, and there are plenty of good examples of that in the MSDN coding samples, but not for all of them. What we decided to do was follow the example in the MSDN sample code and build a controller class which could make the server side calls for us. In this model, the Office app JavaScript makes a service call back to itself, and the server side then calls out to the SharePoint farm web services and returns data back to the client. It allowed a good deal of flexibility for our service calls and opened the door for hitting SPS to modify anything we wanted to and to retrieve data. This ended up being pretty easy.
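As a rough illustration of that "call back to yourself" pattern, here is a minimal sketch, assuming an ASP.NET web service hosted in the same site as the task pane pages. The class name, method name, list name, and the use of the SPS 2013 REST endpoint are all illustrative assumptions, not the actual solution code.

using System.Net;
using System.Web.Services;
using System.Web.Script.Services;

[WebService(Namespace = "http://tempuri.org/")]
[ScriptService] // lets the task pane JavaScript call this method with a simple JSON POST to its own host
public class SharePointProxy : WebService
{
    [WebMethod]
    [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
    public string GetListItems(string webUrl, string listTitle)
    {
        // The call out to the SPS 2013 farm happens here on the server,
        // so the browser never makes a blocked cross-domain request.
        // This sketch uses the app pool identity; adjust if the call must run as the end user.
        using (var client = new WebClient { UseDefaultCredentials = true })
        {
            client.Headers[HttpRequestHeader.Accept] = "application/json;odata=verbose";
            string url = webUrl.TrimEnd('/') + "/_api/web/lists/getbytitle('" + listTitle + "')/items";
            return client.DownloadString(url);
        }
    }
}

The task pane script simply calls this service on its own host and hands the returned JSON to whatever needs it.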

There is a way around the missing SP.js and SharePoint context. I found a solution in MSDN that had a task pane app embedded in a SharePoint app. This gave me the ability to simplify the deployment, get access to SP.js and all its references, capture SPS context info on the client side, and do some real easy coding. However, it does deploy to an app tenant in SPS 2013, and there are implications to that. Additionally, when you do this, it is only going to allow client side code. This may not matter, but bear it in mind.

Plan your deployment

While coming up with the design, know how you intend to deploy this app. Will it be hosted in SPS? Will it be part of an SPS app? Will it be on an IIS server? Azure? This decision can have a very big impact on what is possible. As an example, if I am deploying to a SharePoint app, or to a SharePoint site, I am typically deploying client side code only: no compiled DLLs, but my code will have access to SP context on the client and SP.js. If I am going to an IIS server, I will be able to code whatever I want on the backend/server side, but I will not have direct access to SP context or SP.js (assuming I care about that for this app).

For each deployment model I will also need to consider the security and user context unique to that situation, as well as infrastructure considerations, firewall rules, etc.

Watch your JavaScript and CSS References

One extremely common newbie mistake (and of course one I made repeatedly) is not being mindful of your JavaScript and CSS references. In the Home.html file generated by VS when creating an Office app, the references point to the online versions (for jQuery and Office.js) and to local files for the CSS and some other JavaScript used by the app. There are commented-out lines to use a local version of these files. When running on a development virtual that may or may not have access to the web, your first time through you will undoubtedly not realize this, and your app will fail when trying to load these files. When using the local files and relative links, you will probably hit a point in your testing where they cannot be found because of where your app is running. This is an extremely common mistake, so when you are moving an Office app through development and testing and code stops working despite not being changed, this is the first place I would look.

Testing can be painful

So while in VS 2012 or 2013, I can run this in debug mode, it all pops up in a Word doc, and everything is great. However, take away your VS debug ability and you are in trouble. Office task pane apps do not allow the "alert" function or many of the other tricks you would commonly use to pop up state information while walking JavaScript to see what is wrong. But hey, why would I care? I've got VS debugging. A couple of reasons. First, VS debugging is pretty much the perfect scenario: VS.NET is tied directly to the local IIS, and it kicks off Word and inserts the app for you. In the real world, when you go to test and deploy the app, all those tools are not there. Your Office app is running elsewhere, your manifest is in an app catalog on yet another server, and Word is running on yet another client machine. So now you have three separate servers running code, and the example I was building called out to yet another SPS farm.

Your Office app is running in a task pane and is secured by IE security settings, plus ones specific to Office apps in Word, plus any additional GPOs in your org. You need to ensure open and secure communication between Word, the Office app, and any other systems you need to hit, and you need to consider the identity each communication channel is using. A lot of those failures are silent in your Word application. For example, a security setting/GPO that blocks the JavaScript from loading and/or executing when calling a remote system can create a silent failure that is tough to debug. An authentication issue may be silent. Tools like Fiddler can help some, but very often you need to be methodical and not assume anything. IMO, the tools have not really caught up to this technology to make the testing as easy as it should be. To be honest, I am still developing my methodology for adequately testing and debugging these systems.

References:

  1. Apps for Office Training videos – http://msdn.microsoft.com/en-US/office/dn448488
  2. Apps for Office Samples – http://code.msdn.microsoft.com/office/Apps-for-Office-code-d04762b7
  3. Apps for Office Task Pane App JavaScript API – http://msdn.microsoft.com/library/office/fp123523(v=office.15)#FundamentalsTaskContentApp_JavaScriptSupport
  4. How to: Create an app for SharePoint that contains a document template and a task pane app – http://msdn.microsoft.com/library/office/fp179815.aspx
  5. Fiddler – http://fiddler2.com/

Let me say it: I love PowerShell. For building out predictable, repeatable configuration, installation, and maintenance processes, nothing beats it. So like many others, I have built up a large PowerShell script library. I have scripts for full farm config, site provisioning, site collection configuration, and many, many more.

It is sort of scary I never came across this before, but while working with a client, we put together a script that provisioned some site collections, turned on some features, installed/activated some custom features, set a custom page layout and content type to be the default on the Pages library, and removed all other content types and page layouts from that library. This script failed halfway through the process, while activating a custom feature that had been installed in the previous step. It returned the error: “the feature is not a farm level features and is not found in a site level defined by the URL…”. This was confusing, as in central admin I could see the feature was there, was installed, and was at the correct scope. After Binging this for a while, I found very little useful information, as it seemed most of these issues were related to incorrectly entering the name of the feature. We had the GUID in there and it definitely matched. Also, we could re-run the offending line of script in the PowerShell window and it would run fine the second time around.

So after a little tinkering we found that by chunking up the PowerShell script and running it in pieces, it all ran to completion without error. So we found out how to get the script to work, but not really why it erred out to begin with. I run many large scripts without issue. The key was in the install/activate code: adding a WSP to a farm and deploying it is an asynchronous process. A really fast one for many solutions, but it runs asynchronously nonetheless. The script we had ran so fast that it would attempt to activate the feature before the solution was fully deployed, so we would get the error message that the feature could not be found.

So lesson learned: PowerShell is fast and effective, sometimes too much so. When you run a large script and encounter strange errors that appear to make no sense (at the time), try breaking it up or executing it in chunks. It also helps to analyze your code to be sure you fully appreciate which parts are running asynchronously and which are not. This may help you avoid this frustrating error.

A friend of mine recently tackled the pain of getting Remote Desktop to work with Windows 8 Hyper-V. As usual, it was a simple setup for an issue that has plagued many of us.

You can find the post here: http://lelandholmquest.wordpress.com/2013/04/02/remote-desktop-connection-to-hyper-v-virtual-machine/

Thanks Leland!

So like many SharePoint guys using Windows 8, I LOVE having Hyper-V. It has been a long while since we had a Microsoft 64-bit virtualization technology, built-in or otherwise, for our non-server OSes. In that time I have used Oracle's virtualization app (VirtualBox) and of course VMware Player.

So flash forward to Windows 8 Pro: I got my Hyper-V working, and I have a huge library of VMware virtuals. So surely there is something built into Hyper-V on Windows 8 to convert these, right? Well, that is where you would be wrong. A quick Bing for this information pulled up a pile of free/shareware that either gave me a 404 or, when found, would attempt to install spyware or some other garbage and then of course would not even work. Frustration did not begin to describe the feeling and the dark vocabulary that I spewed at such a gap in capability.

In my preparation for having to uninstall Hyper-V from Win 8 and go back to VMware, I came across a tool I had not heard of called the “Microsoft Virtual Machine Converter” (located here: http://technet.microsoft.com/en-us/library/hh967435.aspx). Dare I say, this little nugget was the answer to my prayers. Simply install it, and you are good to go.

Now, in my case I just wanted to convert the VMDK to a VHD, so there was no reason to launch the UI at all. Simply open a command prompt (the .exe is installed at “C:\Program Files (x86)\Microsoft Virtual Machine Converter Solution Accelerator” by default) and run the command “MVDC.EXE <source VMDK> <target VHD>” and whammo. Good to go!

There is a lot of capability in this little tool beyond my simple case but as I have been hit by many expletive laden comments on just simply wanting to convert a VMDK to VHD, I thought I would share with the internets. Good luck everyone!

Lookup field issues in SharePoint 2010

SPS version: SharePoint 2010 Enterprise – CU Dec 2011   

Lookup fields are great, but they have some serious limitations. Through the UI you can set them to link a field in one list to another list on the same web. This lets you, say, drive the values shown in a dropdown, similar to how you would with term sets in the Managed Metadata service. The plus with lookup fields is that you can assign a non-techie to administer that single list's values. They are also good if your lookup values should not be available globally to every web app in a proxy group.

Now, when you want your lookup field to reference a list on a different web than the current one, you will need to turn to code. This is something that is readily done using a custom content type and the list schema Field element (http://msdn.microsoft.com/en-us/library/aa979575.aspx).

<Field Type="Lookup" DisplayName="Something" Required="TRUE" ShowField="Lookupfieldname" UnlimitedLengthInDocumentLibrary="FALSE" Group="Real cool code group" ID="{fe685707-c198-45db-baf3-d8cd92a9f4f6}" SourceID="http://schemas.microsoft.com/sharepoint/v3" StaticName="LookupFieldStaticName" Name="LookupFieldName" Customization="" ColName="int1" RowOrdinal="0" />

The idea being you can set this on a base content type, and from that point forward the inheritance chain takes care of it.

In my scenario, I had a set of 3 lists on a root site collection that served as the lookup sources for a set of 4 libraries that appeared on dozens of subwebs, each library with half a dozen custom content types assigned to it. The libraries were all created by a custom feature that was activated when the subwebs were created. They were generated using a custom list definition (Schema.xml) and a list instance built into the feature.

In MOSS 2007, this worked like a charm on a number of deployments I did. In 2010, I followed the same methodology and found my lookups were all broken. It took some troubleshooting, but what I found was that the content type inheritance was broken. I had a base content type, another set that inherited from that type, and a third level that was used at the lists (Company base CT → Project base CT → SpecificList CT). The lookup settings were kept all the way through these content types. When you create the list and assign the content type, a new content type is created that is a copy of the "SpecificList CT". I found that when SPS 2010 created that copy, it discarded the lookup field list settings. No matter how I set it up, this continued to happen.

A bit of a pain, and from what I could tell a flaw in SPS 2010. I was finally able to overcome this by removing the lookup from the list Schema.xml file and, in the event receiver for the FeatureActivated event, going in and adding the lookup fields using "SPList.Fields.AddLookup(LISTCOLNAME, SourceLookupListID, false)" (http://msdn.microsoft.com/en-us/library/ms436747.aspx). A rough sketch of such a receiver is shown below.
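Here is a minimal sketch of that FeatureActivated fix, assuming a web-scoped feature and hypothetical library and lookup list names (not the actual solution code). It uses the AddLookup overload that takes the source web's ID, since in my scenario the lookup lists live on the root web.

using System;
using Microsoft.SharePoint;

public class LookupFieldFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = (SPWeb)properties.Feature.Parent;            // web-scoped feature
        SPWeb rootWeb = web.Site.RootWeb;                      // lookup source lists live on the root web

        SPList library = web.Lists["Project Documents"];       // hypothetical library created by this feature
        SPList sourceList = rootWeb.Lists["Lookup Values"];    // hypothetical lookup source list

        // Add the lookup column that SPS 2010 dropped when it copied the content type to the list.
        string fieldName = library.Fields.AddLookup("Something", sourceList.ID, rootWeb.ID, false);

        var lookup = (SPFieldLookup)library.Fields.GetField(fieldName);
        lookup.LookupField = "Title";                          // the ShowField from the schema element
        lookup.Update();
        library.Update();
    }
}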

With that code in place the Lookup fields all started working and life was bliss once again. As with the custom Display/Edit/Add controls, this is a fairly easy way to get this capability into your farm, but it SHOULD be much easier.

Custom Edit/Add/Display forms in SharePoint 2010

SPS version: SharePoint 2010 Enterprise – CU Dec 2011   

I recently ran into a case where I needed a WSP-based solution to provide a custom edit form for just a specific content type. In this specific case we had a content type that would be one of a couple on a document library. This document library would be repeated in many subwebs throughout the site collection, and the form needed to implement the CodePlex jQuery-based filtered dropdown solution.

So yeah, I could do this with Designer, but it would not provide as clean and easily implementable a solution as I desired. What we came up with uses a capability built into the schema for custom content types in SharePoint. Specifically, this is nested in the XmlDocuments element within the schema (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spcontenttype.xmldocuments.aspx). Within this element there is a "FormTemplates" element with "Display", "Edit", and "New" elements. These elements allow you to set a content type to load a custom ascx control for your edit, display, and new item forms, so long as you deploy the ascx to the ControlTemplates folder and the name in the elements matches the name of the ascx file (minus the .ascx extension).

An example of its usage can be seen below:

<XmlDocuments>
  <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
    <FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
      <Display>DocumentLibraryForm</Display>
      <Edit>ProjectBaseEditForm</Edit>
      <New>ProjectBaseEditForm</New>
    </FormTemplates>
  </XmlDocument>
</XmlDocuments>

The “DocumentLibraryForm” shown in the Display element is the default form. The forms shown in the Edit and New elements are custom forms with the JQuery based dropdowns. So I rolled this into a WSP and deployed it along with my ASCX and life should be bliss…

Should be, but it was not. Despite a plethora of documentation that says this SHOULD work, it did not. Further investigation showed that SharePoint 2010 was removing these values from the content types that inherited from this base type. Now, there was some documentation which suggested that removing Inherits="TRUE" from the content type itself would fix this. However, I needed this type to inherit from the default Document type and some additional content types to inherit from it. No matter how I ran this, somewhere along that inheritance chain SPS 2010 always dropped these values.

After much chatting on discussion boards and with some Microsoft employees, the end solution was to insert some code into the FeatureActivated event for the feature that created the lists that used this content type. After grabbing a reference to the SPContentType object for the content type(s) that needed the custom form, you can set the "NewFormTemplateName", "EditFormTemplateName", and "DisplayFormTemplateName" values as you see fit, and this time they will stick. A rough sketch of that follows.
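This is a minimal sketch of the idea, assuming a web-scoped feature and hypothetical list and content type names (the template names match the earlier XML example; everything else is illustrative, not the actual solution code).

using Microsoft.SharePoint;

public class CustomFormsFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = (SPWeb)properties.Feature.Parent;
        SPList library = web.Lists["Project Documents"];                   // hypothetical library created by this feature

        // Work against the list's copy of the content type; that copy is
        // where SPS 2010 drops the FormTemplates values from the schema.
        SPContentType ct = library.ContentTypes["Project Base Document"];  // hypothetical content type name

        ct.NewFormTemplateName = "ProjectBaseEditForm";
        ct.EditFormTemplateName = "ProjectBaseEditForm";
        ct.DisplayFormTemplateName = "DocumentLibraryForm";
        ct.Update();
    }
}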

Should anyone find any other ways around this, please feel free to throw up a comment. It was a bit of a pain but once all this was worked out, it really simplified the deployment for this solution and provided a rather surgical way to push in custom add/edit/display forms as you see fit within a farm.