Archive for the ‘SharePoint 2010’ Category

Lookup field issues in SharePoint 2010

SPS version: SharePoint 2010 Enterprise – CU Dec 2011   

Lookup fields are great, but they have some serious limitations. Through the UI you can link a field in one list to another list on the same web. This lets you, say, present a set of values in a dropdown, similar to how you would with term sets in the Managed Metadata service. The plus with lookup fields is that you can assign a non-techie to administer that single list of values. They are also a good fit when your lookup values should not be available globally to every web app in a proxy group.

Now, when you want your lookup field to point at a list on a different web than the current one, you will need to turn to code. This is readily done using a custom content type and the list schema Field element:

<Field Type="Lookup" DisplayName="Something" Required="TRUE" ShowField="Lookupfieldname" UnlimitedLengthInDocumentLibrary="FALSE" Group="Real cool code group" ID="{fe685707-c198-45db-baf3-d8cd92a9f4f6}" SourceID="…" StaticName="LookupFieldStaticName" Name="LookupFieldName" Customization="" ColName="int1" RowOrdinal="0" />

The idea being you can set this on a base content type, and from that point forward the inheritance chain takes care of it.

In my scenario, I had a set of 3 lists on a root site collection that served as lookup source for a set of 4 libraries that appeared on dozens of subwebs, each library with half a dozen custom content types assigned to it. The libraries were all created by a custom feature that was activated when the sub webs were created. They were generated using a custom list def (Schema.xml) and a list instance built into the feature.

In MOSS 2007, this worked like a charm on a number of deployments I did. In 2010 I followed the same methodology and, for some reason, found my lookups were all broken. It took some troubleshooting, but what I found was that the content type inheritance was broken. I had a base content type, another set that inherited from that type, and a third level that was used on the lists (Company base CT → Project base CT → SpecificList CT). The lookup settings were kept all the way through these content types. When you create the list and assign the content type, SharePoint creates a new content type that is a copy of the SpecificList CT. I found that when SPS 2010 created that copy, it discarded the lookup field's list settings. No matter how I set it up, this continued to happen.

A bit of a pain, and from what I could tell a flaw in SPS 2010. I was finally able to overcome this by removing the lookup from the list Schema.xml file and, in the FeatureActivated event receiver, adding the lookup fields with SPList.Fields.AddLookup("ListColName", sourceLookupListId, false).
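A minimal sketch of that event receiver (list, column, and field names here are hypothetical, and the lookup source list is assumed to live on the root web; this uses the cross-web AddLookup overload that takes the source web's ID):

```csharp
using System;
using Microsoft.SharePoint;

namespace Company.SharePoint.Features
{
    public class LookupFieldEventReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web == null) return;

            // The lookup source list lives on the root web of the site collection.
            SPList sourceList = web.Site.RootWeb.Lists["ProjectCodes"];
            SPList targetList = web.Lists["ProjectDocuments"];

            // Add the lookup column pointing across webs; the schema Field
            // element could not carry these settings through CT inheritance.
            if (!targetList.Fields.ContainsField("ProjectCode"))
            {
                targetList.Fields.AddLookup("ProjectCode", sourceList.ID,
                    sourceList.ParentWeb.ID, false);
                SPFieldLookup lookup = (SPFieldLookup)targetList.Fields["ProjectCode"];
                lookup.LookupField = "Title"; // column shown in the dropdown
                lookup.Update();
            }
        }
    }
}
```

Wire this receiver up to the same feature that creates the libraries so the lookup is added right after the list instances are provisioned.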

With that code in place the Lookup fields all started working and life was bliss once again. As with the custom Display/Edit/Add controls, this is a fairly easy way to get this capability into your farm, but it SHOULD be much easier.


Custom Edit/Add/Display forms in SharePoint 2010

SPS version: SharePoint 2010 Enterprise – CU Dec 2011   

I recently ran into a case where I needed a WSP-based solution to provide a custom edit form for a specific content type. That content type would be one of a couple on a document library, and the library would be repeated in many subwebs throughout the site collection. It needed to implement the CodePlex jQuery-based filtered dropdown solution.

So yeah, I could do this with Designer, but it would not give me as clean and easily implementable a solution as I desired. What we came up with was built into the schema for custom content types in SharePoint. Specifically, it is nested in the XmlDocuments element within the schema. Within this element there is a FormTemplates element with Display, Edit, and New child elements. These allow you to point a content type at a custom ascx control for your display, edit, and new item forms, so long as you deploy the ascx to the ControlTemplates folder and the name in the elements matches the name of the ascx (minus the .ascx extension).

An example of its usage can be seen below (the Edit and New form names are just placeholders for your own control names):

<XmlDocuments>
  <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
    <FormTemplates xmlns="http://schemas.microsoft.com/sharepoint/v3/contenttype/forms">
      <Display>DocumentLibraryForm</Display>
      <Edit>CustomEditForm</Edit>
      <New>CustomNewForm</New>
    </FormTemplates>
  </XmlDocument>
</XmlDocuments>
The "DocumentLibraryForm" shown in the Display element is the default form. The forms named in the Edit and New elements are custom forms with the jQuery-based dropdowns. So I rolled this into a WSP, deployed it along with my ASCX, and life should be bliss…

Should be, but it was not. Despite a plethora of documentation that says this SHOULD work, it did not. Further investigation showed that SharePoint 2010 was removing these values from the content types that inherited from this base type. Now, there was some documentation which suggested that removing Inherits="TRUE" from the content type itself would fix this. However, I needed this type to inherit from the default Document type, and some additional content types to inherit from it. No matter how I ran this, somewhere along that inheritance chain, SPS 2010 always dropped the value.

After much chatting on discussion boards and with some Microsoft employees, the end solution was to insert some code into the FeatureActivated event for the feature that created the lists that utilized this content type. After grabbing a reference to the SPContentType object for the content type(s) that needed the custom form, you can set the NewFormTemplateName, EditFormTemplateName, and DisplayFormTemplateName values as you see fit, and this time they will stick.
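A sketch of that FeatureActivated fix (the library, content type, and form template names are hypothetical):

```csharp
using Microsoft.SharePoint;

namespace Company.SharePoint.Features
{
    public class CustomFormsEventReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web == null) return;

            SPList library = web.Lists["ProjectDocuments"];
            SPContentType ct = library.ContentTypes["Project Document"];

            // Re-apply the form template names that SPS 2010 dropped
            // from the list copy of the content type. These must match
            // the ascx names (minus .ascx) in ControlTemplates.
            ct.DisplayFormTemplateName = "DocumentLibraryForm";
            ct.EditFormTemplateName = "CustomEditForm";
            ct.NewFormTemplateName = "CustomNewForm";
            ct.Update();
        }
    }
}
```

Because this runs against the list-level copy of the content type, it survives the inheritance behavior that strips the FormTemplates values from the schema-declared version.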

Should anyone find any other ways around this, please feel free to throw up a comment. It was a bit of a pain but once all this was worked out, it really simplified the deployment for this solution and provided a rather surgical way to push in custom add/edit/display forms as you see fit within a farm.

Don't know what it is with SharePoint 2010, but the profile import configuration can be a bit touchy. Having gone through this recently, I thought I would share and maybe, just maybe, save someone some time. First of all, if you have not done so, read up on the profile synchronization configuration steps first.

We have a 6-server farm (2 WFE, 2 App, SQL A/P cluster) with SP 2010 SP1, all running on Windows Server 2008 R2 SP1, with Kerberos configured. Initial configuration went fine with one exception: after CA was deployed, the client requested a port change for it, which was done with PowerShell (this does play in later). The User Profile service was configured with its own managed account.

The first thing we saw: the Forefront Identity Manager Service and Forefront Identity Manager Synchronization Service were disabled and would not run. They got logon errors when this was attempted. This effectively blocks any profile importing, or even accessing the service through CA.

They were set up to run as the local machine account. I also noted when they were started that there was an audit failure on a registry key in the Security log. It turned out this key was at "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Forefront Identity Manager\2010\Synchronization Service", and the identity the FIM services ran under could not access it. I gave the managed account that the User Profile service runs under rights to this key and ensured the FIM services were running under that account.

So now our services start. And there was much joy across the land. Not really, because now when we tried to configure a profile sync connection we got errors about our profile import account being invalid. It would not even list out any domains. This was our second fun issue. Despite my earlier request for this account, it was never given Replicate Directory Changes permissions in AD. So after a slight battle with the AD administrator this was resolved, and we moved on to the next issue.

We hooked up the sync connection and started a full profile import, while contemplating a trip to the local pub once it was done (that trip ended up being delayed for a day or two). It ran for 40 minutes and imported exactly 0 profiles. Awesome. Looking at the server running the service app, the application logs were filled with warnings related to the MSI installer service, and the system logs had DCOM permission errors for APPID "000C101C-0000-0000-C000-000000000046" and the "Network Service" account.

So here I am going to cut to the fixes and save the suspense.

  1. Open regedit and find 000C101C-0000-0000-C000-000000000046; it will be at "HKEY_LOCAL_MACHINE\SOFTWARE\Classes\AppID\{000C101C-0000-0000-C000-000000000046}".
     a. Right click, Properties, Security tab, then click Advanced.
     b. Set the "Administrators" group as the owner of the key, then click OK.
  2. Give the "Administrators" group full control of the key, then click Apply and OK.
  3. Now open Component Services.
     a. Go to the DCOM Config folder under the local machine.
     b. Find 000C101C-0000-0000-C000-000000000046.
     c. Right click, Properties, Security tab, select the Custom radio button, and click Edit.
  4. Add Network Service with Local Launch and Local Activation rights, then click OK.
  5. Open Windows Explorer as administrator.
     a. Find C:\Program Files\Microsoft Office Servers\14.0 and give Network Service READ rights to the Tools, SQL, and Synchronization Service subfolders.
     b. Now execute "C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe" (I made this a shortcut on my desktop).
        i. Click on Management Agents.
        ii. Find an agent called MOSS_<GUID>, right click, and view its properties.
        iii. Click on Configure Connection Information. If you had to change the port on CA, you will find that the port was likely NOT changed in here and still points to the old one. You will need to change this to get rid of the connection error in the Event Viewer.
        iv. Verify the rest of the connection info, and verify the connection info on the other item in the list (it should be right below the MOSS_<GUID> item): the domain name, the account credentials, and the other settings.
        v. On more than one occasion in this farm, SPS and FIM were completely out of sync on configuration. I have done other farms where this disconnect did not appear to happen, but for some reason here, it did.

So, now that all these mods were made, we kicked off a full sync and 40 minutes later we had 50,000 profiles successfully imported. This fix list looks small, but it took a couple of days on Bing to sort out, especially the wonderful gem of the port not updating for FIM when the Central Administration port was changed (I HOPE this is fixed by an SPS CU or SP someday).

That’s all I got for now. I hope this saves some of you folks some time. Please shoot out any other recommendations you got as you troubleshoot these items yourself.

Happy hunting guys!

Some of my reference links (please add more via comments if you got them):

So I recently had a fun opportunity to migrate a customer from a WSS 2.0/Server 2003/SQL 2k implementation to Foundation 2010/Server 2008 R2/SQL 08. I learned some good things from it that I thought I would share for those out there who may try the same thing.

First and foremost, simple is NEVER simple. This WSS 2 site had a single site collection with about 40 sub webs and 200 megs of content. It was completely vanilla: no styles, no use of Designer, no custom anything. Not even the Fab 40 or Smiling Goat components (which I ALWAYS run into with WSS 2.0). So, simple, right? lol.

Second, because of rule 1, do NOT skimp on dotting the i's and crossing the t's. I was anal and did a slew of backups before and after the pre-upgrade check and many other steps. It saved me big time.

WSS 2.0 to WSS 3.0

Should have been a snap, but the server was about 2 years behind on patches. Also, the server was the primary DC for the company as well as their Exchange server, so the stakes were a bit high on keeping her up and running. To better my chances, before I migrated I applied all current patches to the OS and SQL, very carefully, given the critical role of this server in the client environment.

So far so good, except it took 3 hours. Due to budgetary concerns I had to do an in-place upgrade over top of the prod system (yeah, did not make me warm and fuzzy either, but sometimes budgets trump best practices and you do what you can). I ran the in-place upgrade and it seemed fine. Then I realized the content webs did NOT come across into WSS 3.0. Furthermore, and I have NO clue how this happened, when the WSS 3.0 upgrade ran, it upgraded the DBs and somehow put them into a schema/format that SQL 2k considered a future release, so SQL dropped the DBs and left them unattached, refusing to reattach them. At this point we were completely down: SQL 2k would not allow a restore or reattach because the DB was in a post-2k format according to SQL, my IIS web app was orphaned (with host header and SSL settings), and life was looking grim.

Now, we had taken a virtual snapshot before doing any of this, so completely losing everything was not going to happen; I just did not want to lose the 4 hours of work I would otherwise have to redo. First task: resolve the SQL issues. I installed SQL 2005 Express on the WSS 3 box, created a completely new instance, and reattached the old content DB there. Then I plugged it into the WSS 3.0 farm and the 3.0 migration took. Heart rate went back down to normal levels. Step 1 of the migration was complete, minus the web app configuration.

WSS 3.0 to Foundation 2010

With the mess of the WSS 3.0 migration out of the way, it was time to do some real migration. And this time, things ran MUCH smoother. I took a backup of the WSS 3.0 content DB, copied it to the Foundation 2010 SQL box, and restored it to a proper DB name (at that point it still had the WSS 2.0 name). Then I used PowerShell to attach the 3.0 DB, and all the content was in.
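The attach step was just the stock cmdlets, something along these lines (the web application URL and database names here are placeholders):

```powershell
# Run from the SharePoint 2010 Management Shell on the Foundation box.
# Test-SPContentDatabase reports missing features and orphans before you commit.
Test-SPContentDatabase -Name "WSS_Content_Migrated" `
    -WebApplication "http://intranet.contoso.local"

# Attach the restored WSS 3.0 content database; SharePoint upgrades
# its schema to the 2010 format as part of the mount.
Mount-SPContentDatabase -Name "WSS_Content_Migrated" `
    -DatabaseServer "SQL01" `
    -WebApplication "http://intranet.contoso.local"
```

Running the Test cmdlet first is worth the extra minute; it flags problems while you can still fix them on the 3.0 side.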

We left it in 2007 visual upgrade mode to allow users to adapt to the new URL and ensure the environment was stable. A couple days down the road we will be doing the full visual upgrade to 2010 and finalizing the system.

One additional thing: at the end of this, while I was logging in fine and new users were logging in fine, the majority of old users were failing authentication. In the Foundation 2010 server's Security log there were Failure Audits for all these users with event ID 4625. Try as we might, we could not get past these. What finally solved them: the client had each affected user clear their browser cache. Once that was done, it started working like a champ. Seeing as we ported this from WSS 2.0 to 2010 overnight, along with the SPUsers in the content DB, I suspect some type of token cached from the WSS 2.0 environment caused the issue. Should you run into this, have the users clear their browser cache and that may clear it up for you.

What I took away from this:

– Simple is never simple. So plan, even for the simple migrations. Get the TechNet and MSDN articles down and at your fingertips to allow you to be nimble when you need to be; having those at the ready saved me a lot of time on search engines. The other thing is to plan all the way to the end. This means finalization steps like configuring backups on the new environment. If you end up in a long-running migration like this one turned into, you may not be thinking clearly at the end and may just overlook it. I can think of nothing more depressing than going through all that and finding the system down and data lost the next morning.

– Backups are crucial. At no time was I ever in danger of losing the data. My worst case scenario was restoring the virtual back to its pre-migration state and losing MY time. While my time is valuable, it is nowhere near as valuable as production data.

– The effort could have been sped up if I had downloaded all the potential updates (SQL 2005 being the big one here) ahead of time and better scoped out the WSS 2.0 environment. Doing those patches days ahead of the migration would certainly have simplified life and prevented the marathon migration effort.

Anyone have any other guidance/suggestions/experience, feel free to post up.

Happy migrations everyone!

So recently I was asked a question, and I was annoyed enough with not knowing the answer to look it up, and boy did it turn into a rabbit hole.

The question…”What is it exactly that makes sandboxed code so secure?”

Simple enough question, and I launched into the nice, cleanly wrapped out-of-the-box discussion of how it is isolated, but realized all the while that if I took a shovel and tried to dig under that, well, I could not tell you REALLY why it was. So up came Bing and into the world I went.

So WHY is it "safe"?

1. Process isolation – Unlike farm solutions, which go into the solution store, sandboxed code is uploaded into the Solutions gallery on your site collection. This includes your WSP (which is the SharePoint term for a CAB file), which holds your solution files. When executed, sandboxed code runs not in the IIS application pool your web app runs in, but in a separate process (the sandboxed worker process, SPUCWorkerProcess.exe), and each sandboxed solution is further isolated in its own thread within that process. For this reason it cannot access DLLs from OTHER sandboxed solutions or any DLL that is not in the GAC. It cannot even see them. Your sandboxed code is about as isolated as it can get in a SharePoint environment.

2. CAS – Yep, code access security. All the sandboxed processes on the entire farm are governed by their own CAS policy, which is hidden in the 14 hive. This CAS policy restricts the access sandboxed code can have even further. Were I braver, and had a virtual I could stand hosing, I would really like to play with this and see how much security can be overridden by manipulating this file, or by just giving sandboxed code full trust. It is NOT, I repeat NOT, supported by MS, but hey, that's what virtuals are for.

3. DLL deployment – So I started asking folks, "where do my sandboxed DLLs go?" Once folks started thinking about it, well…nobody really knew. Some blogs claimed the GAC, some claimed the bin, some simply assumed they go to a better place…so where is it? c:\programdata\Microsoft\Sharepoint\UCCCache. They are put into subfolders within that directory corresponding to user sessions. When the user's session ends, they are removed from that folder unless they are reloaded by another session within a specific period of time.

4. Selective scalability – Yeah, this barely fits into the realm of "safe", but I would also consider it, because one thing you can do is set WHERE SharePoint executes sandboxed solution code in a farm. You can have it run on the server handling the request (the default), or set it to use processor affinity. You can also disable the Sandboxed Code Service on some servers, pushing SharePoint to load balance onto secondary, perhaps dedicated, servers. In terms of "safe", this means you can push execution of your code to a server that is not directly serving content to end users and limit the impact a sandbox gone berserk can have on the farm.
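As a sketch, that per-server toggle can also be driven from PowerShell (the server name here is a placeholder):

```powershell
# Find the Sandboxed Code Service instance on a given farm server.
$instance = Get-SPServiceInstance -Server "APP02" |
    Where-Object { $_.TypeName -like "*Sandboxed Code*" }

# Stop it on this server to push sandboxed execution onto the
# remaining servers where the service is still running.
$instance | Stop-SPServiceInstance -Confirm:$false
```

Use Start-SPServiceInstance the same way to bring the service back on a server you do want executing sandboxed code.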

Pretty cool stuff. Now, I do have to say, pretty much all of this can be found on MSDN and TechNet.

If anyone out there has the bravery to mess with the trust levels of sandboxed code, please feel free to comment back on this thread. I would love to see how far you can push that without killing a farm. Like I said, though, it is NOT supported by MS, so I would recommend doing this only for experimentation purposes and only on a virtual.

I truly believe the single biggest factor in the long-term success and/or failure of a SharePoint implementation is the creation, or lack, of effective governance. I have seen massive farms with buckets of money thrown at them fail their organizations miserably because of it. Likewise, I have seen modest farms with limited scope effectively change the way a company does business with effective governance.

Effective governance reaches far beyond the technical team, past the SharePoint architect, past the IT manager, and out into the enterprise and its employees. The true source of guidance for the architecture, design, implementation, and management of your farm lies with your business users. They KNOW how the business runs, what it needs, and what works for your culture. The trick is getting them involved and channeling those raw ideas into a SharePoint-based framework.

If you do nothing else with this post, go read Microsoft's SharePoint governance resources. MS has learned a lot about governance and has changed its approach for 2010 significantly.

What’s the point?

This is a simple yet frequently asked question. WHY should you spend time and money and pull in all sorts of business users who will most likely come in kicking and screaming? It will draw out your timeline, add complexity to the entire project, and force the techies to deal with "users". The answer is multifaceted. First, the governance plan ensures that the SharePoint farm meets a specific business need. The strategy team sets the purpose and goal for the creation of the farm, guides its evolution, and keeps it relevant to the business's needs. Second, user involvement and ownership. I cannot tell you how many client sites I have been to where the techies think their SharePoint implementation is great while end users would rather just use file shares, Exchange public folders, email, local Office docs, etc. The end result may be an extremely tech-savvy implementation, but if it does not achieve user acceptance, it is a failure. Pulling end users into the project at the beginning via the governance team ensures you get their feedback and valuable insight into business needs, and at the same time it gives the business community ownership of the portal. I can honestly say the BEST implementations I have done include features I would never have thought of, but business users came up with them in a second.

Alright I get it, now what do I do?

The first step is forming your strategy team. This team should include representatives from the major business units as well as an IT representative. It is crucial that the business gets its say, but IT must also be there to provide guidance; there are many great ideas that simply are not financially feasible, and IT advises on those areas. The first order of business, as you will see in the MS docs, is formulating the vision and purpose for the farm. Keep it high level; focus on what SharePoint will do for the business. Next, work your way down into guidelines and objectives for the farm, defining your principles and only going as tech-deep as is absolutely necessary. Define the roles and responsibilities. Then move into your guidelines and take them as deep as you can. Bear in mind the governance plan is a living document; it can and will evolve with your implementation. From the get-go you may not be able to get through every item, but take it as far as you can and keep it generalized. This is a high-level strategy document, not a technical manual.

What about the tech side?

Usually when the whole governance document comes up, the focus is on technical governance: development and deployment of web parts, workflows, event handlers, etc.; CAS- vs. GAC-based code; site creation and business-unit-specific site creation; branding; and so on. You probably noticed this has NOT been covered to this point. In my opinion, it is not part of the main governance document. It does, however, need to be captured. I recommend a second governance plan be devised for this, referred to as "technical governance", that goes into the techie side of things. This document is subject to the guidelines and policies set forth in the strategic document, and its audience is the techies. It really cannot be generated until the primary governance is developed. Please bear in mind, this is not a standalone document. I want to stress that: it absolutely must abide by what is set forth in the primary governance document. Any SLAs, security principles, customization guidelines, etc. must be followed in this document, and any deviation from the main document must be presented to the main strategy team and approved by them.

Bringing the organization together

This collaborative partnership between these documents and teams not only helps provide a useful implementation, user acceptance and education, guided and controlled growth of the farm, and focus on strategic objectives, but can also have a profound impact on the culture of the organization. In many shops, IT and the business units are not used to collaborating at this level, and that can play out negatively in the applications being developed. By going through this sometimes painful exercise of developing a governance model for your implementation, you can also help redefine the cultural dynamics of the enterprise, even if only for the SharePoint farm, which can pay off in other benefits for the org.

I have been seriously neglecting this blog, so I thought I would stop that trend by posting some of what I have been doing. Along with heavy client work, I have been living in the MSDN and TechNet labs for 2010. If you have not seen them, they are more than worth your time. I got my 2010 certs, but with all the bouncing from 2007 to 2010, things can get sideways in your head. These labs help you refocus where you need to.

Anyway, back to the point: I was doing the Sandboxed Solution with Web Parts lab and ran into an issue. I coded the hello world web part and deployed it. When trying to add it to a page I got "The sandboxed code execution request was refused because the Sandboxed Code Host Service was too busy to handle the request". Now, these labs are not always fast and timeouts are common, so I tried simply resubmitting a few times, to no avail. So I jumped on Bing and came up with a post that set me straight.

The services were all started, but when I got into the registry the State key was 23c00. I set it to 23e00 as the post directed, and it was "lab on". Apparently this is also something you can run into in the "real" world, so really it was a good experience. One more little caveat to plug into your brains.