Archive for the ‘MOSS 2007’ Category

So I recently had a fun opportunity to migrate a client from a WSS 2.0/Server 2003/SQL 2000 implementation to a Foundation 2010/Server 2008 R2/SQL 2008 one. I learned some good things along the way that I thought I would share for those out there who may try the same thing.

First and foremost, simple is NEVER simple. This WSS 2.0 site had a single site collection with about 40 sub webs and 200 megs of content. It was completely vanilla: no styles, no use of Designer, no custom anything. Not even the Fab 40 or Smiling Goat components (which I ALWAYS run into with WSS 2.0). So, simple, right? lol.

Second, because of rule 1, do NOT skimp on dotting the i’s and crossing the t’s. I was anal and did a slew of backups before and after the pre-upgrade check and many other steps. It saved me big time.
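
For reference, on the WSS 2.0 side the pre-upgrade check is the prescan.exe tool that ships with the WSS 3.0 setup files, and the extra backups can be scripted with stsadm. A minimal sketch of both, run on the old server with the 2.0 copy of stsadm from the 60\BIN hive (the URL and backup path are placeholders):

stsadm -o backup -url http://wss2portal -filename D:\Backups\wss2portal-preupgrade.dat -overwrite
prescan.exe /all

The version 3 upgrade will refuse to proceed until prescan has completed cleanly, so it is worth running early and fixing whatever it flags before the real migration night.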

WSS 2.0 to WSS 3.0

Should have been a snap. But the server was about 2 years behind on patches. Also, the server was the primary DC for the company as well as their Exchange server, so the stakes were a bit high on keeping her up and running. To better my chances, before I migrated I applied all current patches to the OS and SQL, very carefully given the critical role this server played in the client environment.

So far so good, except it took 3 hours. Due to budgetary concerns I had to do an in-place upgrade over top of the prod system (yeah, did not make me warm and fuzzy either, but sometimes budgets trump best practices and you do what you can do). Ran the in-place upgrade, seemed fine. Then I realized that the content webs did NOT come across into WSS 3.0. Furthermore, and I have NO clue how this happened, when the WSS 3.0 upgrade ran, it upgraded the DBs and somehow put them into a schema/format that SQL 2000 considered a future release, so SQL dropped the DBs and they sat unattached, with SQL refusing to reattach them. So at this point we are completely down. SQL 2000 will not allow a restore or reattach because the DB is in a post-2000 format according to SQL. My IIS web app is orphaned (with its host header and SSL settings) and life is looking grim.

Now, we had taken a virtual snapshot before doing any of this, so completely losing everything was not happening; I just did not want to lose 4 hours of work I would have to redo anyway. First task: resolve the SQL issues. I installed SQL 2005 Express on the WSS 3.0 box, created a completely new instance, and reattached the old content DB there. Then I plugged it into the WSS 3.0 farm and the 3.0 migration took. Heart rate went back down to normal levels. Step 1 of the migration was complete, minus the web app configuration.
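
For anyone repeating this, hooking the rescued content DB up to the WSS 3.0 web application is a one-liner with stsadm; a sketch, with placeholder URL, database, and instance names:

stsadm -o addcontentdb -url http://portal -databasename WSS_Content_Portal -databaseserver WSS3SERVER\SQLEXPRESS
stsadm -o enumcontentdbs -url http://portal

The enumcontentdbs call just confirms the database actually landed on the web application before you start poking at the sites.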

WSS 3.0 to Foundation 2010

With the mess of the WSS 3.0 migration out of the way, it was time to do some real migration. And this time, things ran MUCH smoother. I took a backup of the WSS 3.0 content DB, copied it to the Foundation 2010 SQL box, and restored it under a proper DB name (at this point it still carried the WSS 2.0 name). Then I used PowerShell to attach the 3.0 DB and all the content was in.
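
If you are scripting that step, the 2010 Management Shell pair doing the work is Test-SPContentDatabase and Mount-SPContentDatabase; a sketch, with the database and web application names as placeholders:

Test-SPContentDatabase -Name WSS_Content_Portal -WebApplication http://portal
Mount-SPContentDatabase -Name WSS_Content_Portal -DatabaseServer SQL2010 -WebApplication http://portal

Test-SPContentDatabase is worth running first; it lists missing features, web parts, and orphans before the mount kicks off the actual schema upgrade.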

We left it in the 2007 visual upgrade mode to allow users to adapt to the new URL and to ensure the environment was stable. A couple of days down the road we will do the full visual upgrade to 2010 and finalize the system.
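
When that day comes, the visual upgrade does not have to be done site by site in the UI; it can be pushed from PowerShell. A sketch, assuming the placeholder web application URL below:

Get-SPSite -WebApplication http://portal -Limit All | ForEach-Object { $_.VisualUpgradeWebs() }

VisualUpgradeWebs() flips every web in each site collection over to the 2010 look, so test it against a copy first; going back means touching the UI version on each web individually.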

One additional thing at the end of this: while I was logging in fine and new users were logging in fine, the majority of old users were failing authentication. On the Foundation 2010 server’s Security log there were Failure Audits for all of these users with Event ID 4625. Try as we might, we could not get past these. What finally solved them: the client had each user clear the browser cache on their machine. Once that was done, it started working like a champ. Seeing as we ported this from WSS 2.0 to 2010 overnight, along with the SPUsers in the content DB, I suspect some type of token cached from the WSS 2.0 environment caused the issue. Should you run into this, have the users clear their browser cache and that may clear it up for you.

What I took away from this:

– Simple is never simple. So plan, even for the simple migrations. Get the TechNet and MSDN articles down and at your fingertips so you can be nimble when you need to be. Having those at the ready saved me a lot of time on search engines. The other thing is to plan all the way to the end. This means finalization steps like configuring backups on the new environment. If you end up in a long-running migration like this one turned into, you may not be thinking clearly at the end and may just overlook them. I can think of nothing more depressing than going through all that and finding the system down and data lost the next morning.

– Backups are crucial. At no time was I ever in danger of losing the data. My worst-case scenario was restoring the virtual machine back to the state before the migration effort and losing MY time. While my time is valuable, it is nowhere near as valuable as production data.

– The effort could have been sped up if I had downloaded all the potential updates (SQL 2005 being the big one here) ahead of time, and better scoped out the WSS 2.0 environment. Doing those patches days ahead of the migration would have certainly simplified life and prevented the marathon migration effort.

Anyone have any other guidance/suggestions/experience, feel free to post up.

Happy migrations everyone!

     So as of this week, I have 2 of the 4 2010 certs out of the way. Historically, I have not been a big fan of certs. I have worked in enough client sites that were loaded up with folks with certs who were nowhere near what I would consider useful in a real-world setting. Frequently, it was the guy in the corner with 4 servers in his cube, and no certs, who held the office together.

     All that being said, working for a small Gold Certified partner, it is not an option to not get certified. We all have to have them to keep our Gold Certification. With 2007 I found time to get the MCTS certs on WSS 3.0 and MOSS administration and configuration. I can honestly say learning what I needed in order to pass those exams was EXTREMELY helpful. Even with the experience I had with MOSS before the exams, I found a lot of holes that I never would have plugged without them.

     Flash forward to 2010 and these exams are even more useful. The important thing is to resist the temptation to just pull down a brain dump and memorize questions. Yeah, having the cert itself is nice but, IMO, long term, learning the content, the meat of these certifications, is far more valuable. Granted, this is easy to say when you have an employer who pays for your exams. However, I have been to many clients who have hired SharePoint folks with certs and a dangerous level of knowledge. By dangerous, I mean the guy who passed the install and config exams by memorizing the questions. Who, when put into an environment with heavy GPOs, firewalls, SSL, NLB, and active/passive SQL servers, blows up an entire production farm and leaves 3,000 users in the dark because, while he remembers the questions from the exam, he knows nothing about the actual functionality of SharePoint. When you get into the thick of it, memorizing questions and answers for a large multi-server platform like SharePoint will not give you the knowledge you need to do the job. Maybe on a good day, it will be enough. Maybe on a simple deployment it will be enough. However, in a heavy deployment, on a tough day, you will hurt your organization and find yourself unemployed.

     Also, let’s be honest, SharePoint is HUGE. I mean really huge. There are a ton of components/features in there. The only way to learn how to use SharePoint to serve your organization to its fullest is to learn, at least at a high level, all the capabilities. I would recommend picking up the WROX SharePoint 2010 Administration book (http://www.wrox.com/WileyCDA/WroxTitle/Professional-SharePoint-2010-Administration.productCd-0470533331.html) ISBN: 978-0-470-53333-8 and reading it cover to cover. I cannot tell you how many clients I go to who have a SharePoint implementation in place and an enterprise license, then go out and buy other software to meet a need their SharePoint enterprise license already covers. Learning the base capabilities of SharePoint can help with this. Learning the meat of the product by really studying for the exams will get you there.

      Alright, I am getting off the soapbox now everyone. Good luck on getting certified and more importantly learning as much as possible on the platform to better serve your organization.

           Somehow, somewhere along the way, I ended up being the migration expert for my employer. Still trying to figure out exactly how that happened, but I can say this: I have done a lot of migrations. From SharePoint 2003 to SharePoint 2007, and now from SharePoint 2007 to SharePoint 2010. I have also had the pleasure of bringing content in from many other sources, from custom ASP/SQL based solutions to lesser-known third-party solutions like EPrise. The fun thing is everyone thinks migration is easy to do, that all those marketing claims from vendors about how easy it is are true, and that you are just one button click/file copy, etc., away from migrated bliss. Such thoughts have compelled me to share my opinions/experiences on the subject.

 Migrations are a rare opportunity

         Alright, we all know how corporate IT works. There is never enough money, never enough time, never enough people. The only chance you have to get something done is to do it right the first time. You will not have the opportunity to go back later to fix anything; it simply does not happen in the vast majority of IT shops. Because of this, a migration offers a rare opportunity. You have the chance to reorganize, add metadata, add functionality. All the things you have piled up over the last 2-5 years that you wanted to do to your old site. Do not waste this chance. Do not just pull the same problems into a new farm.

PLAN, PLAN, PLAN

          WHY are you migrating? Seems like this would be an easy one, and your manager will likely throw out a quick, seemingly sensible reply that makes him sound like he is working for the MS marketing team. But let’s consider it. WHY exactly are you bringing this stuff over? Are you looking to leverage the Managed Metadata Service to help you manage content in a sane way? Are you looking for the wonderful powers of search? Are you trying to deal with poor navigation/structure in your old site? Why? This is a very important question and should be explored. If you have bought SharePoint and really are loving the feature set, then you need to plan out how and where you are going to leverage that feature set in your portal. There are many parts of SharePoint that are MUCH easier to design and implement up front than to go back and redo later.

          I have many clients who at first want a straight migration. “Just copy it from the original location and we will worry about re-organizing later.” So basically, take the crud from the original site, the content you never got around to re-organizing, the content that has no metadata, the content that is dysfunctional, and slam it into SharePoint. Now try to act surprised when you end up with the same mess, except inside SharePoint. MANY of the migrations I see fail, fail because a client copies the same mess into SharePoint and expects that somehow the mess will clean itself up.

              Do not get me wrong, SharePoint is a great platform, it is absolutely awesome, I have completely swallowed the SharePoint Kool-Aid. However, without proper planning you can screw up a SharePoint implementation to a level that will make you wish for your old portal back. You MUST plan out your migration. If it was a straight copy, why would you spend the money to license SharePoint? As I said before, why would you want the exact same thing copied into a different platform? You can do some great things but you will need to plan it out and know what the target is.

 Few if any migrations are really simple

             I have done dozens of them, and of those maybe one was really easy. Of all the migrations, all but a handful were sold to me as “straightforward”, with their systems (particularly the older SharePoint systems) described as “vanilla”. Rarely are things as simple as you think they are. To begin with, I go back to my first 2 points: if you embrace those ideals, then you are not doing a straight migration. Second, I find few true “vanilla” sites, particularly with SharePoint. Among the common issues I hit on “vanilla” sites: some rogue employee fires up SharePoint Designer on a few sites without management knowing; some random farm issue (i.e. patching) does something not too nice to the content DB schema; a set of corporate GPOs corrupts part of the farm; that vanilla site had the Fab 40 and about 70 other freeware web parts/workflows/themes/page layouts installed on it, all “critical”, and 25 of those 70 vendors have gone out of business (my personal favorite). You name it, I have seen multitudes of them. Web portals are such unique creations that the older the portal, the more likely there are going to be significant challenges in the migration effort.
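
This is also why I scope a “vanilla” farm before believing anyone. On the WSS 3.0/MOSS 2007 side, the SP2-era stsadm operations below are my usual first pass (the database name is a placeholder):

stsadm -o preupgradecheck
stsadm -o enumallwebs -databasename WSS_Content_Portal

preupgradecheck reports things like missing features and site definitions, orphans, and other upgrade blockers; enumallwebs lists every web in the content DB along with the template it was built from, which is a quick way to spot those 70 “critical” freeware installs before they surprise you mid-migration.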

Metadata NOW

              This is one of my personal pain points with any client, mostly because the actual content here does not come from the technical folks; it comes from the business users. Let’s say it together: “Metadata is a good thing.” Metadata is what makes things work. It is what allows search to find things, allows views to filter and group and sort, and allows workflows to differentiate documents/list items; it is the lifeblood of a portal. It is also one of those things that is rarely planned and often ignored. When folks talk about SharePoint messes, I usually hear about how they have thousands of unused sites in a messed-up hierarchy. What is usually one of the worst things, though, is their information architecture: the metadata they SHOULD have planned out and tied to the content in their portal. Now they have 10,000 documents in there and nobody can find them or view them in a sensible way.

               Metadata needs to be planned out, it needs to be carefully considered, and most importantly it needs to be there from day 1. Think of it this way: would you allow folks to start putting thousands of folders full of documents into your file cabinet without any type of label or organization? Most likely not. This is exactly what you do to SharePoint when you do not utilize at least a minimal amount of metadata. Now, each department likely has its own metadata. Sometimes it overlaps with other departments’. Sometimes you have the SAME information in differently named fields, etc. This is where the planning comes in. You want to mesh these together as much as reasonably can be done. You need to help these different units work together and come up with a cohesive set of metadata elements that can be used at an enterprise level.

              Lastly, to reiterate: do it NOW. Do it before you migrate a single artifact. It is thousands of times easier to do this at the front of the migration than it will be to do it later on. If it gets pushed to later, it will most likely NEVER happen, or it will be so rushed and poorly planned that it will be useless and actually hinder portal adoption.

Fund it properly or not at all

             So let me say something that will be utterly unappreciated by any MS marketing folks out there. It is alright to walk away from SharePoint. It is perfectly fine to weigh the costs to move to the platform, licensing, planning, etc., and say “We do not have the budget for it now.” It can be unnerving, because it may mean you limp along with your current system longer. I would caution against just going to another system, as my comments on planning, metadata, and such apply to many other advanced platforms out there. However, I think it would be a mistake to run half-funded into a new platform, hoping for the best. What you will get is a poorly planned, poorly implemented portal that never meets your needs and sours your organization on SharePoint entirely. It also has you wasting the money you did spend. Many companies want to throw that money out there just to say they have done SOMETHING, even if it ends up falling flat on its face. Sometimes you just need to accept that you cannot afford to do something right and wait until you can. That is alright, and it can be a much better decision than doing it halfway.

          So that’s it. Not as technical as normal, but it is out there. I could easily have made this about 10 times longer, but I reckon I should actually do some real work. The short gist of this: SharePoint is an awesome platform, absolutely awesome. However, if you are going to migrate, you need to learn to use it and to make your portal, your content, and your way of thinking adapt and expand to utilize the new technology, not just throw the same old problems into a new platform.

One of the more common recurring battles I have at client sites is the need for a full-fledged multi-server staging/test farm. Many will refuse to see the cost justification in creating this setup. They look at it as more of a luxury or indulgence. Ironically, most of these same clients also have virtualized environments where the cost of standing up these systems is smaller than in the old world where this required physical boxes. Granted, SAN space does not come cheap, but I would argue the cost of engaging your technical staff in an emergency disaster recovery process as a result of a simple hotfix tends to get more expensive, especially when you have new OS patches every month that could break SharePoint (I seem to remember a particular IIS patch that caused some serious pain). Now add in the service packs, cumulative updates, and hotfixes we encountered with SharePoint 2007.

So first my recommendation, then the why’s. I would strongly recommend a staging farm mimic production in server count, networking, SQL clustering, etc., as much as possible. At the very least the staging farm should have multiple servers (1 WFE, 1 App/Index, 1 SQL). I would also suggest that having a proper staging farm is not a luxury or a nice-to-have, but absolutely a requirement for an organization wanting to run long term on the SharePoint platform. On networking, as expensive as it can be, I would recommend implementing NLB if you are using it in production, with the SAME hardware/software.

In using the staging farm, there should be a couple of governance guidelines as well. The first: all customizations installed in staging should be installed exactly as they are in production. This means WSP deployments. If you are not using WSPs, you need to get on the bandwagon; it is not just a good idea, it is the way things need to be done in SharePoint whenever possible. This also means the same site collections, the same web applications, security, profiles, profile import, search settings, etc.
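
In practice, that means the exact same handful of commands gets run in staging first and then replayed verbatim in production. A sketch of the usual WSP cycle, with a made-up solution name and URL:

stsadm -o addsolution -filename CorporateBranding.wsp
stsadm -o deploysolution -name CorporateBranding.wsp -url http://portal -immediate -allowgacdeployment
stsadm -o execadmsvcjobs

(-allowgacdeployment only belongs there if the package actually drops assemblies in the GAC; execadmsvcjobs just forces the timer jobs through so you can see the result immediately.)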

Now the why’s. I will go into actual scenarios I have seen. There are a lot of them I could use, but we will go with these shining examples.

Company A, a large insurance company, 20,000-plus users on a 5-server farm. They had to meet some very specific compliance requirements. This required the implementation of GPOs on the system, many of which had nothing to do with SharePoint. Like many companies, they got everything up and running in staging, then production, then proceeded with the lockdown. They had a staging environment. The GPOs were applied in groups of 10-100 depending on perceived risk. What followed was a 2-month endeavor in which staging was down 90% of the time. The GPOs blocked the OSes from working and in some cases created communication issues that would NEVER have surfaced without a multi-server staging farm. There were zero production outages associated with the application of the GPOs because of the staging farm. With a system like SharePoint, which relies on multiple servers functioning as one unit, the only way to truly reach compliance without risking the production farm is to have a multi-server staging farm to use to determine how to reach compliance.

Another example, Company A again: MOSS Service Pack 1, rolled out in staging first. DCOM permission errors popped up and declarative workflows ceased to function. What was worse, we could not roll back. There was absolutely no way to uninstall the service pack, and we had to go into DR mode on the farm, bringing it down and restoring server images and DBs. Staging was down for a full day. Production never went down.

Company B, a large pharmaceutical, 10,000 employees with a SharePoint-based intranet, refused to implement a staging farm, citing costs. They implemented a large WSP-deployed branding solution consisting of custom master pages, page layouts, themes, feature stapling, event handlers, custom web parts, and custom CSS. Deployment on the development server: completely successful. Deployment on the 5-server production farm went smoothly. Testing revealed sporadic outages in production almost immediately. For 3 days users in production had to deal with sporadic outages, data loss, and other issues. After 3 days the issue was traced down to one of the branding features that failed to fully deploy on one of the WFEs. The sporadic outages were a result of NLB bouncing users back and forth between WFE servers. Using production as a testing platform (yes, I said we were forced to use production as a testing platform, lacking a multi-server staging farm), we determined a way to force the deployment to succeed on multi-server environments and got it working. Company B implemented a multi-server staging environment on the tail end of that effort and has NOT had a production outage in the 8 months since staging was implemented, though they have had plenty of staging outages.
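
For what it is worth, that failed-on-one-WFE condition is visible if you interrogate the solution store instead of trusting the deployment dialog; a quick check, with the same made-up solution name:

stsadm -o displaysolution -name CorporateBranding.wsp

The XML it returns includes the deployment state and last operation details; anything short of a clean success across the farm is your cue to retract and redeploy before the users find it for you.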

Company A again: they wanted SSL on their site but wanted it terminated on the load balancers. Implemented in staging, went fine. Implemented in production and had an immediate outage. Turned out their NLBs in staging were not the same as the production ones. The staging units were actually older, cheaper models and worked fine. The “good” production ones stripped off the host headers after decrypting the packets. Without host headers, IIS never sent the traffic to the correct web application. When we suggest the SAME versions in production and staging, this is why. This is the only time we took production down for Company A. It was a painful lesson for an otherwise VERY careful company, but one they learned well.

Company C, a large healthcare organization, 40,000-plus users, SharePoint 2007 intranet. Company C instituted a multi-server staging farm per our suggestion. Company C has a heavily branded solution with a moderate number of coding customizations. They have multiple WSPs for branding, custom event handlers, themes, custom web parts, and other customizations. The initial WSP deployment was smooth. Updates caused issues as a result of the self-referencing issue with master pages and page layouts in SharePoint 2007. We were able to build a customized upgrade path for the client’s implementation without bringing production down. In staging it took 2 days to develop this. In production, as a result of that effort, we rolled out the changes and implemented the upgrade plan in 10 minutes.

Company D, a large insurance organization, 15,000-plus users, SharePoint 2010 intranet. Implemented without any staging farm, in a heavy GPO environment. They performed an in-place upgrade from SharePoint 2007 to SharePoint 2010 and pushed out multiple customizations. The end result was an immediate failure of the production system. GPOs disabled a number of key components in the OS such as IIS, DCOM, and ACLs. The end result was a complete repave of the production servers (actually half a dozen of them), and major production outages over the 3 weeks it took to troubleshoot their couple of thousand GPO settings.

Company E, a large aerospace firm, 80,000+ users, SharePoint 2007-based intranet, 5-server farm. They engaged us at the tail end of a development effort for architecture guidance. Per our recommendation, they implemented SharePoint in a full-fledged staging system. Immediately upon deployment, numerous security issues with the development customizations occurred. Mainly they encountered NTLM double-hop issues, but some other deployment issues as well. They had to implement Kerberos with constrained delegation in staging (and eventually production). As with many very large organizations, we had a lot of free rein in staging to implement Kerberos settings and other administrative tasks; they were handled locally and we were able to do them quickly. The production farm was not the same, and it was managed on the other side of the country. Implementing a new Kerberos setting or any custom setting was 3 weeks from formal request to implementation. We were able to implement all items in staging in 2 weeks, and in production we needed only a single request for all settings as a result.
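
The Kerberos piece itself is not much typing, which is why the three-week turnaround stung. It is mostly SPNs on the web application’s app pool account plus the constrained delegation settings on the AD objects; a sketch with made-up names:

setspn -A HTTP/portal.contoso.com CONTOSO\svcPortalAppPool
setspn -A HTTP/portal CONTOSO\svcPortalAppPool

The constrained delegation part is set on the account objects in Active Directory (the Delegation tab), which is exactly the kind of change that required the formal request process in this client’s production domain.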

I could go on with many other samples. The fact is, with SharePoint 2007 we saw it over and over, and I would expect nothing but the same with 2010; it is a much more complex system with a lot more rich features to break. A multi-server staging farm is the best way to keep your production farm up and running. Even if you have no customizations and only have a small farm, sooner or later, despite their best efforts, Microsoft will issue a patch, update, KB, or hotfix that breaks your SharePoint farm. Unless you are extremely lucky and/or deliberately keep your farm well behind on patching (and even that will not always work), you will sooner or later have an issue related to patching.

A New Twist on SSP Site Access Denied

Posted: September 28, 2010 in MOSS 2007

So you have a farm that has been working for a long time. Then you do something to it, like, say, execute a script to update the passwords on all your service accounts, including an stsadm -o editssp command. Then you find even your farm install account gets an “Access Denied” when you try to access the SSP administration site. Despite your farm install account being in the site collection administrators for the SSP administration site, you are still denied access to the site. You find nothing in the event logs, and the ULS simply has an access denied error on Default.aspx in the SSP Admin site.
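
For context, the SSP piece of that password script was nothing more exotic than the documented stsadm operation; the SSP name, account, and password below are placeholders:

stsadm -o editssp -title SharedServices1 -ssplogin CONTOSO\svcSSP -ssppassword NewP@ssw0rd

Which, as it turns out, does exactly what it says on the tin and not one thing more.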

You can access “_layouts/settings.aspx” if you type in the direct URL; you can even access the search administration, user profile administration, and all the other SSP administration pages. It is JUST the default.aspx page that seems to be blocking you. What is even more frustrating: when you change the SSP service account to your farm account or content access account, suddenly everything works.

I have seen many posts on possible causes for this issue. The fixes range from resetting the passwords again, to rebuilding your SSP, to a large number of other advanced, in-depth fixes.

The key to this issue lies in the URL of the access denied page you are given when the user tries to access the default.aspx page, along with some default settings quietly put into place when you configure your SSP. The quiet settings: go to “Policy for Web Application” in Central Administration. If you select the web application for your SSP administration site and examine the entries, you will see one of the keys. Your search crawling account, SSP service account, and farm install account all have access rights granted here. This is the key to your access denied issue.

Turns out, when you trace the access denied page link, you will find a listID (GUID) in the URL; this points to one of several lists included in the SSP administration site. Those lists inherit security from the main site. We begin to build the picture; here is the last piece. When you open Default.aspx, the page uses the SSP service account to access these custom lists to build its list of links. On the farm where I encountered this issue, during its inception more than a year ago, there were some serious security issues as a result of DoD compliance GPOs, particularly with search. The resulting support call ended in a series of stsadm calls being made which reset a lot of security items. During that change the SSP service account was modified to an incorrect setting, and during the password reset a week ago we fixed that issue and created the access denied issue. The editssp command will change the SSP service account fine, but it will not add the Policy for Web Application setting to the farm.
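
The fix, then, is to get the SSP service account back into the web application policy on the SSP administration web application. You can do it through Central Administration (Application Management > Policy for Web Application), or script it from a farm server along these lines; the URL, account name, and the Full Control level here are assumptions, so match whatever your other service account entries show:

[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
# Look up the SSP admin web application by its URL
$webApp = [Microsoft.SharePoint.Administration.SPWebApplication]::Lookup([uri]"http://sspadmin:28282")
# Add a policy entry for the (new) SSP service account and bind a role to it
$policy = $webApp.Policies.Add("CONTOSO\svcSSP", "SSP Service Account")
$policy.PolicyRoleBindings.Add($webApp.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl))
$webApp.Update()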

One of those little things that can cause a severe headache for you out there. Hopefully, I save someone some trouble with this one.

 

For many of us, audiences are one of those “cool” capabilities with lots of possibilities that you rarely use, and even more rarely use extensively. Then you get a project that leans on them, and leans on them HARD, and some serious limitations show up.

1. SharePoint audience rule creation UI – The UI itself is pretty straightforward. You create an audience and are given a choice between members satisfying ALL rules or ANY rule. Then you create a rule, and for each rule you are given the ability to test a single value against a single column with one of (“=”, “!=”, “<”, “>”, “Contains”, “not contain”).

For many clients this is enough. You can get creative and come up with some cool combinations. Then you go to a large bank, insurance agency, etc., where reality is not that simple, where a person’s role in the organization is determined by a range of results from an even larger range of columns, and you find this UI seriously lacking. It cannot do very complex logic. Where your audience logic consists of nested AND and OR logic, this UI will not allow you to do what you need.

For this issue, my friends, there is a solution. It comes in the form of some freeware (http://stsadm.blogspot.com/2008/08/assigning-rules-to-audiences-via-stsadm.html) which extends the stsadm commands, allowing you to script audience creation (nice) and build complex XML-based audience rules.

2. Limitation on the number of audience rules – This limitation is one of those painful ones. The blog listed in #1 begins to go into it a bit, as I believe it is related to the way Microsoft decided to store these rules. Either way, it is painful. Add 18 rules, you are fine. Add that 19th, I dare ya. If you are using the SharePoint UI to do this, you will get a REAL helpful error message: “Invalid Value”. You will dig through ULS and event logs, test and test, and find NOTHING to help you. Guess what: in SharePoint land, “Invalid Value” equals “too many rules”. Using the object model, I got the real message: no more than 18 rules per audience. Seems like a lot, until you hit a healthcare-based client with hundreds of facilities (hospitals, clinics, doctors’ offices, labs, etc.) which roll up into a dozen major facilities (hospitals) based on a complex set of rules. You blow past 18 quick. This is not a limitation you can sidestep; you need to design your audiences accordingly. I found little to no documentation on this limitation, and a dozen contacts in MCS had never hit it, simply because they had not had clients who implemented audiences to that extent.
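
If you want to poke at this yourself (or see the real error instead of “Invalid Value”), the rules are reachable from the object model. A rough sketch from PowerShell on a MOSS farm server, with the site URL, audience name, properties, and values all made up:

[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")

$site = New-Object Microsoft.SharePoint.SPSite("http://portal")
$ctx  = [Microsoft.Office.Server.ServerContext]::GetContext($site)
$mgr  = New-Object Microsoft.Office.Server.Audience.AudienceManager($ctx)

$aud = $mgr.GetAudience("Hospital A Staff")
if ($aud.AudienceRules -eq $null) { $aud.AudienceRules = New-Object System.Collections.ArrayList }

# Rules are a flat list of components; the AND/OR grouping operators are components too
$aud.AudienceRules.Add((New-Object Microsoft.Office.Server.Audience.AudienceRuleComponent("Department", "Contains", "Cardiology"))) | Out-Null
$aud.AudienceRules.Add((New-Object Microsoft.Office.Server.Audience.AudienceRuleComponent("", "AND", ""))) | Out-Null
$aud.AudienceRules.Add((New-Object Microsoft.Office.Server.Audience.AudienceRuleComponent("Office", "=", "Hospital A"))) | Out-Null

$aud.Commit()
$site.Dispose()

The rules only take effect after the next audience compilation, and piling on components this way is also where the rule-count ceiling makes itself known with a proper error message.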

 

I have not had the chance to test out 2010 for these limitations yet. If someone out there has or wants to give it a try PLEASE do and let me know how this turns out.

                So I was recently tasked with migrating a mess of files (for those non-southern folks… a mess of files would be a number >5,000) into a client’s MOSS farm. They had to be deployed to multiple doc libs with numerous content types and custom columns. The file source was a very familiar multi-folder hierarchy with folder names that were almost directly in line with the metadata we needed to capture.

                They already have DocAve 5.x, so that was our migration tool. Seemed easy enough, and for the most part… it was… for the most part. There are some limitations and undocumented configuration options, however, that you should be aware of when you migrate with this tool.

                To begin with: forget about mapping a folder/drive to the agent that will be running the migration. It is not currently supported (it may be soon, however). You will need to map your source through the “\\servername\driveletter$\folder\subfolder” format IF you need to apply custom rules. If you don’t need custom rules, a mapped drive or normal share will work fine.

                Alright, so you have your source set and you get to the mapping setup. Everything seems straightforward, then you get to your rules. You stare at the first field, “path”, you go back and forth between Google and the DocAve docs, and you cannot really find a good definition of it. So let’s end the suspense: this field is the path, from the AGENT running the migration, to the data. But wait! There is more. It only allows the “Drive:\folder\subfolder” format. So now comes the serious restriction. After a few hours with DocAve technical resources (who were EXTREMELY helpful, btw), we found the first limitation. To do this, you need to copy the source files to the AGENT. It will not support a mapped drive, a file share, nor any other method (I am stressing this is a CURRENT limitation, as my discussions with AvePoint’s technical resources led me to believe this may be corrected soon).

                So the end solution: I copied all my source data to a secondary data drive on my agent (which was also one of my WFEs), added it as a source in DocAve, then used the path from the AGENT (in this case “D:\MigrationData”) to derive my rules. And voila! It worked!

                A couple of words to the wise on these rules: PLAN THEM OUT. Examine your folder structure on the source and adjust it if necessary to limit the rule count. A good measure is to write a couple of catch-all rules to catch the files that somehow escape your normal rules. This keeps at least your base metadata populated. If you are bringing across a mess of files, it is very likely you will have a good chunk of files/folders that fall outside the rules you set.

                Second, a good methodology: build a couple of rules with the UI, then click the “Download” button to pull down the rule XML. You will find a lot of your rules can be rapidly developed with this XML file, a good XML editor, and a lot of Ctrl+C, Ctrl+V with minor edits. In my migration I had 30+ rules I developed in this manner, which would have been a royal pain through the UI but were simple with the XML file.

           Anyway, overall it was a great experience and the tool worked very nicely once the learning curve and limitations were overcome. Hopefully we can shorten both for anyone reading this.

                Happy Migrations everyone!

Recently, I ran into an issue where a client’s site was having problems in edit mode IF they added a content editor web part and attempted to modify the content using the rich text or source editors. When they clicked OK in these editors, their page would refresh with a “The page you are attempting to save has been modified by another user since you began editing.” error. They would then have 3 scary and confusing options: 1. Preview current version, 2. Save and overwrite changes, 3. Save without changes. No matter which option they chose, it seemed to be totally ignored and their changes were always saved.

This behavior is confusing to say the least. Their site is pretty heavily branded, so the first thing I did was remove all the web parts one by one. Still the same issue. Then I tried an OOB page layout, same issue. Then I reverted back to the default.master and bam, the issue was gone. So I tried the old page layout and web parts with the default.master, no problem. So we found this was a master page issue.

Some quick googling found a couple of posts which describe this being introduced when the site actions control is moved below the page publishing control bar. So, time to ratchet the ULS log up to “please fill up my hard drive till it bleeds” mode, bust out Fiddler, and do some real analysis. Found some interesting things.

First of all, when you click OK on the HTML or rich text editor from the content editor web part, it calls the WebPartService.asmx web service and saves your changes, then forces a full page refresh when the window closes. Every other OOB web part pushes the new content to the controls in the web part maintenance panel. Acting on a hunch, I tried some Ctrl+F5 and plain old F5. When the page was in edit mode, even when absolutely no changes had been made, this reproduced the issue the majority of the time.

Second, the three options that did not appear to work actually worked just fine. The page had already been saved when the user clicked OK on the editor form, so no matter which option you choose, you are loading the same page. By opening a second window with the same user name you can see the changes are there. Interestingly enough, the page is NOT in edit mode in the second window you open, which tends to make you think edit mode is either tied to the session or pushed into a cache or cookie for just that browser session and client (this is important in the site actions control issue). If anyone has further info on this, I would LOVE to hear it.

Third, when looking at the site actions control in multiple windows, the only difference is that the “Edit Page” option is in a different state (disabled vs. enabled). The ULS logs show multiple steps to actually trimming this. The XML is loaded and trimmed multiple times AFTER the publishing controls are added to the form.

My current hypothesis (still testing as time permits) is that the publishing controls detect changes between the page (the part below the publishing controls) as pulled from the DB and what is in the current active edit cache. Any changes/differences will trip the error. Now, MS had counted on differences in web parts, edit mode vs. non-edit mode views, nav changes, etc., but they likely never counted on, or tested, the site actions control trimming.

So what we come to is this. If you can avoid it, do not put the site actions control under the publishing control. If you do, this is an issue you will have to be willing to live with. Like I said, I could not see any other repercussions but I would not be surprised if there were others that show up. Would be nice if I could get any word from MS on this but with 2010 coming out soon, I would not hold my breath.

Page Layouts, Zone Templates, and Content Editor Web parts

So I have a simple task: create a page layout, deployable via WSP, that has multiple web part zones with content editor web parts inserted into them, with default content and styling applied. I also needed to do it with “no code”.

I set about the task and the first run works great:

Sample:

<ZoneTemplate>
  <WebPartPages:ContentEditorWebPart runat="server" ID="StorySubheadlineContent">
    <WebPart xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/WebPart/v2">
      <Title>QNews subheadline</Title>
      <FrameType>None</FrameType>
      <Description>QNews article sub headline.</Description>
      <IsIncluded>true</IsIncluded>
      <ZoneID>StorySubHeadlineZone</ZoneID>
      <PartOrder>0</PartOrder>
      <FrameState>Normal</FrameState>
      <Height />
      <Width />
      <AllowRemove>true</AllowRemove>
      <AllowZoneChange>true</AllowZoneChange>
      <AllowMinimize>true</AllowMinimize>
      <AllowConnect>true</AllowConnect>
      <AllowEdit>true</AllowEdit>
      <AllowHide>true</AllowHide>
      <IsVisible>true</IsVisible>
      <DetailLink />
      <HelpLink />
      <HelpMode>Modeless</HelpMode>
      <Dir>Default</Dir>
      <PartImageSmall />
      <MissingAssembly>Cannot import this Web Part.</MissingAssembly>
      <PartImageLarge>/_layouts/images/mscontl.gif</PartImageLarge>
      <IsIncludedFilter />
      <ContentLink xmlns="http://schemas.microsoft.com/WebPart/v2/ContentEditor" />
      <Content xmlns="http://schemas.microsoft.com/WebPart/v2/ContentEditor"><![CDATA[<FONT size=3><STRONG>This is formatted text</STRONG></FONT>]]></Content>
      <PartStorage xmlns="http://schemas.microsoft.com/WebPart/v2/ContentEditor" />
    </WebPart>
  </WebPartPages:ContentEditorWebPart>
</ZoneTemplate>

 

Next time around, I manually upload the aspx into the page layouts/master page gallery over top of the old one, publish it, approve it, and all hell breaks loose. Sometimes I get the same web part twice; on some uploads I get a “No parameterless constructor defined for this object.” message when trying to create a page with the layout. I can remove everything from the entire layout except the word “hi” and this message keeps coming.

 

It is in clearing the parameterless constructor message that I find my savior at: http://vspug.com/teameli/2009/08/13/dealing-with-the-quot-no-parameterless-constructor-defined-for-this-object-quot-on-changed-page-layout/

 

What I find is that MOSS would add the new web parts I put in and cache the old ones. Even though they no longer existed in the aspx file, they still did to MOSS, and they would until manually cleared through the web part maintenance page. So if you find yourself in this pickle, try the following procedure as you are updating the aspx file.

 

1. Save the new aspx to your development machine.

2. Go to http://<server url>/_catalogs/masterpage/<pagelayoutname>.aspx?contents=1

3. Check the page layout out and delete all the web parts from the maintenance page.

4. Upload the new version.

5. Publish and approve it.

I hope this can save some of you a good deal of time.

Using the Content Editor Web Part to Query Lists from Other Sites through SharePoint Web Services

 

                So I recently ran into a situation where I had a client with a hosted solution that did not allow the Content Query Web Part or any custom web part or component coding. All we had was the content editor web part and some creative JavaScript to help us.

                The goal was to display data from a list located on the root site on each of the 35 subsites in the farm, without custom coding and without having to recreate and maintain the same list or make any linked lists, lookup fields, etc. on all of the subsites. I found a number of SPS 2003 posts that got me nearly where I needed to be. I took their code and converted it to a format that matched what MOSS returned from the web service call.

The solution was to insert the code below into the source editor of a content editor web part:

<span id="uniquecontrolName"></span>
<script language="javascript">
getListList();

function getListList() {
  var txt = document.getElementById("uniquecontrolName");

  // Build the SharePoint web service URL based on the current location
  var wsURL;
  wsURL = window.location.protocol + "//";
  wsURL += window.location.host;
  var path = window.location.pathname.split("/");
  path.pop();
  var x;
  for (x in path) {
    wsURL += path[x] + "/";
  }
  // Override with the site that actually holds the list (the root site in our case)
  wsURL = "http://something.something.com/_vti_bin/lists.asmx";

  // SOAP action and request envelope for GetListItems
  // (viewFields is its own GetListItems parameter; field names use the SharePoint internal name)
  var wsSoapAction = "http://schemas.microsoft.com/sharepoint/soap/GetListItems";
  var wsXML = '<?xml version="1.0" encoding="utf-8"?>';
  wsXML += '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">';
  wsXML += '<soap:Body><GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">';
  wsXML += '<listName>List name or List GUID</listName>';
  wsXML += '<query></query>';
  wsXML += '<viewFields><ViewFields><FieldRef Name="Event_x0020_Date" /></ViewFields></viewFields>';
  wsXML += '<queryOptions><QueryOptions><IncludeMandatoryColumns>TRUE</IncludeMandatoryColumns><DateInUtc>TRUE</DateInUtc></QueryOptions></queryOptions>';
  wsXML += '</GetListItems></soap:Body></soap:Envelope>';

  // Create the XML document and get the HTTP response using the XMLHTTP object
  var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
  var httpResponse;
  try {
    httpResponse = getServiceResults(wsURL, wsSoapAction, wsXML);
  }
  catch(e) {
    alert(e.message);
    return;
  }

  // If getServiceResults returns a 404, then it's probably because the
  // page is being launched from a document library instead of a web part
  if (parseInt(httpResponse) == 404) {
    txt.innerHTML = "<p>This code can only be executed from a web part.</p>";
    return;
  }
  else {
    xmlDoc.loadXML(httpResponse);
  }

  // Get the results into a collection
  var listitems = xmlDoc.getElementsByTagName("z:row");

  // Loop through the results and build table rows
  var output = "";
  for (var i = 0; i < listitems.length; i++) {
    output += "<tr>";
    output += "<td>" + listitems(i).getAttributeNode("ows_Event_x0020_Date0").text + "</td>";
    output += "<td><a href='http://privateplacement.edensandavant.com/Lists/Calendar/DispForm.aspx?ID=" + listitems(i).getAttributeNode("ows_ID").text + "'>" + listitems(i).getAttributeNode("ows_Title").text + "</a></td>";
    output += "</tr>";
  }

  // Display the table
  var table = "";
  table = "<table border='0' width='100%' cellpadding='2' ";
  table += "cellspacing='0' class='ms-summarystandardbody' rules='rows'>";
  table += output;
  table += "</table>";
  txt.innerHTML = table;
}

function getServiceResults(url, soap, xml) {
  // Send the XML packet to the web service and return the HTTP response text
  try {
    if (xml.length > 0) {
      var xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
      xmlHttp.open("POST", url, false);
      xmlHttp.setRequestHeader("SOAPAction", soap);
      xmlHttp.setRequestHeader("Content-Type", "text/xml");
      xmlHttp.send(xml);
      if (parseInt(xmlHttp.status) == 404) {
        return 404;
      }
      else {
        return xmlHttp.responseText;
      }
    }
  }
  catch(e) {
    alert(e.message);
  }
}

</script>