Thursday, May 30, 2013

Deploying SharePoint 2010 Solution on SharePoint 2013

As you know, SharePoint 2010 has the 14 hive, where you deploy your files and then reference them like this:

Physical path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS
Virtual path: /_layouts/<your folder>/<your files>

In SharePoint 2013, there is a new 15 hive:
Physical path: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\TEMPLATE\LAYOUTS
Virtual path: /_layouts/15/<your folder>/<your files>

If you have a SharePoint 2010 solution (.wsp file) and you need to deploy it on SharePoint 2013, you have the following options.
First, add the solution to your farm using PowerShell:
  • Add-SPSolution C:\SP2010Project.wsp

Now:
  • Deploy to the 14 hive (the default for a 2010 package): Install-SPSolution sp2010project.wsp
  • Deploy to the 15 hive: Install-SPSolution sp2010project.wsp -CompatibilityLevel 15
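In practice the install usually needs a couple more parameters. Here is a hedged, fuller sketch (whether you need -GACDeployment, or -WebApplication for web-application-scoped solutions, depends on what your package actually contains):

Add-SPSolution -LiteralPath "C:\SP2010Project.wsp"
# Legacy 2010 mode: files go to the 14 hive
Install-SPSolution -Identity SP2010Project.wsp -GACDeployment -CompatibilityLevel 14
# Native 2013 mode: files go to the 15 hive
Install-SPSolution -Identity SP2010Project.wsp -GACDeployment -CompatibilityLevel 15
# Or deploy to both hives at once
Install-SPSolution -Identity SP2010Project.wsp -GACDeployment -CompatibilityLevel "14,15"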

Friday, May 10, 2013

Split CSV file based on content using PowerShell


You have a CSV file that contains department employees in a format like this:
Department,Employee
Sales,emp1
HR,emp2
Sales,emp3
Finance,emp4
Finance,emp5
Security,emp6
Security,emp7
Security,emp8
HR,emp9
And you need to split the contents of this file into separate files based on department name. So for the above example, we should get four files: Sales.csv, HR.csv, Finance.csv, and Security.csv. Each file contains only its own employees.
And the solution really shows the power of PowerShell pipelining:
Import-Csv file.csv | Group-Object -Property "department" | 
 Foreach-Object {$path=$_.name+".csv" ; $_.group | 
 Export-Csv -Path $path -NoTypeInformation}
Dissecting the above commands:
  • Import-Csv file.csv: Parses the CSV file and returns an array of objects.
  • | Group-Object -Property "department": Since we need to split by department, it makes sense to group objects by the department property.
  • | Foreach-Object {...}: We need to apply an action to each group (department), so we pipe the resulting groups to Foreach-Object.
  • $path=$_.name+".csv": Within the foreach, we need to create a temporary variable ($path) to be passed to the next pipeline responsible for the actual saving. Note that I use the semicolon ";" to separate this part from the next. And I used the name property of the group (which maps to department name in our case) to format the file name.
  • $_.group | Export-Csv -Path $path -NoTypeInformation: Then, for each group, we export its contents (the CSV file rows) to the file path created in the previous step. So we again pipe the group property of the group item (which is an ArrayList of the original objects) to the Export-Csv cmdlet.
And the result should be files like:
Finance.csv:
"Department","Employee"
"Finance","emp4"
"Finance","emp5"

List Google Docs using PowerShell


Are you looking for a quick and easy way to access your Google Docs from PowerShell? The Google Data Provider provides an easy-to-use ADO.NET interface that you can take advantage of with your PowerShell scripts. Simply use the included SQL-like .NET objects (GoogleConnection, GoogleCommand, GoogleDataAdapter, etc.) in your PowerShell scripts to connect to your Google Apps accounts and synchronize, automate, download, and more!

Using the Google Data Provider in PowerShell to List Google Docs:

# Load the Google Data Provider assembly
[Reflection.Assembly]::LoadFile("C:\Program Files\RSSBus\RSSBus Google Data Provider\lib\System.Data.RSSBus.Google.dll")

# Connect to Google 
$constr = "User=[username];Password=[password]"
$conn= New-Object System.Data.RSSBus.Google.GoogleConnection($constr)
$conn.Open()

$sql="SELECT Name, AuthorName, Type, Updated, Weblink from Documents"

$da= New-Object System.Data.RSSBus.Google.GoogleDataAdapter($sql, $conn)
$dt= New-Object System.Data.DataTable
$da.Fill($dt)

$dt.Rows | foreach {
 Write-Host $_.updated $_.name
}
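The vendor bills the interface as SQL-like, so simple filtering should work as well. The WHERE clause below is my assumption for illustration, not something taken from the vendor's examples:

# Hypothetical filtered query (assumes the provider accepts WHERE clauses)
$sql = "SELECT Name, Updated FROM Documents WHERE Type = 'TXT'"
$da = New-Object System.Data.RSSBus.Google.GoogleDataAdapter($sql, $conn)
$dt = New-Object System.Data.DataTable
$da.Fill($dt)
$dt.Rows | foreach { Write-Host $_.Updated $_.Name }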
Listing is only the first step. With full CRUD support, you can use the Google Data Provider to easily upload and download documents as well. The following bit of PowerShell code downloads one of the documents listed above:

Download a file from Google Docs:

$cmd= New-Object System.Data.RSSBus.Google.GoogleCommand("DownloadDocument", $conn)
$cmd.CommandType= [System.Data.CommandType]'StoredProcedure'
$cmd.Parameters.Add( (New-Object System.Data.RSSBus.Google.GoogleParameter("@Type", "TXT")) ) 
$cmd.Parameters.Add( (New-Object System.Data.RSSBus.Google.GoogleParameter("@Name", "myfile")) ) 
$cmd.Parameters.Add( (New-Object System.Data.RSSBus.Google.GoogleParameter("@LocalFile", "d:\myfile.txt")) ) 
$reader = $cmd.ExecuteReader()
Likewise, calling the UploadDocument Stored Procedure allows your scripts to upload documents directly to Google Docs.

As you can see, the Google Data Provider provides a hassle-free way to access the features of Google Apps directly from PowerShell script, and eliminates the headache involved with authentication, security, etc. 

Happy scripting!

Partial page load issue with ASP.NET MVC and head.js


I've been working with a small team over the past year or so on a large-scale web application built in ASP.NET MVC 3. We're using some of the best new client-side technologies with it as well - jQuery, Bootstrap and head.js to name a few. Overall, the project has gone very smoothly, with only a few minor hiccups or delays.
Well, except for one lingering issue.
We began experiencing intermittent partial page loads almost as soon as the first draft of the UI was released to our testing servers. The following behaviors were exhibited:
  • This issue could happen on any page in the application when it loaded.
  • 65-70% of the time a page would load correctly.
  • Pressing F5 (refresh) would reload the page and always fixed the issue.
  • Generally, either part of the main menu bar wouldn't load, or our datagrid wouldn't load. Both controls could have a problem on a given page if you reloaded it multiple times.
Perplexing and frustrating to say the least. First, we thought it was a problem with the tiny VMs we had in our testing environment. Then we guessed it might be a packet delivery issue with the VPN tunnel between the testing network and the office network. Then we supposed maybe our self-hosted CDN wasn't set up correctly, and switched to using Amazon S3 (which we were planning on doing anyway).
Each of these theories (and others) was tested and debunked. No love. There's no way it could be in our code, right??
What ended up working for us was moving our "core" libraries outside of head.js and using their "execute in-order" method for the rest of our libraries. Our _Layout.cshtml page looks like this:
<!doctype html>
<html lang="en">
<head>

(... stylesheets and other head content ...)

<script type="text/javascript" src="http://cdn.company.com/JS/head.min.js"></script>
<script type="text/javascript" src="http://cdn.company.com/JS/jquery-1.7.1.min.js"></script>
<script type="text/javascript" src="http://cdn.company.com/JS/jquery-ui-1.8.18.min.js"></script>
<script type="text/javascript" src="http://cdn.company.com/JS/modernizr-2.5.3.min.js"></script>
<script type="text/javascript" src="http://cdn.company.com/JS/bootstrap-2.1.0.min.js"></script>
<script type="text/javascript" src="http://cdn.company.com/JS/flexigrid.pack.js"></script>

<script type="text/javascript">
    head.js(
        "http://cdn.company.com/JS/jquery.validate.min.js",
        "http://cdn.company.com/JS/another.script.js",
        (... other javascript files ...)
    );
</script>

(... other code / markup ...)

</html>
We implemented this change about a week ago and have yet to experience the issue since. A couple of additional notes / comments to share about the change:
  • This seems to work because generally all of the other javascript files you'll want to load probably depend on one or more of these "core" files to work properly. Letting head.js handle this becomes even more delicate when you consider the fact that ASP.NET is trying to load multiple Partial Views per page, many of which contain controls that need the "core" in place to be built properly. Any interruption of loading a "core" file before a control needs it may break part of the page.
  • While the head.js usage documentation lists what we have here as a correct way of using the library, this is not how their demo is built ('View Source' to check it out).
  • The most similar issue we could find was here on the head.js GitHub project site. This is where we got the idea to try an alternate implementation.
  • Since the purpose of head.js is to improve page load times, you may be wondering if this change hurt our delivery speed. Unfortunately, because pages weren't reliably loading for a number of months, it was hard to measure where we were at before the change, so I'm unable to determine a speed difference.

I decided to document this because we weren't able to find any instances online of someone else having the problem; I'm hoping this post will save a few other teams some time and headaches.

Clean up old files using PowerShell


Doing a quick Google search (which may be what brought you here) will show you a number of variations on PowerShell scripts for deleting files. I'm sure many of them are perfectly adequate for the task, and in some cases, have features that mine doesn't. My solution excels at code readability and control, the latter of which I feel is fairly important when deleting files in bulk.
Not much else to say about it I guess; the purpose and uses of this script are pretty straightforward.
Here's the code:
# |Info|
# Written by Bryan O'Connell, February 2013
# Purpose: Delete files from a folder that haven't been modified for the
# specified number of days.
#
# Sample: DeleteOldFiles.ps1 -folder "C:\test" -days_old 7 [-only_this_type ".xls"]
#
# Params:
#   -folder: The place to search for old files.
#
#  -days_old: Age threshold. Any file that hasn't been modified for more than
#  this number of days will be deleted.
#
#  -only_this_type: This is an optional parameter. Use it to specify that you
#  just want to delete files with a specific file extension. Be sure to
#  include the '.' with the file extension.
#
# |Info|

[CmdletBinding()]
Param (
  [Parameter(Mandatory=$true,Position=0)]
  [string]$folder,

  [Parameter(Mandatory=$true,Position=1)]
  [int]$days_old,

  [Parameter(Mandatory=$false,Position=2)]
  [string]$only_this_type
)

#-----------------------------------------------------------------------------#

# Determines whether or not it's ok to delete the specified file. If no type
# is specified, all files are ok to delete. If a type IS specified, only files
# of that type are ok to delete.

Function TypeOkToDelete($FileToCheck)
{
  $OkToDelete = $False;

  # Note: an unbound [string] parameter defaults to "" (not $null), so test for empty.
  if ([string]::IsNullOrEmpty($only_this_type)) {
    $OkToDelete = $True;
  }
  else {
    if ( ($FileToCheck.Extension) -ieq $only_this_type ) {
      $OkToDelete = $True;
    }
  }

  return $OkToDelete;
}

#-----------------------------------------------------------------------------#

$FileList = [IO.Directory]::GetFiles($folder);
$Threshold = (Get-Date).AddDays(-$days_old);

foreach($FileToDelete in $FileList)
{
  $CurrentFile = Get-Item $FileToDelete;
  $WasLastModified = $CurrentFile.LastWriteTime;
  $FileOkToDelete = TypeOkToDelete($CurrentFile);

  if ( ($WasLastModified -lt $Threshold) -and ($FileOkToDelete) )
  {
    $CurrentFile.IsReadOnly = $false;
    Remove-Item $CurrentFile;
    write-Output "Deleted $CurrentFile";
  }
}

write-Output "Press any key to quit ...";
$quit = $host.UI.RawUI.ReadKey("NoEcho, IncludeKeyDown");


NOTE: If you run into problems getting the script to run on your machine, there are a few troubleshooting tips in my original article.
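For comparison, if you don't need the extension-filter logic or per-file control, a compact pipeline does the same job. This is a hedged sketch; adjust the folder, age, and -Filter to taste, and consider previewing first:

# Delete *.xls files under C:\test not modified in the last 7 days
Get-ChildItem "C:\test" -Filter "*.xls" |
  Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
  Remove-Item -Force   # swap -Force for -WhatIf to preview the deletions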

Extract worksheets from Excel into separate files with PowerShell

I recently needed to dust off an old VB script I'd written years ago to get worksheets out of Excel files. I've also been curious about doing more with PowerShell, and besides feeling guilty about putting a VB script into use in 2012, it seemed like a really good learning opportunity.

So why not just rewrite the script in .NET? Well, you can definitely do that; in fact, the code would look very similar. However, not everyone is a .NET developer. I wrote the original VB script on a team where we were building C++ DLLs for ETL processing; .NET wasn't part of our code base. I also think there are plenty of IT roles - DevOps, DBAs, Network Administrators to name a few - that might find a simple PowerShell tool like this a little easier to use and/or modify for their needs.

So that being said, just copy & paste the code below into an empty .ps1 file, and you should be good to go. To use it, simply execute the following command (should work from command-line, batch file, or managed code):

PowerShell.exe -command "C:\ScriptFile.ps1" -filepath "C:\Spreadsheet.xls" -output_type "csv"

I did run into one problem / issue while writing this script - getting it to run the first time! Thanks to this great article by Scott Hanselman, I found out that there are some very tight Windows security restrictions on PowerShell scripts - particularly the ones you didn't write yourself. After reading his article, it seemed easier for me (and for anyone who wants to use my code) to just post the source code rather than a downloadable script with certificates, at least in this instance. Maybe if I write another PowerShell article I'll give the certificate thing a go.

If you get the error message I got - "The file C:\ScriptFile.ps1 cannot be loaded. The execution of scripts is disabled on this system. Please see "Get-Help about_signing" for more details." - you can enable execution of PowerShell scripts you've created by running the following command 'As Administrator':

PowerShell.exe Set-ExecutionPolicy RemoteSigned

Anyway, here's my script:


# Purpose: Extract all of the worksheets from an Excel file into separate files.

[CmdletBinding()]
Param ( 
    [Parameter(Mandatory=$true,Position=0)] 
    [string]$filepath,

    [Parameter(Mandatory=$true,Position=1)] 
    [ValidateSet("csv","txt","xls","html")] 
    [string]$output_type 
)

#-----------------------------------------------------------------------------#

# Figures out and returns the 'XlFileFormat Enumeration' ID for the specified format.
# http://msdn.microsoft.com/en-us/library/office/bb241279%28v=office.12%29.aspx 
# NOTE: The code being used for 'xls' is actually a 'text' type, but it seemed
# to work the best for splitting the worksheets into separate Excel files.

function GetOutputFileFormatID
{
    Param([string]$format_name)

    $Result = 0

    switch($format_name)
    {
        "csv" {$Result = 6}
        "txt" {$Result = 20}
        "xls" {$Result = 21}
        "html" {$Result = 44}
        default {$Result = 51}
    }

    return $Result
}

#-----------------------------------------------------------------------------#

$Excel = New-Object -ComObject "Excel.Application"
$Excel.Visible = $false #Runs Excel in the background.
$Excel.DisplayAlerts = $false #Suppress alert messages.

$Workbook = $Excel.Workbooks.Open($filepath)

#Loop through the Workbook and extract each Worksheet in the specified file type.
if ($Workbook.Worksheets.Count -gt 0) {
    write-Output "Now processing: $filepath"

    $FileFormat = GetOutputFileFormatID($output_type)

    #Strip off the Excel extension.
    $WorkbookName = $filepath -replace ".xlsx", "" #Post-2007 extension
    $WorkbookName = $WorkbookName -replace ".xls", "" #Pre-2007 extension

    foreach($Worksheet in $Workbook.Worksheets) {
        $ExtractedFileName = $WorkbookName + "~~" + $Worksheet.Name + "." + $output_type

        $Worksheet.SaveAs($ExtractedFileName, $FileFormat)

        write-Output "Created file: $ExtractedFileName"
    }
}

#Clean up & close the main Excel objects.
$Workbook.Close()
$Excel.Quit()
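One hedged addition, not part of the original script: Excel's COM objects are notorious for keeping EXCEL.EXE alive even after Quit(). If you see orphaned Excel processes after running the script, explicitly releasing the COM references is a common fix:

# Optional cleanup (standard COM-interop pattern; an addition, not from the original script)
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($Workbook) | Out-Null
[System.Runtime.InteropServices.Marshal]::ReleaseComObject($Excel) | Out-Null
[GC]::Collect()
[GC]::WaitForPendingFinalizers()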

Deploying Workflow as WSP File


In this article we can learn how to:
  • Create WSP using Visual Studio 2010
  • Deploy WSP to another SharePoint site
  • Export a Workflow as WSP

WSP Extension


A file with the WSP extension is a SharePoint Solution Package. It is actually a CAB file. When we create a workflow and package it as a WSP file, we can use that file to deploy the workflow to multiple SharePoint sites.

Creating a WSP File inside Visual Studio


We have to use the Package command for the solution to create the WSP file.


You can get the WSP file inside the bin\Debug folder of the solution.


The WSP file is actually a cabinet file. You can try opening it with Winzip/Winrar as shown below to see the contents.


Deploying WSP to SharePoint


Now we can deploy the WSP file to SharePoint. For this do the following steps.
Open the SharePoint site and use Site Settings > View All Site Content > Site Assets.


Click on the Add document link as highlighted above.



In the appearing dialog box select the WSP file we generated and click the OK button.

After this step we need to activate the solution from Site Settings > Galleries  > Solutions.

Deploying using stsadm

We can deploy the solution using SharePoint's command-line tool. Open the SharePoint 2010 Management Shell console from the Start menu, then execute the following command from the debug folder.

stsadm -o addsolution -filename YourSolution.wsp
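If you prefer PowerShell over stsadm, a hedged equivalent from the same Management Shell is below (adjust the path, and add -WebApplication if your solution is scoped to a web application). Install-SPSolution is the scripted counterpart of the Deploy Solution button used in the next step:

Add-SPSolution -LiteralPath "C:\path\to\YourSolution.wsp"
Install-SPSolution -Identity YourSolution.wsp -GACDeployment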

Once the command completes, open SharePoint Central Administration, go to System Settings > Manage Farm Solutions, select the workflow solution, and click the Deploy Solution button as shown below.


Now go back to the SharePoint site and use Site Actions > Site Settings > Site collection features to activate the workflow.


Now use the Site Actions > Site Settings > Workflow settings page to add the workflow.

The deployment is completed and the Workflow is activated. You can access the workflow from List > Site Workflows > WF2.

Export a Workflow as WSP

Now we can try exporting a WSP file from an existing SharePoint site. The exported file can be used to deploy to another SharePoint server. For exporting follow the steps below.

Open Site Assets from Site Actions > View All Site Content.


Click on an existing Workflow, for example Contact Workflow in the above screen. The browser will prompt with the Save As dialog. Click the Save button to get the WSP file. This file can be used to deploy the solution to another SharePoint server.

Apps in SharePoint 2013

 Why Apps? What's wrong with Solutions?

The world is getting smaller day by day, thanks to technology. Big desktops became bulky laptops. Bulky laptops became notebooks. Notebooks became ultrabooks. Now the trend is moving towards tablets and smartphones, and so are our applications: web applications are becoming Apps. "Apps" are not just a marketing strategy to push SharePoint into wider markets; they are also a complete replacement for the sandbox approach, with many other pros for development, deployment, and usage.

Do you know that sandbox solutions are deprecated in 2013?

Sandbox solutions were introduced in SP 2010 and are now deprecated to encourage the use of Apps. Maybe we should take note of how serious Microsoft is about "Apps" going forward. Of course, the conventional SP solution approach is still there.

SP 2013 Development Options

  1. Full-trust SharePoint Solutions (WSP)
  2. Apps

Main reasons for "Apps" development

  1. Custom code is not executed on the server, which avoids application/server outages.
  2. Custom code runs in the client browser, or in some other scope such as IIS or Windows Azure, completely outside SharePoint's scope.
  3. Server Object Model (SOM) code is replaced by the Client-Side Object Model (CSOM) and REST services, which Apps use to communicate with the server; authentication is handled by OAuth (see the sketch after this list).
  4. Installing/updating/uninstalling apps can be done without affecting the SharePoint site.
  5. Better usability on tablets and mobile devices.
  6. Taking SharePoint to the next level in terms of usability, development, deployment, and hosting (cloud).
  7. Finally, everything in SharePoint 2013 is an App.
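Since CSOM is the big shift here, below is a minimal sketch of what client-side code looks like, shown in PowerShell against the SharePoint 2013 client DLLs. The paths and site URL are assumptions, and PowerShell 3.0 or later is assumed as well, since older versions can be fussy about invoking the generic Load() method:

# Load the SharePoint 2013 client assemblies (15 hive path assumed)
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("http://yoursite")
$web = $ctx.Web
$ctx.Load($web)        # Queue the request; nothing goes over the wire yet
$ctx.ExecuteQuery()    # One round trip to the server
Write-Host $web.Title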
I know, the next question is: "Don't most of these reasons sound just like the reasons for sandbox solutions?" Well, I have a question for you: how many times have we actually chosen a sandbox solution for a real-world implementation?
  1. No full object model . . .
  2. Understanding of Sandbox architecture
  3. Not an easy task to create proxies for execution of full trust code.
Whatever the reason may be, real-world applications are tough to develop as sandbox solutions. That is why "Apps" were introduced in SharePoint 2013, for ease of development and deployment.

Hosting Options in Apps

  1. Provider-hosted
  2. Hosted in the cloud (Windows Azure autohosted)
  3. Hosted in a SharePoint environment
  4. Several combinations of these options.


How apps for SharePoint Work


In the above case, App1 is a provider-hosted or cloud-hosted (auto-hosted) app and App2 is a SharePoint-hosted app. So anything related to App1 is created and maintained in its respective location, either on the provider's servers or on Azure. This makes App1 safe and secure from an execution perspective.
Now let's look at App2.
When you create/import/add a SharePoint-hosted app, it creates a separate sub-web under your SP web application. The app executes in a separate app domain, different from the farm app domain. Because the process runs under its own app domain, an exception in an app will not cause an outage for the SharePoint farm.


We will see the creation of a SharePoint-hosted app, and the issues involved in doing so, in our next post.

Wednesday, May 8, 2013

Developing Sharepoint Windows Forms

Intro:


This tip is for all developers who would like to build a user-friendly interface on top of SharePoint sites and objects.

Developing a Windows Form is a good choice when you need a fast, interactive tool, instead of a basic console application.

I'm going to list the steps in detail to create the Windows Forms application and show how it supports the SharePoint object model.


STEP 1:

First, go to Visual Studio 2010 and create a new project: choose the programming language (for example, C#), then choose Windows Forms Application.


STEP 2:

Rename the project. Then, once it is created, right-click the project to edit its properties.


STEP 3:

In the Application tab, set the target framework to .NET Framework 3.5.


STEP 4:

In the Build tab, change the platform target to Any CPU (on a 64-bit SharePoint server this runs as a 64-bit process, which the SharePoint server object model requires).


STEP 5:

Right-click References and add the SharePoint references: Microsoft.SharePoint.dll, Microsoft.SharePoint.Client.dll, and Microsoft.SharePoint.Client.Runtime.dll.

Select Browse and go to the 14 hive (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI).


STEP 6:

Right-click the form and select View Code to go to the .cs file, then add the using statement at the top of the file:
using Microsoft.SharePoint;



STEP 7:

Design the form (add the controls: labels, textboxes, etc.). I have designed a simple form with two labels, one textbox, and one button.


STEP 8:

If you have a button, just right-click it and choose View Code, then insert the SharePoint code into the button's click handler.
This simple example asks the user to enter a URL; clicking the button displays the title of the site.
private void button1_Click(object sender, EventArgs e)
{
    // Ask the user to enter the site collection URL
    string SiteURL = textBox1.Text;
    using (SPSite SiteCollection = new SPSite(SiteURL))
    {
        label2.Visible = true;
        label2.ForeColor = System.Drawing.Color.Green;
        label2.Text = "Site Collection Title is: " + SiteCollection.RootWeb.Title;
    }
}



STEP 9:

You are done now. Just run the solution and the Windows Form will be displayed.