Wednesday, December 18, 2013

Checking Server to DB connectivity easily

We’ve all been there before: we set up a new web or application server with the latest code, update the connection strings with the proper SQL Server instance and credentials, and fire up the application, only to see all sorts of SQL connection/login errors. These issues can be very frustrating to troubleshoot because you aren’t sure whether the login is incorrect, the database name is misspelled, or network issues are preventing you from reaching the SQL Server. On Windows machines, there is a simple way to validate that the SQL Server is accessible from the server.

  • Remote desktop into the server having connection issues
  • Click on “Start” and search for Folder Options. Open Folder Options, go to the View tab, and make sure “Hide extensions for known file types” is unchecked.

  • Go to the desktop, right click and select New –> Text Document


  • Rename the file to connection.udl and accept the warning about changing the file extension
  • Double-click the .udl file and it will open the Data Link Properties window.
  • In the Connection tab, enter the information for the SQL Server and click the “Test Connection” button:

[screenshot: Data Link Properties – Connection tab]

  • Now that you can connect to the server successfully, close the window, right-click the connection.udl file, choose “Open with”, and select “Notepad” to view a working connection string.
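When you open the .udl file in Notepad, the contents look something like this (the provider line is generated for you; the server, database, login, and password below are made-up values, not anything from a real environment):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Password=MyPassword;Persist Security Info=True;User ID=MyLogin;Initial Catalog=MyDatabase;Data Source=MYSQLSERVER

The server, database, and login values are the pieces you can compare against the application’s connection string to rule out typos.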

It’s a simple and effective way to troubleshoot SQL connection issues between servers and the SQL DB.

Enjoy!

Friday, May 24, 2013

MSDEPLOYAGENTSERVICE 401 Unauthorized – Resolution


We recently migrated a production environment for a client to new servers. I had previously been using MSDeploy to deploy the websites/services to the old servers, so I figured all I had to do was install MSDeploy, update my deploy scripts to point to the new servers, and deploy! I was using MSDeploy 2 on the previous servers, so I figured it would work on the new ones. Unfortunately, it didn’t turn out to be that easy.
When I ran the updated scripts I got the following error:
Fatal: Request to remote agent URL 'http://myserver/MSDEPLOYAGENTSERVICE' failed.
Fatal: The remote server returned an error: (401) Unauthorized. Fatal count: 1
I was using an admin account and I could hit the URL above in a browser, so I knew it wasn’t simply a matter of bad credentials.
Here are the things I tried that DIDN’T work:
  1. Uninstall/Reinstall MSDeploy 2
  2. Install MSDeploy 3
  3. Create the fake user group on the server per these instructions:
  4. Check the file permissions on the Target server
  5. Bang my head against the wall and start crying…
After many hours of searching, I finally came across this brilliant post that solved my issue: http://forums.iis.net/post/1960329.aspx
Here are the scripts from the IIS forum post, for easy access:
dism /online /enable-feature /featurename:IIS-WebServerRole 
dism /online /enable-feature /featurename:IIS-WebServerManagementTools
dism /online /enable-feature /featurename:IIS-ManagementService
Reg Add HKLM\Software\Microsoft\WebManagement\Server /V EnableRemoteManagement /T REG_DWORD /D 1
net start wmsvc
sc config wmsvc start= auto


It turns out that the MSDeploy installers don’t tell you that certain Windows features need to be installed for remote deployments to work properly. The installers just move forward, and then deployments fail with the dreaded 401 Unauthorized error. I uninstalled MSDeploy, ran the commands above from a command prompt, reinstalled, and IT WORKED!

After that, I did run into 1 more issue regarding file permissions on the redirection.config file, but that was easily fixed via this post: http://weblogs.asp.net/jdanforth/archive/2010/04/29/insufficient-permissions-problems-with-msdeploy-and-tfs-build-2010.aspx

I hope this saves someone as much time as I wasted over the past 2 days.

Monday, May 6, 2013

Keep a website alive – PowerShell style

 

Recently, we had a website that didn’t have frequent visitors, but when visitors did come, the site would take a long time to load the first time. This is expected behavior because IIS shuts down idle worker processes after a period of inactivity.

One approach would be to set the application pool’s Idle Time-out to 0, which means the worker process is never shut down (details here: http://technet.microsoft.com/en-us/library/cc771956(v=ws.10).aspx). Instead, I decided to try my hand at PowerShell and came up with the following script:

# List of URLs to ping
$urls = @("Http://URL1.com", "https://URL2.com")

# Ping all URLs in the list
foreach ($objItem in $urls) {
    $req = [System.Net.HttpWebRequest]::Create($objItem)
    $res = $req.GetResponse()
    $stat = $res.StatusCode
    $res.Close()

    # Pump the result out to a text file
    "$objItem : $stat" | Out-File pingresults.txt -Append
}



After that I set up a simple scheduled task to execute this at 15 minute intervals during business hours and walk away. Since I can schedule this when users are likely to hit the site, I have a customized “keep awake” script that provides more flexibility.
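If you’d rather script the scheduled task than click through the Task Scheduler UI, something along these lines works from an elevated command prompt (the task name, script path, and times are just examples):

schtasks /Create /TN "KeepWebsitesAlive" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\KeepAlive.ps1" /SC MINUTE /MO 15 /ST 07:00 /ET 19:00 /RU SYSTEM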


Enjoy!

Wednesday, February 27, 2013

Searching the Visual Studio Toolbox

 

Ever want to add a control to the design surface, but you have so many controls that you have a difficult time finding them? I was shown this neat shortcut that has saved me a lot of time over the past few days.

  1. Click on the Toolbox or use the hotkey Ctrl+Alt+X
  2. Start typing the name of the control. The control will be highlighted
  3. If you have multiple controls that match the text, hit Tab to go to the next one.

Simple and easy!

Monday, February 18, 2013

Querying TFS for Data

At my company we are in the process of migrating to a new TFS environment. As part of that we wanted to do some house cleaning and only migrate the projects that have been used recently.

We’ve got a lot of clients, each with its own project, created over the past three years. We wanted an easy way to see what the latest activity was on each project. That gives us a good view into which projects can be archived and which ones need to be moved.

I could have just gone to Source Control and done a “View History” for each project, but that would have taken a long time (and been very tedious). Instead I wrote a simple console app that digs into each project collection and project and exports some simple project/check-in data to a CSV file. I can then open the CSV in Excel and easily add other info like “Code Status” and “Migration Status”.

Here is the code I used. This is a pretty basic example and TFS provides a lot more, but it did what I needed it to. As I always say, Keep it simple!

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Microsoft.TeamFoundation;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;
using Microsoft.TeamFoundation.VersionControl.Client;

namespace TFSInterrogator
{
    class Program
    {
        static void Main(string[] args)
        {
            //Set up the TFS server URL
            string tfsServer = "https://tfs.projectleadership.net/tfs";
            tfsServer = tfsServer.Trim();
            var tfsUri = new Uri(tfsServer);

            //Get the Configuration Server
            var configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(tfsUri);

            // Get the catalog of team project collections
            Guid[] gVar = new Guid[] { CatalogResourceTypes.ProjectCollection };
            var collectionNodes = configurationServer.CatalogNode.QueryChildren(gVar, false, CatalogQueryOptions.None);

            List<TfspcInfo> collections = new List<TfspcInfo>();

            //Iterate over the list of collections
            foreach (var collectionNode in collectionNodes)
            {
                Guid collectionId = new Guid(collectionNode.Resource.Properties["InstanceID"]);

                var teamProjectCollection = configurationServer.GetTeamProjectCollection(collectionId);
                TfspcInfo info = new TfspcInfo();
                info.Name = teamProjectCollection.Name;

                Console.WriteLine("Checking Project Collection:" + info.Name);

                VersionControlServer versionControl = null;

                //If a project collection is offline, its version control service cannot be retrieved. This catches that scenario
                try
                {
                    versionControl = teamProjectCollection.GetService<VersionControlServer>();
                }
                catch (TeamFoundationServiceUnavailableException)
                {}

                // Get a catalog of team projects for the collection
                Guid[] hVar = new Guid[] { CatalogResourceTypes.TeamProject };
                var projectNodes = collectionNode.QueryChildren(hVar, false, CatalogQueryOptions.None);

                info.Projects = new List<TfsProjectInfo>();

                // List the team projects in the collection
                foreach (var projectNode in projectNodes)
                {
                    TfsProjectInfo pInfo = new TfsProjectInfo();
                    pInfo.Name = projectNode.Resource.DisplayName;

                    Console.WriteLine("Checking last check in for Project :" + pInfo.Name);

                    // get the path to the project.
                    string vcpath = @"$/" + pInfo.Name + "/";

                    //Gets all changesets for the project and returns the most recent one.
                    if (versionControl != null)
                    {
                        List<Changeset> items = versionControl.QueryHistory(
                                vcpath,
                                VersionSpec.Latest,
                                0,
                                RecursionType.Full,
                                String.Empty,
                                null,
                                VersionSpec.Latest,
                                Int32.MaxValue,
                                false,
                                true).Cast<Changeset>().ToList();

                        if (items.Count > 0)
                        {
                            pInfo.LastCheckin = items.FirstOrDefault();
                        }
                    }

                    info.Projects.Add(pInfo);
                }
                collections.Add(info);
            }
            WriteToFile(collections);
        }

        private static void WriteToFile(List<TfspcInfo> collections)
        {
            var sbLine = new StringBuilder();
            sbLine.Append("Project Collection,Project,Last Check-in User, Last Check-in Date,Last Check-in Comment");
            sbLine.AppendLine();
            foreach (var projectCollection in collections)
            {
                Console.WriteLine("Packaging up Project Collection " + projectCollection.Name + " For Export.");
                foreach (var project in projectCollection.Projects)
                {
                    sbLine.Append(projectCollection.Name + ",");
                    sbLine.Append(project.Name + ",");

                    if (project.LastCheckin != null)
                    {
                        sbLine.Append(project.LastCheckin.Committer + ",");
                        sbLine.Append(project.LastCheckin.CreationDate + ",");
                        sbLine.Append(StringToCSVCell(project.LastCheckin.Comment) + ",");
                    }
                    else
                    {
                        sbLine.Append(",,,");
                    }
                    sbLine.AppendLine();
                }
            }
            Console.WriteLine("Exporting to File...");
            File.AppendAllText("c:\\TFSExport.csv", sbLine.ToString());
        }


        public static string StringToCSVCell(string str)
        {
            bool mustQuote = (str.Contains(",") || str.Contains("\"") || str.Contains("\r") || str.Contains("\n"));
            if (mustQuote)
            {
                StringBuilder sb = new StringBuilder();
                sb.Append("\"");
                foreach (char nextChar in str)
                {
                    sb.Append(nextChar);
                    if (nextChar == '"')
                        sb.Append("\"");
                }
                sb.Append("\"");
                return sb.ToString();
            }

            return str;
        }


    }

    public class TfspcInfo
    {
        public string Name { get; set; }
        public List<TfsProjectInfo> Projects { get; set; }
    }

    public class TfsProjectInfo
    {
        public string Name { get; set; }
        public Changeset LastCheckin { get; set; }
    }
}

Wednesday, February 6, 2013

Quickly and Easily Deploy Websites/Web Services with TFS Build via Web Deploy (MSDeploy)

 

When I first started deploying code from TFS I took the simple approach: I created a batch file and deployed the websites via RoboCopy. I’m a very “keep it simple” kind of guy, so this worked for us for a long time and nothing changed.

On my most recent project, however, we were deploying code over a slow VPN tunnel from our servers in Chicago to servers located in Europe. Because of this, the RoboCopy was taking over 4.5 hours to complete. I needed a better/faster way, so I started looking into Web Deploy (MSDeploy). I was pleasantly surprised by how easy it was to get working, and how much time it’s saved me: I can now deploy the code in less than 20 minutes!

I’ve outlined the process in detail below, but in general you only need to do this:

  • Add MSBuild parameters to your automated build
  • Customize the deployment parameters for the website
  • Create a batch file that calls the auto-generated MSDeploy script
  • Execute the batch file from the automated build

 

Note: In order to use MSDeploy, you need to install MSDeploy on the destination server. Details can be found on the Web Deploy/MSDeploy website here:

http://www.iis.net/learn/install/installing-publishing-technologies/installing-and-configuring-web-deploy

Step 1: Create an automated Build in TFS

This is obvious, but worth mentioning. If you don’t have an automated build, you can’t do automated deployment. Here is an MSDN article that walks you through it: http://msdn.microsoft.com/en-us/library/ms181716.aspx

Kick off your build and verify that it builds properly and copies the websites to the drop location. You can do this by making sure the build completes and that the code is moved out to the code drop location. The code should be in the following location:

<Code Drop Location from Build Defaults Tab>\<Build Name>\<BuildName>_<timestamp>\PublishedWebsites

Example:

[screenshot: PublishedWebsites folder in the drop location]

Step 2: Customize the build for MSDeploy

After the build is created, there are a few MSDeploy specific changes you need to do.

1. For the Default Template, create a new template by clicking on “New…”, selecting the Default template, naming it and clicking “OK”. This is the template we’ll be editing later on.


2. On the “Process” tab, you need to enter the following for the MSBuild Arguments:

/p:DeployOnBuild=True /p:Configuration=Release


Step 3: Kick off the build and verify MSBuild Arguments:

Now that the MSDeploy stuff has been added, you want to kick off another build. When it completes you should now start seeing some “_Package” folders in the Code Drop Location:

[screenshot: _Package folders in the drop location]

If you don’t see these, then go back to steps 1 and 2 and make sure you’ve got everything set up properly.

Take a look at the contents. You can see that TFS/MSBuild take care of a lot of the heavy lifting.

[screenshot: contents of a _Package folder]

  • Deploy.cmd: a batch file that can be executed to publish the code to a server that has Web Deploy installed on it. Open it and take a look to get an idea of what it’s doing; it’s some pretty cool stuff.
  • SetParameters.xml: an XML file that lets you set deployment-specific parameters, like the IIS website name. We’ll be using this file later to tell MSDeploy where to put the code.
  • SourceManifest.xml: a config file used by MSDeploy to locate the code files.
  • Zip file: a compressed version of the published website. This is what gets deployed to the server during deployments.

 

Step 4: Save off “SetParameters.xml” files

I like using batch files for deployments, and I like to save them on the build server so they are easy to execute during deployments. When I do this, I add a folder into the code drop location for each build definition and create a new folder called “Batches”. We’ll need to save off the SetParameters.xml files as well, so let’s create another folder in the Batches folder called SetParameters. Grab the SetParameters.xml file from step 3 and copy it into this folder.


Now we need to customize this XML file for our deployment.

Open the SetParameters.xml file in a text editor. You’ll see a “setParameter” XML node where the IIS Web Application name is being set. Update this to match the Website/Web app in IIS on the server where this will be deployed:

In my example, the WebService is actually deployed on the server to the Default Web Site –> NewsService location

IIS on the server:

[screenshot: IIS Manager – Default Web Site/NewsService]

WebService.SetParameters.xml:

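For this example, the relevant part of WebService.SetParameters.xml ends up looking something like this (any other auto-generated parameters are left at their defaults; the value matches the Default Web Site/NewsService application shown above):

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/NewsService" />
</parameters>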

Step 5: Create a batch file to execute the Deployment

Now that we have the Package folder being created and we’ve customized our XML file, we need to set up a batch file to execute MSDeploy and have it published to the server. The auto generated deploy.cmd file comes in extremely handy.

Again, we’ll leverage the “Batches” folder on the build server.

Create a new batch file called “RunDeployment.cmd” in the Batches folder and add the following lines into it:

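Here is a sketch of what RunDeployment.cmd can look like. The server name, account/password, and service name are placeholders, and the paths assume the Batches/SetParameters folder layout described above, so adjust them for your environment:

@echo off
rem Variables - keeps server/site names out of the actual MSDeploy calls
set _xcopyFlag=/Y /R /F
set _location=%~1
set _logfile=filecopy.txt
set _Server=MYWEBSERVER01
set svcname=WebService

rem Copy our customized SetParameters.xml over the auto-generated one in the package folder
xcopy "%~dp0SetParameters\%svcname%.SetParameters.xml" "%_location%%svcname%_Package\" %_xcopyFlag% > "%_logfile%"

rem Call the auto-generated deploy.cmd to push the package to the target server
call "%_location%%svcname%_Package\%svcname%.deploy.cmd" /Y /M:%_Server% /U:MYDOMAIN\deployuser /P:SecretPassword /A:basic >> "%_logfile%"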

  1. Here we set up variables used to execute the MSDeploy. I use variables so that we can easily change things like server names, etc. without changing the actual MSDeploy calls
    • _xcopyFlag: normal xCopy flags used to silently overwrite files via batch files
    • _location: trims extra quotes from the location parameter passed in. (We’ll pass this in from the deployment workflow later on.)
    • _logfile: Sets up a log file. Very useful for troubleshooting/verification of deployments
    • _Server: name of the server where this will be deployed
    • svcname: name of the website/web service that is being deployed
  2. This moves the customized SetParameters.xml file back into the _Package folder to overwrite the auto generated one.
  3. This calls the deploy cmd file to execute the deployment.
    1. In my example, I’ve got a few extra flags for username/password for security.
    2. There are other flags that can be used such as “-skip:objectName” that will ignore folders, but in this case we’ll keep it simple.

Step 6: Modify the Build workflow:

Now we need to make a few simple additions to the build workflow to call the batch file.

Open the Build template XAML file that was created in step 2. Typically these are stored in Source Control at $Project\BuildProcessTemplates:

When the workflow opens, add a new Sequence activity named “Deploy to Servers” after the “Check In Gated Changes” step.


Open the Sequence object and drag an “InvokeProcess” task onto the surface:


Right click and select properties and enter the following information:


  • Arguments: the arguments we want to pass into the batch file
    • Example: """" + BuildDetail.DropLocation + "\_PublishedWebsites\" + """"
  • FileName: the location of the batch file created in step 5
    • Example: <Code Drop Location>\<BuildDefinition>\Batches\RunDeployment.cmd

 

Step 7: Run the build

Run the build and check your server. If the bits have been updated, you’ve been successful. If not, check the log file created by the batch file (filecopy.txt) and troubleshoot.

Important: Before you run the build, please make sure to back up the deployed folder on the server if you are concerned about losing anything. MSDeploy cleans up files on the server that aren’t part of the deployment, so if you have images or other files that are not part of the build, they will be deleted from the server. Also, run this on non-production environments first and hone the process before you use it in production. If you build the batch files with parameters and variables, using MSDeploy in staging/production environments will be easy.

 

Summary:

While the information above may seem complex, most of the work is done by the auto-generated files. Basically, you only need to update your build with the MSBuild parameters, create a batch file, and execute that batch file from the build.

There is a lot of power in MSDeploy that I’m not using yet, but I’ll post again if I come across anything else useful.

Enjoy!

Thursday, January 31, 2013

Restoring a DB when the database is “In Use”

We’ve all run into this issue before. We have a backup of a database that we need to restore into a DEV/QA environment, but when we attempt the restore, we get the “System.Data.SqlClient.SqlError: Exclusive access could not be obtained because the database is in use” error.

This error can be very frustrating to get around, and a person can spend a LONG time trying to kill all of the attached queries/threads. I know I’ve (almost) lost my sanity on more than one occasion trying to do this.

The good news is that I’ve found a simple, easy way to restore a backup when the database is in use.

Steps:

  • Set the database to single-user mode (the single user being you!)
  • Restore the backup
  • Set the database back to multi-user mode

Here is the SQL Script I use.

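A minimal version of that script (the database name and backup path are placeholders, so adjust them for your environment) looks like this:

USE master;
GO

-- Kick everyone else out and roll back their open transactions
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

-- Restore the backup over the existing database
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH REPLACE;
GO

-- Open it back up to everyone
ALTER DATABASE MyDatabase SET MULTI_USER;
GO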

Simple and fast. Enjoy!

Thursday, January 24, 2013

Repository Pattern with Cross Table Objects

 

In a recent post, I mentioned that one of the advantages of using Entity Framework in your application is that you can build a generic repository class very easily. Of course, it’s not realistic to think that all access to the data will be a single table at a time. Most often you need to return data that spans multiple tables.

I’m  going to show you how I created a simple repository class that spans tables.

Creating the Summary/DTO Object

The first thing I like to start with is to create the simple POCO object that will be used to transport the data. This is essential to define first so that you do not get caught up in data structures, but instead define the data as the application is going to need it.

In my database, I have a table called “Avail” that contains a ton of foreign keys to a contact table. I needed to display this data, but instead of a bunch of foreign keys, I needed to display the actual names of the related people.

I ended up defining the object as follows:

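As a rough sketch (the property names here are illustrative, not the actual class), the DTO is just a plain class shaped the way the screen needs the data:

using System;

// Flattened view of an Avail row plus the names resolved from the contact table
public class FullAvail
{
    public int AvailId { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }

    // Resolved names instead of raw foreign keys
    public string ConsultantName { get; set; }
    public string RequestedByName { get; set; }
}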

Creating the Repository Class

After the DTO object is created, it’s just a matter of creating a repository class that manages this object. In my implementation, I have an IRepository interface that all of the repositories use to promote ease of understanding when working with them.


On top of this, I created a separate IFullAvailRepository so that I could customize this down the road if necessary with custom calls. For now, it’s empty (a sketch of both interfaces follows).

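Here is roughly what those interfaces look like (the member list is trimmed down for the example; a real IRepository may expose more operations):

using System.Linq;

// Generic repository contract shared by all of the repositories (trimmed-down sketch)
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T GetById(int id);
}

// Empty for now - gives us a place to hang Avail-specific calls later
public interface IFullAvailRepository : IRepository<FullAvail>
{
}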

I then created the repository class and implemented the interface (the implementation is sketched in the next section).


Implementing the interface

Now the hard (easy?) part. Using the power of Entity Framework and LINQ to Entities, implementing the interface for a custom object is actually quite easy. The most important part is how the objects are retrieved from the data context. By using an IQueryable property, we can abstract out the work of filling the custom object, and the rest of the methods just leverage that property.

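The repository ends up looking roughly like this. The context and entity set names are made up for the example; the important part is that the join/projection lives in a single IQueryable property and nothing hits the database until a caller enumerates it:

using System.Linq;

public class FullAvailRepository : IFullAvailRepository
{
    private readonly MyDataContext _context = new MyDataContext(); // context name is hypothetical

    // All of the cross-table work happens here, as a query definition only
    private IQueryable<FullAvail> FullAvails
    {
        get
        {
            return from a in _context.Avails
                   join c in _context.Contacts on a.ConsultantId equals c.ContactId
                   select new FullAvail
                   {
                       AvailId = a.AvailId,
                       StartDate = a.StartDate,
                       EndDate = a.EndDate,
                       ConsultantName = c.FirstName + " " + c.LastName
                       // additional joins for the other contact foreign keys omitted
                   };
        }
    }

    // The interface members just build on the property above
    public IQueryable<FullAvail> GetAll()
    {
        return FullAvails;
    }

    public FullAvail GetById(int id)
    {
        return FullAvails.FirstOrDefault(a => a.AvailId == id);
    }
}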

Using the repository

Using this implementation, the DB isn’t actually hit until needed, so we can query for an object or a set of objects and only the items requested are brought into memory, instead of loading all of the items and querying them there. For example, this unit test retrieves one specific item out of the database, and it’s super fast.

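A test along these lines (MSTest syntax; the id is just an example key) is what I mean:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FullAvailRepositoryTests
{
    [TestMethod]
    public void GetById_ReturnsASingleFullAvail()
    {
        var repository = new FullAvailRepository();

        // Only the single projected row is pulled from the database
        var avail = repository.GetById(42);   // 42 is just an example key

        Assert.IsNotNull(avail);
    }
}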

 

Summary

I’m new to the repository pattern, but so far I really like the abstraction it provides to the data layer. I’m sure I’ll run into issues in the future, but so far, it’s working great!

Friday, January 11, 2013

Showing System message to user when logging in



As part of my role at my company, I’m responsible for deploying code to servers as well as troubleshooting issues with environments as they come up. As a result of this, I’m typically logged into multiple servers at a time. As anyone who does this sort of thing knows, it can be very easy to forget which environment you are in and accidentally modify the wrong server at the wrong time. If this happens to be a production server, the consequences can be catastrophic.

Here is a nice little trick I use to make sure that people know which server they are accessing.
Remote desktop into the server and open Windows Task Scheduler.
On the right-hand side, in the Actions pane, click "Create Task...".
When the Create task window opens, on the General Tab enter a name:

Click on the Triggers tab and click the "New..." button
On the New Trigger window, select "At Log on" in the Begin the Task dropdown and make sure "Any User" is selected in the Settings box and click "OK". This will show the message when a user logs in to the server.

Add another trigger and select "On connection to user session". This will show the message when a user reconnects to an existing user session.

Click on the Actions tab and click "New..."
In the Action dropdown, select "Display a message". Then enter a title for your message and the message you want to show to the user:
Click "OK" to close the Create Task window.

Done!
Now when a user logs in or reconnects to a session, they'll see the message.

Wednesday, January 9, 2013

Entity Framework -Revisited

I'm starting a new project soon for a new client and I've finally had some time to do some prep work to set up the skeleton of the solution/projects as well as investigate some "new" technologies. One of the things that I've revisited is Entity Framework(EF). I thought a great way to get my blogging going again was to post my thoughts on EF.

<gasp>Turns out it's not as bad as I thought it was...</gasp>

Why I hate(d) Entity Framework:

Until 4 weeks ago, I was not a fan of EF. This hatred of EF was born out of early adoption of the technology (v1.1) and attempting to implement it in an N-tier application. The pain points were many, and there were numerous times when we almost put the project on hold for a week while we ripped out all of the Entity Framework plumbing and re-laid L2SQL. Each time, the pain of doing so seemed to outweigh the benefits, and then two days later we would uncover something ELSE that would make EF seem even worse. This would then force us to revisit the EF vs. L2SQL question again, with the same resolution.

Note: I will allow that we were all moving really quickly in the project and that none of us really understood EF to the level that we should have, but the idea of an ORM in general is to make the lives of developers easier. (If it can't do that, then there is no reason to use the technology, IMO.) However, if we had known how it really worked, we probably would not have used it, so...

Here are 2 of the many reasons why it was hard to use:

 

Reason 1:

If you just wanted to change a row's foreign key to point at a different row, you had to use EF to pull the related item out of the database and then assign it to the navigation property. So if an address table had a foreign key to the state table (StateId), you couldn't just update the value in the StateId column to the proper state; you had to set the Address.State object and then save changes.
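To make that concrete, here is roughly what it looked like (the entity and property names are illustrative, and "context", addressId, and newStateId are assumed to come from the surrounding code): instead of just setting the key value, you had to materialize the related entity first.

// What we wanted to write (not possible in EF 1.x, since the model hid the FK column):
// address.StateId = newStateId;

// What EF 1.x made us do instead: load the State entity just to change the reference
var address = context.Addresses.First(a => a.AddressId == addressId);
address.State = context.States.First(s => s.StateId == newStateId);
context.SaveChanges();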

Reason 2:

There was a large amount of work/tracking that the code (and hence the developer) had to do to track the various states of the objects through the life cycle. Essentially pulling data out of the database was nice and easy, but inserting or updating data was a nightmare. We would end up with duplicate entries in the database, or exceptions thrown by the EF context for primary key constraints as the context would attempt to insert an object instead of updating an existing item.

There are many more reasons, but for the sake of sanity, I've repressed those into the dark recesses of my mind which I hope never again see the light of day.

How much did I really hate it?

It was so difficult that eventually we stopped any new development with EF and created a L2SQL context that we started using from that point with any new code. While it would seem like managing two ORM instances would be a pain, EF was so immature that it seemed less painful to us.


Additionally, on the two subsequent projects I've been on, we've used L2SQL exclusively, even though there haven't been any new features or developments in that technology for a while and it is, for the most part, considered dead. It did what we needed it to do, and kept simple things simple. What else can you ask for?

2nd chance?

I've been to a few conferences and MSFT events over the past couple of years that piqued my interest in the newer versions, but my previous experiences kept me from seriously considering it as a viable ORM for future projects. It still didn't have the feel of an enterprise solution, especially when all I would see mentioned in the talks was the code-first approach. While the code-first approach is "cool", IMO it's not something an enterprise app could or would use, much like RIA Services for Silverlight: potentially useful in small rapid application development scenarios, but not much elsewhere.

However, I decided to take another look and I was pleasantly surprised. Here are the things I'm finding super useful:
DbContext object
The introduction of the DbContext object makes working with objects much easier. It abstracts away most of the dirty work of attaching/detaching and monitoring objects that are going in and out of the database. There is a lot of information out there about this that I won't reproduce here, but it makes working with the database context and the object graph much simpler.
If you need more functionality than what is exposed by DbContext, you can still use the underlying object context for operations that are closer to the metal.
If you are able to accomplish all of your work with DbContext, upgrading to a newer version of EF is supposed to be pain free, as the DbContext object is "guaranteed" to be supported in future versions of EF.
Object Inheritance:
Entity Framework now supports multiple types of entity structures (Table per Type, Table per Hierarchy, and Table per Concrete Type). For us this is going to be immensely helpful. We're working off of an existing database that has tables containing data for multiple types of entities, so by setting up inheritance in the model we can use built-in EF functionality to extract just the types we need.

Example: We have a large "Contact" table that holds Vendors, Companies, People, etc. We simply create a "Person" entity and base it off of the type field in the database. (Great explanation here)
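Once Person is mapped as a derived type of Contact (discriminated by that type field), pulling just the people out is a one-liner (the context and set names here are examples):

// Returns only the rows whose discriminator marks them as a Person
var people = context.Contacts.OfType<Person>().ToList();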

While the idea of inheritance isn’t brand spanking new to EF, it’s something we never took advantage of before, and I have to say I’m super excited about it.

Enforcement of Non-existent relationships
As mentioned, we're working with an existing database. Unfortunately, the people who created the database didn't believe in using relationships to enforce data integrity. While this has the "nice" side effect of making data cleansing easy (you can delete almost any row out of any table), it destroys the massive benefit of navigation properties in the ORM. Entity Framework allows you to add the relationships directly into the model, which lets the code enforce a relationship even though the database does not.
Simply open the design surface, select the relationship tool, and drag the relationship between the entities. The properties of the relationship can be modified to match your specific scenario (multiplicity, name, etc.).

Creation of a generic EF Repository Class
While this might not be something that everyone needs (or even endorses), Entity Framework does allow you to create a simple repository class that can be used for all entities in the model. The use of DbContext.Set<TEntity>() lets you build a repository class of type TEntity and do all manner of operations generically, thus reducing the amount of code needed for a simple repository.
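Here is a trimmed-down sketch of what that generic repository can look like (the real one can add Update/Attach logic, and the context is injected however you prefer):

using System.Data.Entity;
using System.Linq;

// Generic repository built on DbContext.Set<TEntity>()
public class Repository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public Repository(DbContext context)
    {
        _context = context;
    }

    public IQueryable<TEntity> GetAll()
    {
        return _context.Set<TEntity>();
    }

    public TEntity Find(params object[] keyValues)
    {
        return _context.Set<TEntity>().Find(keyValues);
    }

    public void Add(TEntity entity)
    {
        _context.Set<TEntity>().Add(entity);
    }

    public void Remove(TEntity entity)
    {
        _context.Set<TEntity>().Remove(entity);
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}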


Cross your fingers!

I'm hoping that since I've done more investigation into EF this time and understand it a bit better, we'll be more successful using it and reap the benefits of the good work of the folks on the Entity Framework team. I'll post more later as a follow-up to let everyone know how it went!