BIND DNS – Configuring multiple views on primary and secondary DNS servers

If you have a firewall with DMZ and internal zones, and machines behind the firewall query DNS names that resolve back to hosts inside one of those zones, then you have probably had to set up some kind of DNS configuration that lets internal hosts resolve a name to the internal IP address rather than the public one.

You could do that by creating hosts file entries on each machine so it never uses DNS to resolve those names, or you could stand up a separate set of DNS servers for internal machines that only host the internal zones. Either approach complicates things and is hard to maintain.

If you are using BIND then you have an option called views. Views let you serve zones that return different answers depending on the IP address, or range of IP addresses, the query comes from. Getting this configuration right can be a little tricky; in my experience, researching the topic turned up a lot of bad information.

So I am attempting to create an article that hopefully will help the next person who needs to set this up.

So let's take a look at views. The BIND 9 documentation describes them like this: “The view statement is a powerful feature of BIND 9 that lets a name server answer a DNS query differently depending on who is asking. … Each view statement defines a view of the DNS namespace that will be seen by a subset of clients. A client matches a view if its source IP address matches the address_match_list of the view's match-clients clause.”

This looks like it will accomplish exactly what we want. So let me provide an example scenario to help us walk through the configuration.

We will be setting up two DNS servers. One will act as our primary (master) server and the other as a secondary (slave). The master is where all DNS changes to our zones are made; those changes are then propagated to the slave. We want DNS servers and clients on the public internet to be able to query our zones and get back the public IP information, while the servers and machines inside our firewall should get back the internal IP addresses when querying the same zones.

I am attaching the two named.conf files that are referenced below.

Master server named.conf

Secondary named.conf

The following information is fictitious and only used to create this example.

External Network: 74.125.127.0/24

Internal Network: 192.168.1.0/24

Domain: mydomain.com

So let's assume our external zone looks like the following.

$ttl 38400
mydomain.com.   IN      SOA     dns1.mydomain.com. postmaster.mydomain.com. (
                        1271863698
                        10800
                        3600
                        604800
                        38400 )
mydomain.com.   IN      NS      dns1.mydomain.com.
mydomain.com.   IN      NS      dns2.mydomain.com.

dns1.mydomain.com.     IN      A       74.125.127.10
dns2.mydomain.com.     IN      A       74.125.127.11

mail.mydomain.com.      IN      A       74.125.127.5
mydomain.com.   IN      MX      10 mail.mydomain.com.

www.mydomain.com.    IN      A       74.125.127.6

and our internal zone looks like this

$ttl 38400
mydomain.com.   IN      SOA     dns1.mydomain.com. postmaster.mydomain.com. (
                        1271863698
                        10800
                        3600
                        604800
                        38400 )
mydomain.com.   IN      NS      dns1.mydomain.com.
mydomain.com.   IN      NS      dns2.mydomain.com.

dns1.mydomain.com.     IN      A       192.168.1.10
dns2.mydomain.com.     IN      A       192.168.1.11

mail.mydomain.com.      IN      A       192.168.1.5
mydomain.com.   IN      MX      10 mail.mydomain.com.

www.mydomain.com.    IN      A       192.168.1.6

So what we want is this: if a client on the internet queries www.mydomain.com it gets 74.125.127.6, and if anything behind the firewall queries www.mydomain.com it gets the internal IP 192.168.1.6.

MASTER SERVER

So let's take a look at the named.conf on our master server.

We first create some ACL entries, which make the rest of the configuration easier to read by giving a friendly name to a list of IP addresses.

The first ACL, “dns_slaves”, holds the IP addresses we will use on our secondary server. Notice that it has two IP addresses, .11 and .12. This is because later we will configure the secondary server with two IP addresses so it can use a different source address for transfers from the master depending on whether it is pulling the external or the internal zone (see the note after these ACL definitions for one way to add the second address).

acl "dns_slaves" {
        192.168.1.11;
        192.168.1.12;
};

The ACL “internal_slave” is the IP address that will be used by the secondary DNS server to query for internal zone information.

acl "internal_slave" {
        192.168.1.11;
};

The ACL “external_slave” is the IP address that will be used by the secondary DNS server to query for external zone information.

acl "external_slave" {
        192.168.1.12;
};
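This setup assumes the secondary host actually has both 192.168.1.11 and 192.168.1.12 bound locally. How you do that depends on the operating system; on a typical Linux box you could add the second address as an alias on the existing interface, for example (the eth0 device name is just an assumption, and you would still need to make the address persistent in your distribution's network configuration):

ip addr add 192.168.1.12/24 dev eth0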

The ACL “internal_hosts” represents the IP range of our internal network that queries will come from.

acl "internal_hosts" {
        192.168.1.0/24;
};

The options block defines global defaults for all zones and views. Two settings are important here. allow-query { any; } allows any client, by default, to query mydomain.com and any other zones we decide to host. The second is recursion, which we default to no. Recursion lets clients ask the server to resolve domains it is not authoritative for; since we do not want the world using our servers for anything except our own domains, we turn it off by default. Later we will override this setting for clients behind the firewall, because they will also use these servers to resolve other domains.

options {
        directory "/etc";
        pid-file "/var/run/named/named.pid";
        allow-query { any; };
        recursion no;
};

Now here is the configuration for our internal view.

match-clients - Since the internal view should only be used by IP addresses inside our network, match-clients has two entries: !external_slave, which blocks queries coming from the slave server IP used for the external zone, and internal_hosts, which matches any IP address in that ACL's range. Because there is no trailing any entry, everything else is automatically denied.

recursion – Set to yes so that internal clients can do recursive lookups through this server.

allow-transfer - Set to internal_slave so that the secondary can request transfers for this view's zones.

also-notify - Set to the IP address of the secondary server. BIND does not let you use an ACL name here, so it has to be a literal IP. Strictly speaking this entry is not required, since by default BIND notifies the name servers listed in the zone's NS records, but I found it made the configuration very explicit.

zone mydomain.com - Simply defines that it is a master zone.

view "internal" {

        match-clients {
                !external_slave;
                internal_hosts;
        };

        recursion yes;

        allow-transfer {
                internal_slave;
        };

        also-notify {
                192.168.1.11;
        };

        zone "." {
                type hint;
                file "/var/named/root.internal";
        };
        zone "localhost" {
                type master;
                file "/var/named/localhost.internal.hosts";
        };
        zone "1.0.0.127.in-addr.arpa" {
                type master;
                file "/var/named/127.0.0.1.internal.rev";
        };
         zone "mydomain.com" {
                type master;
                file "/var/named/mydomain.com.internal";
        };

};

Now we jump to our external view

match-clients – external_slave matches queries from the IP the slave server uses when pulling the external zone from the master. !internal_hosts excludes all other internal addresses from this view, and the final any statement lets every other IP match.

recursion – Set to no so that external clients cannot use the DNS server for recursive queries.

allow-transfer – Set to the IP of the slave server assigned to the external view so it can make zone transfers.

also-notify – Set to the secondary's IP for this view so it is notified of zone updates.

zone mydomain.com – Points to the file containing the external zone information.

view "external" {

        match-clients {
                external_slave;
                !internal_hosts;
                any;
        };

        recursion no;

        allow-transfer {
                external_slave;
        };

        also-notify {
                192.168.1.12;
        };

        zone "mydomain.com" {
                type master;
                file "/var/named/mydomain.com.external";
        };

};

That finishes up our primary (master) server configuration. In summary, our master has two views, “internal” and “external”, which serve different zone files depending on which clients query it. If a query matches our “external” view it gets the public IP addresses; if it matches our “internal” view it gets the private IP addresses.
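Before reloading the master it is worth validating the configuration and zone files. Assuming the file locations used above (the /etc/named.conf path is an assumption; adjust it to wherever your named.conf actually lives), something like the following will catch most typos:

named-checkconf /etc/named.conf
named-checkzone mydomain.com /var/named/mydomain.com.internal
named-checkzone mydomain.com /var/named/mydomain.com.external
rndc reload

The last command assumes rndc is already configured; restarting named works just as well.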

SECONDARY SERVER

ACL entries

dns_masters is the IP address of the master server.

acl "dns_masters" {
        192.168.1.10;
};

internal_hosts is our internal IP range.

acl "internal_hosts" {
        192.168.1.0/24;
};

The options block has two important entries: allow-query { any; }, which allows any IP to query the zones we host, and recursion no, which disallows recursive lookups by default.

options {
        directory "/etc";
        pid-file "/var/run/named/named.pid";
        allow-query { any; };
        recursion no;
};

Here is our internal view

match-clients – Set to internal_hosts so that only IP addresses inside the network match this view.

recursion – Set to yes to allow internal clients recursive lookup capabilities.

allow-notify – Set to dns_masters so that notify messages are accepted from the master server.

zone mydomain.com – This is the important part. Notice that transfer-source is set to 192.168.1.11, which forces the zone transfer request to originate from that IP. When the master sees the request coming from 192.168.1.11 it matches the internal view and transfers the internal version of mydomain.com.

view "internal" {

        match-clients {
                internal_hosts;
        };

        recursion yes;

        allow-notify {
                dns_masters;
        };

        zone "." {
                type hint;
                file "/var/named/root.internal";
        };

        zone "localhost" {
                type master;
                file "/var/named/localhost.internal.hosts";
        };

        zone "1.0.0.127.in-addr.arpa" {
                type master;
                file "/var/named/127.0.0.1.internal.rev";
        };

        zone "mydomain.com" {
                type slave;
                masters { 192.168.1.10; };
                transfer-source 192.168.1.11;
                file "/var/named/slaves/mydomain.com.internal";
        };
};

External View

match-clients - !internal_hosts prevents internal IPs from matching the external view, and the any statement matches everything else.

recursion – Set to no so that external clients are not able to perform recursive lookups.

allow-notify – Set to dns_masters to allow notifications from the master server.

zone mydomain.com - Again the important part. Notice that transfer-source is set to 192.168.1.12, which forces the zone transfer request to originate from that IP. When the master sees the request coming from 192.168.1.12 it matches the external view and transfers the external version of mydomain.com.

view "external" {

        match-clients {
                !internal_hosts;
                any;
        };

        recursion no;

        allow-notify {
                dns_masters;
        };

        zone "mydomain.com" {
                type slave;
                masters { 192.168.1.10; };
                transfer-source 192.168.1.12;
                file "/var/named/slaves/mydomain.com.external";
        };

};
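Once both servers are loaded you can verify the split answers, and the view-specific transfers, with dig. The addresses below assume the fictitious networks used in this example, and the -b flag tells dig which source address to send the query from:

# From a host on 192.168.1.0/24 - should return 192.168.1.6
dig @192.168.1.10 www.mydomain.com A +short

# From a host on the internet - should return 74.125.127.6
dig @74.125.127.10 www.mydomain.com A +short

# On the secondary, confirm each transfer-source pulls the matching copy of the zone
dig @192.168.1.10 mydomain.com AXFR -b 192.168.1.11
dig @192.168.1.10 mydomain.com AXFR -b 192.168.1.12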

Reed Copsey, Jr. presented "From Windows Forms to WPF with MVVM" at the last Seattle .NET User Group meeting

I am mostly an ASP.NET developer, but after watching Reed show the group the basics of WPF, and how much easier it seemed to me compared to WinForms, I am convinced I will be doing WPF going forward.

Reed put together some great slides and a basic application to demonstrate the power of WPF. He started with a basic WinForms app built with event-driven development, converted it to a WPF app with the same event-driven style, then gradually moved it over to the more advanced features of WPF, ending with the MVVM design pattern.

Reed was kind enough to publish his slides and samples: http://reedcopsey.com/talks/from-windows-forms-to-wpf-with-mvvm/

Here is an excerpt from the page

This talk illustrates how Windows Presentation Foundation can dramatically improve the experience of developers, not just designers. Two versions of a simple application will be demonstrated, one developed using Windows Forms, and one using the same approach in WPF. By showing the same application in both technologies, we will show how a short learning curve can be used to migrate development to WPF.

I’ll then discuss three new features of WPF: Data Binding, Templating, and Commanding. I’ll show how they enable a new application architecture, the Model-View-ViewModel pattern, and illustrate how rethinking our approach to design in terms of these three features allows for huge gains in flexibility, testability, and maintainability of our applications.

Finally, we’ll look at a third version of our application, a rewrite of our previous application using the Model-View-ViewModel pattern.

This talk is approximately 1 hour and 15 minutes in length.

 

 

Seattle .NET User Group - March 2010 Meeting: An Introduction to Visual Studio 2010 Extensibility

For more information about the user group visit www.seattledotnet.org

When

From: March 10, 2010 05:45 PM

To: March 10, 2010 08:00 PM


Where
Street: 2401 Utah Ave. S.
City: Seattle
State: Washington
Country: USA

What

Speaker: Nathan Halstead

 
Nathan Halstead is the Program Manager responsible for the Visual Studio 2010 SDK.  Over the past two years, Nathan has worked to improve the packaging, licensing, diagnostic, and extensibility technologies at the core of the Visual Studio architecture.  Prior to his work with the Visual Studio team, Nathan worked on the data modeling features in the Microsoft Office PerformancePoint Server product suite, and as a research assistant on machine learning technologies at Carnegie Mellon University’s Institute for Software Research.  Nathan holds a Bachelor’s degree in Computer Science from Carnegie Mellon University.  In his spare time, Nathan has been spotted playing amateur hockey at local ice rinks or falling gracefully down mountains covered in snow in an attempt to ski.

 

Abstract: An Introduction to Visual Studio 2010 Extensibility

Have you ever wanted to customize and enhance the out-of-box Visual Studio 2010 development environment? Do you have an idea for a new integrated tool, but don't know how to get started? This talk will cover the basics of Visual Studio extensibility and show you how to take advantage of the numerous extension points inside Visual Studio. Additionally, we'll be discussing the new extensibility enhancements in Visual Studio 2010 and how you can leverage those in your application. This will be a fun session with plenty of demos, and will have something for both the Visual Studio novice and the seasoned veteran.

Want a head start?  Visit the developer center: http://www.msdn.com/vsx.

Visual Studio - CloudService app build fails: Unable to remove directory

While working on a CloudService application for Windows Azure, I encountered an error saying: Unable to remove directory "C:\HelloAzure\HelloAzure_WebRole\bin\_PublishedWebsites". The directory is not empty.

My version of Visual Studio is 2008 SP1, although I assume the issue may occur with other versions as well. The error occurs almost every time I build, rebuild, or publish.

I've attached a sample of the error below; note that the WebRole\bin\_PublishedWebsites directory was the only location affected, and this was consistent across all projects receiving the error.

This appears to be the result of Microsoft.CloudService.targets cleaning up the _PublishedWebsites folder after the WebRole is compiled and added to the .cspkg file.

 

In the end the issue was related to Microsoft Forefront endpoint detection (antivirus); after adding devenv.exe to the excluded processes list, the error was immediately resolved.

Cheers,

John

 

Model View Presenter (MVP) - Passive View

In my journey to write better software I have been looking at the various patterns available. One of them is known as the Model View Presenter, or MVP, pattern. In doing some reading, many of the sites I came across point to Martin Fowler's website; he is well known for his contributions to design patterns. He splits the MVP pattern into two separate variants: Passive View and Supervising Controller.

In this blog entry I am going to focus on the Passive View, as it provides complete separation between the view and the model and allows you to completely mock out your view for testing. One additional note: I found that many people swap the term Presenter for Controller, which is a bit confusing since another pattern exists called Model View Controller (MVC). So I will be sticking with the term Presenter.

[Image: MVP pattern diagram]

Disclaimer: I am just starting to understand this pattern, so I may not explain it or design it in a way that stays 100% true to the pattern. I am looking forward to feedback or comments that would correct anything I have misunderstood or provide a clearer understanding of the pattern.

Views - Views are the entry point into the process. In an ASP.NET application this would be the .aspx page; in a WinForms application it would be the form itself. Each view will have an interface that represents the data that needs to be inserted, retrieved, or data-bound to the view.

Presenter - Each view will have a presenter assigned to it that is responsible for handling all interaction with the view. The goal is to move the logic out of the view itself and put that responsibility in the presenter.

Model - The model is a representation of the data that should be displayed on the view. In some cases the model is an object returned from a data source such as a database. In many cases additional layers exist between the data source and the model, and business-specific entities may have been created to represent the model while abstracting away the source of the information.

So how does this relate to a standard project we would create? To demonstrate the benefit of the pattern, I am going to create a small WinForms application the way I would normally have coded it and then refactor the project to use the MVP pattern.

Project Summary: Create a Windows application with one form that, given a customerId, will look up the customer in our data source and display the first and last name, City, and State. Once a customer has been retrieved, the user should be able to edit the City or State and save the changes. Validation should also be performed to ensure that the customerId is an integer value and that City and State are not empty.

Project Details: A WinForms application using the .NET 3.5 Framework. Data will be stored in and retrieved from a SQL Express database using LINQ to SQL, and the presenter will work directly with the LINQ to SQL entities that framework generates.

Project Layout: I created a folder called Views which will contain a sub folder for each view I am creating. I personally do this because each view has an associated interface and presenter class, and this provides a simple way to keep them together. I will also create a Models folder which will hold my LINQ to SQL code and anything I do to extend the models.

[Image: project folder layout]

CustomerView Form – Here is the form that will display the customer and allow editing of City and State.

CustomerView.cs - Here is the initial code, with all interaction between the model and the view sitting directly inside the form. The problem with this way of developing is that the only way to test the application is to bring up an instance of it and manually test, or use some kind of UI automation software that records and plays back interactions.

public partial class CustomerView : Form
{
    public CustomerView()
    {
        InitializeComponent();
    }

    private void buttonSearch_Click(object sender, EventArgs e)
    {
        string customerIdString = this.textBoxCustomerId.Text.Trim();
        int customerId;
        Customer customer = null;

        if (customerIdString == "")
        {
            MessageBox.Show("CustomerId cannot be empty");
            return;
        }

        try
        {
            customerId = Int32.Parse(customerIdString);
        }
        catch
        {
            MessageBox.Show("CustomerId must be an integer value");
            return;
        }

        try
        {
            customer = Customer.GetCustomerById(customerId);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
            return;
        }

        this.textBoxCustomerIdReadOnly.Text = customer.CustomerId.ToString();
        this.textBoxFirstName.Text = customer.FirstName;
        this.textBoxLastName.Text = customer.LastName;
        this.textBoxCity.Text = customer.City;
        this.textBoxState.Text = customer.State;

    }

    private void buttonSave_Click(object sender, EventArgs e)
    {
        string city = this.textBoxCity.Text.Trim();
        string state = this.textBoxState.Text.Trim();

        if (this.textBoxCustomerIdReadOnly.Text == "")
        {
            MessageBox.Show("No customer has been loaded");
            return;
        }

        if (city == "")
        {
            MessageBox.Show("City cannot be empty");
            return;
        }

        if (state == "")
        {
            MessageBox.Show("State cannot be empty");
            return;
        }

        Customer customer = new Customer();
        customer.CustomerId = Convert.ToInt32(this.textBoxCustomerIdReadOnly.Text);
        customer.FirstName = this.textBoxFirstName.Text;
        customer.LastName = this.textBoxLastName.Text;
        customer.City = this.textBoxCity.Text;
        customer.State = this.textBoxState.Text;

        try
        {
            Customer.SaveCustomer(customer);

            MessageBox.Show("Customer Saved");
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
}
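One thing the code above glosses over: the static helpers Customer.GetCustomerById and Customer.SaveCustomer are never shown in this post. They would live in the Models folder next to the LINQ to SQL code; below is a minimal sketch of what they might look like. The CustomerDataContext name and the Customers table are assumptions based on the project description, not code from the actual sample.

using System.Linq;

// Hypothetical partial class extending the LINQ to SQL generated Customer entity.
public partial class Customer
{
    public static Customer GetCustomerById(int customerId)
    {
        using (var db = new CustomerDataContext())
        {
            // Single() throws if the id is not found, which surfaces as the
            // MessageBox error shown by the form above.
            return db.Customers.Single(c => c.CustomerId == customerId);
        }
    }

    public static void SaveCustomer(Customer customer)
    {
        using (var db = new CustomerDataContext())
        {
            // Load the existing row and copy over the editable fields before submitting.
            var existing = db.Customers.Single(c => c.CustomerId == customer.CustomerId);
            existing.FirstName = customer.FirstName;
            existing.LastName = customer.LastName;
            existing.City = customer.City;
            existing.State = customer.State;
            db.SubmitChanges();
        }
    }
}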

Refactoring to use MVP: Now I will refactor the above code to use the MVP Passive View pattern. This introduces an interface that represents the view and places a presenter between the model and the view to handle all interaction between the layers.

ICustomerView – I created an interface, ICustomerView, that represents the data on the form, as shown below. This interface will get injected into the presenter's constructor so the presenter can write and read values on the view.

public interface ICustomerView
{
    string CustomerIdInput { get; }
    string CustomerIdReadOnly { set; get; }
    string FirstName { get; set; }
    string LastName { get; set; }
    string City { get; set; }
    string State { get; set; }
    void ShowMessage(string message);
}

CustomerViewPresenter – The CustomerView form will contain an instance of this class, which takes control of setting and getting values on the form. It will also handle retrieving and saving information through the model. Notice that the constructor takes an instance of ICustomerView.

public class CustomerViewPresenter
{
    private ICustomerView customerView;

    public CustomerViewPresenter(ICustomerView customerView)
    {
        this.customerView = customerView;
    }

    public void LoadCustomer()
    {
    }

    public void SaveCustomer()
    {
    }

}

Refactored Project View – This shows the additional files created: ICustomerView and CustomerViewPresenter.

[Image: refactored project folder layout]

CustomerView Refactored - Now that the interface and presenter are created, let's move the code from our CustomerView form into the presenter. Notice that the form now declares a private CustomerViewPresenter field, and in the view's constructor we instantiate the presenter and pass in this as the ICustomerView. Also note that the search and save button click events no longer contain any code to load or save the customer; they simply call the presenter methods that take care of the logic.

public partial class CustomerView : Form, ICustomerView
{
    CustomerViewPresenter customerViewPresenter;

    public CustomerView()
    {
        InitializeComponent();
        customerViewPresenter = new CustomerViewPresenter(this);
    }

    private void buttonSearch_Click(object sender, EventArgs e)
    {
        customerViewPresenter.LoadCustomer();

    }

    private void buttonSave_Click(object sender, EventArgs e)
    {
        customerViewPresenter.SaveCustomer();
    }

    #region ICustomerView Members

    public string CustomerIdInput
    {
        get { return this.textBoxCustomerId.Text.Trim(); }
    }

    public string CustomerIdReadOnly
    {
        get { return this.textBoxCustomerIdReadOnly.Text; }
        set { this.textBoxCustomerIdReadOnly.Text = value; }
    }

    public string FirstName
    {
        get { return this.textBoxFirstName.Text.Trim(); }
        set { this.textBoxFirstName.Text = value; }
    }

    public string LastName
    {
        get { return this.textBoxLastName.Text.Trim(); }
        set { this.textBoxLastName.Text = value; }
    }

    public string City
    {
        get { return this.textBoxCity.Text.Trim(); }
        set { this.textBoxCity.Text = value; }
    }

    public string State
    {
        get { return this.textBoxState.Text.Trim(); }
        set { this.textBoxState.Text = value; }
    }

    public void ShowMessage(string message)
    {
        MessageBox.Show(message);
    }

    #endregion
}

CustomerViewPresenter - Below is what the presenter contains now. Since an interface representing the view is passed into the presenter, it can interact with the view directly. It is now up to the presenter to talk to the model: get a customer and populate the view, or read values from the view and save them.

public class CustomerViewPresenter
{
    private ICustomerView customerView;

    public CustomerViewPresenter(ICustomerView customerView)
    {
        this.customerView = customerView;
    }

    public void LoadCustomer()
    {
        Customer customer = null;
        int customerId;

        if (customerView.CustomerIdInput == "")
        {
            customerView.ShowMessage("CustomerId cannot be empty");
            return;
        }

        try
        {
            customerId = Int32.Parse(customerView.CustomerIdInput);
        }
        catch
        {
            customerView.ShowMessage("CustomerId must be an integer value");
            return;
        }

        try
        {
            customer = Customer.GetCustomerById(customerId);
        }
        catch (Exception ex)
        {
            customerView.ShowMessage(ex.Message);
            return;
        }

        customerView.CustomerIdReadOnly = customer.CustomerId.ToString();
        customerView.FirstName = customer.FirstName;
        customerView.LastName = customer.LastName;
        customerView.City = customer.City;
        customerView.State = customer.State;
    }

    public void SaveCustomer()
    {

        if (customerView.CustomerIdReadOnly == "")
        {
            customerView.ShowMessage("No customer has been loaded");
            return;
        }

        if (customerView.City == "")
        {
            customerView.ShowMessage("City cannot be empty");
            return;
        }

        if (customerView.State == "")
        {
            customerView.ShowMessage("State cannot be empty");
            return;
        }

        Customer customer = new Customer();
        customer.CustomerId = Convert.ToInt32(customerView.CustomerIdReadOnly);
        customer.FirstName = customerView.FirstName;
        customer.LastName = customerView.LastName;
        customer.City = customerView.City;
        customer.State = customerView.State;

        try
        {
            Customer.SaveCustomer(customer);

            customerView.ShowMessage("Customer Saved");
        }
        catch (Exception ex)
        {
            customerView.ShowMessage(ex.Message);
        }
    }

}

So about now you might be asking, “Why do I want to move all my logic to another class? It seems like all I did was make things more complicated and write more code!” The real value of doing it this way comes from the ability to write unit tests against our code.

Since the CustomerView now has an interface, ICustomerView, that represents the view, we can substitute a mocked version for the real form and perform all the validation we would normally have done manually.

The other benefit is that, because the logic is separated from the view, you could use this same code in an ASP.NET web application by creating .aspx pages that implement the same interface and reusing all of your presenter code. A rough sketch of what that could look like follows.
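The sketch below is only meant to illustrate that reuse and is entirely hypothetical: the page class, the server control names (textBoxCustomerId, labelMessage, and so on), and the button handlers are assumptions, while ICustomerView and CustomerViewPresenter are the ones defined above, reused unchanged.

using System;

// Hypothetical Web Forms code-behind implementing the same view interface.
public partial class CustomerPage : System.Web.UI.Page, ICustomerView
{
    private CustomerViewPresenter customerViewPresenter;

    protected void Page_Load(object sender, EventArgs e)
    {
        // The same presenter drives this page that drives the WinForms view.
        customerViewPresenter = new CustomerViewPresenter(this);
    }

    protected void buttonSearch_Click(object sender, EventArgs e)
    {
        customerViewPresenter.LoadCustomer();
    }

    protected void buttonSave_Click(object sender, EventArgs e)
    {
        customerViewPresenter.SaveCustomer();
    }

    // ICustomerView members map straight onto assumed TextBox controls on the page.
    public string CustomerIdInput
    {
        get { return textBoxCustomerId.Text.Trim(); }
    }

    public string CustomerIdReadOnly
    {
        get { return textBoxCustomerIdReadOnly.Text; }
        set { textBoxCustomerIdReadOnly.Text = value; }
    }

    public string FirstName
    {
        get { return textBoxFirstName.Text.Trim(); }
        set { textBoxFirstName.Text = value; }
    }

    public string LastName
    {
        get { return textBoxLastName.Text.Trim(); }
        set { textBoxLastName.Text = value; }
    }

    public string City
    {
        get { return textBoxCity.Text.Trim(); }
        set { textBoxCity.Text = value; }
    }

    public string State
    {
        get { return textBoxState.Text.Trim(); }
        set { textBoxState.Text = value; }
    }

    public void ShowMessage(string message)
    {
        // A web page would surface messages in a Label rather than a MessageBox.
        labelMessage.Text = message;
    }
}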

Testing – Below is the layout of my test project. It is when you get to this part that you find the real advantage of the work above. Note the two files: CustomerViewMock.cs, which is a mock of our real view, and CustomerViewTests.cs, which contains all the tests we want to perform.

[Image: test project layout]

CustomerViewMock – This class in my test project represents my view. Note that the class implements my ICustomerView interface. You will also notice that I created public properties representing each of the controls that would have been on my real form. I do this to keep my tests feeling like I am interacting with the real form.

public class CustomerViewMock : ICustomerView
{
    public string textBoxCustomerId { private get; set; }
    public string textBoxCustomerIdReadOnly { get; private set; }
    public string textBoxFirstName { get; private set; }
    public string textBoxLastName { get; private set; }
    public string textBoxCity { get; set; }
    public string textBoxState { get; set; }
    public string messageBox { get; private set; }

    private CustomerViewPresenter customerViewPresenter;

    public CustomerViewMock()
    {
        customerViewPresenter = new CustomerViewPresenter(this);
    }

    public void ButtonLoad()
    {
        customerViewPresenter.LoadCustomer();
    }

    public void ButtonSave()
    {
        customerViewPresenter.SaveCustomer();
    }

    #region ICustomerView Members

    public string CustomerIdInput
    {
        // Trim here to mirror what the real form's getter does
        get { return textBoxCustomerId.Trim(); }
    }

    public string CustomerIdReadOnly
    {
        get { return textBoxCustomerIdReadOnly; }
        set { textBoxCustomerIdReadOnly = value; }
    }

    public string FirstName
    {
        get { return textBoxFirstName; }
        set { textBoxFirstName = value; }
    }

    public string LastName
    {
        get { return textBoxLastName; }
        set { textBoxLastName = value; }
    }

    public string City
    {
        get { return textBoxCity; }
        set { textBoxCity = value; }
    }

    public string State
    {
        get { return textBoxState; }
        set { textBoxState = value; }
    }

    public void ShowMessage(string message)
    {
        messageBox = message;
    }

    #endregion
}

CustomerViewTest - And here is my test class with the various tests that I would have normally had to do manually. Now I can go in and make changes to the logic of my presenter and quickly run my existing tests and be sure I am not going to break any existing logic.

[TestClass]
public class CustomerViewTest
{
    public CustomerViewTest()
    {
    }

   
    [TestMethod]
    public void CustomerView_LoadValidCustomerId()
    {
        CustomerViewMock customerView = new CustomerViewMock();
        customerView.textBoxCustomerId = "1";
        customerView.ButtonLoad();

        Assert.AreEqual<string>("1", customerView.textBoxCustomerIdReadOnly);

    }

    [TestMethod]
    public void CustomerView_SaveCustomerCity()
    {
        CustomerViewMock customerView = new CustomerViewMock();
        customerView.textBoxCustomerId = "1";
        customerView.ButtonLoad();

        customerView.City = "Bellevue";
        customerView.ButtonSave();

        Assert.AreEqual<string>("Customer Saved", customerView.messageBox);

    }

    [TestMethod]
    public void CustomerView_CustomerIdTextBoxEmpty()
    {
        CustomerViewMock customerView = new CustomerViewMock();
        customerView.textBoxCustomerId = "";
        customerView.ButtonLoad();

        Assert.AreEqual<string>("CustomerId cannot be empty", customerView.messageBox);

    }

    [TestMethod]
    public void CustomerView_CustomerIdTextBoxWithSpace()
    {
        CustomerViewMock customerView = new CustomerViewMock();
        customerView.textBoxCustomerId = " ";
        customerView.ButtonLoad();

        Assert.AreEqual<string>("CustomerId cannot be empty", customerView.messageBox);

    }

    [TestMethod]
    public void CustomerView_CustomerIdTextBoxNotInteger()
    {
        CustomerViewMock customerView = new CustomerViewMock();
        customerView.textBoxCustomerId = "ABC";
        customerView.ButtonLoad();

        Assert.AreEqual<string>("CustomerId must be an integer value", customerView.messageBox);

    }
}

That is all I have for now. Please provide feedback if you see anything wrong or have any suggestions that would further improve my approach. I am thinking about adding to this post an example of taking the interface and presenter and moving them into their own project, then implementing the same interface in ASP.NET just to show the ability to reuse the code. For now, I hope this helps someone understand the MVP Passive View pattern.


Seattle .NET User Group - January 2010 Meeting: SharePoint development in Visual Studio 2010

Upcoming event for Seattle .NET User Group. For more details about location and group visit http://seattledotnet.org/

When

From: January 13, 2010 05:45 PM
To: January 13, 2010 08:00 PM

 

Location

Starbucks
Street: 2401 Utah Ave S
City: Seattle
State: Washington
Country: USA

What

Speaker: Boris Scholl
Boris is a Program Manager on the Visual Studio for BizApps team. Besides taking care of the Visual Studio community, he focuses on LOB integration with SharePoint and is working on the next generation of SharePoint tooling. Prior to joining Visual Studio he worked as a Technical Product Manager for Office Server, writing white papers for architectural guidance and LOB integration.

Boris started his Microsoft career working as an Application Development Consultant for portals back in 1999. He was then called into the World Wide IW Centre of Excellence, working on large cross-border SharePoint and Project Server implementations, doing architectural design, reviews, and LOB integration.

Abstract: 
The talk focuses on the SharePoint tools available in Visual Studio 2010. We will take you on a tour of the new tooling, take a closer look at how to develop SharePoint applications that integrate external data using the new Business Data Connectivity designer, and show how to take advantage of the Visual Studio extensibility features.

ASP.NET MVC Tutorials

Over the last week I have been playing around with the latest ASP.NET MVC 2 RC framework available here http://go.microsoft.com/fwlink/?LinkID=157071 

So far I feel like I have gone backwards from the rich UI development environment of ASP.NET Web Forms. However, as I write more and more code and look over more examples, I am finding the main benefits are total control over what is output to the browser and much better testability. Testing has been something I have avoided with Web Forms because of the complexity of having to simulate the browser.

Today I was about to write up some blog posts about my experience so far and some of the things that helped me get started. However, I found a great resource today that explains exactly what I was going to write about, and then some. Whoever wrote those articles thinks exactly like I do and did a fantastic job of explaining how to use MVC.

Take a look at this site http://www.asp.net/learn/mvc/

I plan on writing several applications using MVC to get myself more familiar with it, and will then post some articles showing the same application written in Web Forms and MVC so that I can compare the advantages of each.

Here is the list of applications I plan to build. Note that each of them is a pretty simple app.

Knowledgebase Application - Will allow creating and searching knowledge base information, similar to the Microsoft Knowledge Base available at support.microsoft.com.

Download Center - Will allow simple management for publishing documents and software that need to be made available for download. Will be similar to downloads.microsoft.com.

 

The HP MediaSmart Server is my favorite device of 2009

Automatically back up all your PCs and Macs. If backing up your machines is important to you but you have just not taken the step of implementing a backup process, then I would highly recommend the HP MediaSmart Server. While it has many features, the most important and coolest one in my opinion is backup.

[Images: HP MediaSmart Server drive bays and back view]

So what is so cool about it?

No matter how many computers you are backing up, the HP MediaSmart Server only keeps a single copy of a given file. That means if you have three machines on your network all running Windows 7, the space required to back up the OS is only about the size of one of them. Below is the description from a Microsoft document explaining how backup works.

The home computer backup solution in Windows Home Server has a single-instance store at the cluster level. Clusters are typically collections of data stored on the hard drive, 4 kilobytes (KB) in size. Every backup is a full backup, but the home server only stores each unique cluster once. This creates the restore-time convenience of full backups (you do not have to repeat history) with the backup time performance of incremental backups.

The home computer backup occurs as follows:

  • When a home computer is backed up to the home server, Windows Home Server software figures out what clusters have changed since the last backup.
  • The home computer software then calculates a hash for each of these clusters and sends the hashes to the home server. A hash is a number that uniquely identifies a cluster based on its contents.
  • The home server looks into its database of clusters to see if they are already stored on the home server.
  • If they are not stored on the home server already, then the home server asks the home computer to send them.
  • All file system information is preserved such that a hard disk volume (from any home computer) at any backup point (time) can be reconstituted from the database.
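To make the cluster-level single-instance idea concrete, here is a small illustrative sketch of my own; it is not how Windows Home Server is actually implemented, it only shows how hashing fixed-size clusters lets a store keep one copy of identical data no matter which computer it came from.

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

class ClusterStoreSketch
{
    const int ClusterSize = 4096; // 4 KB clusters, as described above

    // Cluster hash -> cluster contents; one entry per unique cluster ever seen.
    static readonly Dictionary<string, byte[]> store = new Dictionary<string, byte[]>();

    // Returns how many clusters actually had to be sent to the "server".
    static int Backup(byte[] volume)
    {
        int sent = 0;
        using (var sha = SHA256.Create())
        {
            for (int offset = 0; offset < volume.Length; offset += ClusterSize)
            {
                int length = Math.Min(ClusterSize, volume.Length - offset);
                var cluster = new byte[length];
                Array.Copy(volume, offset, cluster, 0, length);

                // The hash identifies the cluster purely by its contents...
                string hash = Convert.ToBase64String(sha.ComputeHash(cluster));

                // ...so a cluster already stored for any machine is never sent again.
                if (!store.ContainsKey(hash))
                {
                    store[hash] = cluster;
                    sent++;
                }
            }
        }
        return sent;
    }
}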

So imagine this scenario: you bring home your new HP MediaSmart Server and set it up by plugging in power and an Ethernet cable. You install the client application on one of your machines. You run your first backup, which for me took 17 minutes and 21 seconds to back up a base install of Windows 7 Ultimate Edition with all updates applied.


By default it ignores things like user temporary files, the system page file (mine was 2.29 GB), the recycle bin, the hibernation file (mine was 2 GB), and shadow volume implementation folders (mine was 19.54 GB). That saved a ton of space, and I would never really want to back those up anyway.

OK, so you are probably thinking, is that all? No, it only gets better. Now you go to your second machine, the one your kids use, and install the software because you want it backed up too. The backup software reads all of the data, but instead of having to back up everything it only needs to transmit the contents of those clusters the home server does not already have. So if this machine has a lot of the same files or the same OS, you won't be backing up a second copy of the same data. Think about it: how many machines do you have with all the same music, photos, software, and so on? Only one copy of any of that has to be stored.

OK, getting cooler? Now you have all the machines on your network set up and backing up. Keep in mind they are also backing up to a network device dedicated to your backups.

OK, hopefully you have made it this far, because I saved the best part for last: BARE METAL RESTORE! For those not familiar with the term, it means a machine could have a complete hard drive failure; I pull the drive out, install a new blank one, grab the restore CD provided by HP, boot the machine from the CD, and it searches the network, finds my HP MediaSmart Server, and gives me a list of backups to restore from. Come back in a bit and the machine is back up and running as if nothing happened.

Now, I don't know about you, but that is worth a lot of money. In fact, I got to try this process out two weeks after I purchased it. One of the things I did when buying the unit was to purchase a new, larger hard drive for one of my machines. Two weeks later the new drive stopped working and I had to take it back to the store for a replacement. I brought the replacement home, restored from the previous day's backup, and was back up and running in about two hours, most of which I spent walking away and coming back when it was finished.

Other Features: So far I have only talked about backup. While that is, in my opinion, the coolest feature, it is not the only one. The HP MediaSmart Server also has the following features.

Media Storage and Streaming: You can set up the HP MediaSmart Server to search your computers on an interval, look for pictures, music, and videos, and copy them over to the server. Music is then available to be streamed to Media Extender devices, iPods, Xbox, and Media Center machines, or accessed via the web interface.

Media Streamer: A page for streaming your music, videos, and photos.


Online Photo Album: Publish pictures that you can then provide access to.


Small Size: The unit is only 5.5” x 9.8” x 9.2” in size.

Remote Access: If you can make the device internet accessible, you can also provide remote access to any of your machines using Microsoft Remote Desktop services, which lets you reach your machines from anywhere. Once connected over the internet you can also access your files, music, videos, and pictures. Music and videos are streamed as you play them, so you can start listening instantly rather than downloading first, which also gives a reasonable experience when you don't have a lot of bandwidth.

HP MediaSmart Server Home: Default page showing what is available via a web browser.


Computer Access: Shows the list of machines in the network that are available to be connected to remotely.


User Accounts
Each person in your home or small office can have a user account they can use to store files in a central location. You can also share folders on the HP MediaSmart Server and give access only to those who need them.

Storage
You can have up to four drives in the HP MediaSmart Server, and they can be hot-swapped. My unit came with a 750 GB drive, and I purchased a 1 TB drive to add capacity and provide duplication. The HP MediaSmart Server does not use a traditional RAID configuration; instead, it lets you define which folders should have their data stored on more than one drive. In a RAID mirror, adding another drive gives you no extra usable space because it only holds a copy; with the HP MediaSmart Server you get all of the extra capacity minus whatever is needed to duplicate the folders you said should be mirrored. Maximum storage is limited only by the number of drives and USB ports, so put in four 2 TB drives, attach four 2 TB external USB drives, and you have a whopping 16 TB of storage.

Many More Features
There are a lot more features you can check out on Amazon. Here is a link to the one I purchased: HP EX490 1TB MediaSmart Home Server (Black).

Automatically attaching VHD files in Windows 7 and Windows 2008

If you have played with the new VHD feature in Windows 7 or Windows Server 2008 then you know just how cool a feature it is. The problem is that when you reboot your machine, you find that all your VHD files are no longer attached. Here is what I did to get around the issue.

  1. Create a batch file that will hold the following line:

    diskpart /s "c:\path to script\diskpartscript.txt"

    I named my batch file attachvhd.bat and placed it in the same folder as my VHD files.



  2. Create the script file referenced by the attachvhd.bat batch file. Here is what that script needs to contain:

    select vdisk file="c:\path to vhd files\myvhddrive.vhd"
    attach vdisk

    I named my script file diskpartscript.txt and placed it in the same folder as my VHD files. (If you need to attach more than one VHD, see the note after these steps.)


  3. Create a scheduled task that will automatically run when your machine starts up.

    • Go to Start / Administrative Tools / Task Scheduler
    • Click Create Basic Task


    • Fill in the name of the task and the description and Click Next


    • Select “Start a program” radio button option and then Click Next


    • Select “When the computer starts” and then Click Next


    • Browse to the folder where you saved your batch file and select the batch file. Click Next


    • Click Finish



  4. You have now completed all the necessary steps. Restart your computer and you should find that your VHD files are automatically attached. One caveat: if you reboot and log into your machine quickly enough, it is possible the task has not run yet. If the task runs after you are logged in, you will get an AutoPlay dialog; simply close it. This does not happen if the task runs before you log in.

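One last note on step 2: a single diskpartscript.txt can attach several VHDs; just repeat the select/attach pair for each file (the second path below is only an example):

select vdisk file="c:\path to vhd files\myvhddrive.vhd"
attach vdisk
select vdisk file="c:\path to vhd files\seconddrive.vhd"
attach vdisk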

Seattle .NET User Group - September 9th, 2009 Meeting - .NET Troubleshooting in a Production Environment

For more information visit the user group website: http://seattledotnet.org/

Topic: .NET Troubleshooting in a Production Environment

Abstract:
When your .NET application experiences a problem on a production or QA server, your very first task is to avoid or at least minimize end user inconvenience and negative business impact.

.NET troubleshooting in production is often an art.

How do you pre-configure your production environment in order to make the troubleshooting process easy and effective? What's the most straightforward way to understand the root cause of the problem? How do you avoid the costly and time-consuming process of reproducing the problem in your development environment?

There are a variety of troubleshooting tools and approaches: debuggers (traditional, remote, core, on-the-fly), custom instrumentation, profilers, monitoring. Which one do you use for your exact situation? When is the core debugger applicable and how do you read memory dumps? What is an on-the-fly debugger? How do you design helpful and valuable custom instrumentation?

We will cover these questions in this presentation.


Speaker: Alex Zakonov

Alex Zakonov is Chief Architect of AVIcode, the leading provider of .NET application and troubleshooting solutions.  With expertise in software management and over ten years of experience as a software engineer and architect, Zakonov leads AVIcode’s product strategy, development efforts and customer support operations. His in-depth knowledge of the development, operations and support phases of the application cycle allows him to contribute a unique perspective on these functional silos and the intersections between them. Under his leadership, AVIcode has demonstrated 50-75% year-to-year company growth and has established a recognized leadership position in the application performance management market. 

Zakonov’s experience includes developing and implementing manageable systems that span multiple vertical industries, including monitoring software for energy management, data processing software for the telecommunication industry, and business automation software for the legal industry. This combination of experience resulted in him co-founding AVIcode and developing the company’s core technology, Intercept Studio, which is based on his patent-pending inventions in software monitoring. Zakonov is also actively involved with several Microsoft products groups, including working with the Windows Management team on the operations aspects of software manageability and with the Visual Studio Team System Team on the development aspects.

Zakonov is fluent in Russian, and holds an MS in Math and Computer Science from St. Petersburg State University.

When:
Wednesday, September 9th, 2009
5:45 – 6:15 PM – Mixer, group kickoff and speaker introduction
6:15 – 7:45 PM – Presentation
7:45 – 8:00 PM – Spillover time and raffle

Where:

Starbucks Support Center
2401 Utah Ave S.
Seattle, WA 98134