Thursday, October 22, 2009

nopCommerce on Azure: How much would it cost?

This is part of what has turned out to be a series on running an eCommerce solution in the Cloud using nopCommerce and Azure.  The other posts are:
   eCommerce in the Cloud: Running nopCommerce on Azure
   More on nopCommerce for Azure

Over on the nopCommerce forum, somebody asked what pricing would be for such a solution.  In theory, the benefit of Azure is that you pay for what you use, so you wouldn't need to pay much until your store was successful.  In practice, the cost may be somewhat prohibitive for entry-level stores. For a baseline, single instance store, based on the available pricing, you would need:

~740 compute hours / month = ~$90
~1 GB storage = ~$0.15
SQL Azure Web edition (unless you have a very, very large store) = $10
Bandwidth between Azure and SQL Azure = varies
Bandwidth between Azure and users = varies

Based on those numbers, that's roughly $100 a month before bandwidth, so I'm _guessing_ about $120 a month all in, but the billing and monitoring features are not available yet, so I can't confirm.  Definitely higher than typical shared hosting, but comparable to a dedicated server, and a better deal than Windows instances on Amazon EC2.

At this point, for our customers, if the store is just starting out or is small, we would still advise using a shared hosting provider like Discount ASP.  As business demands load balancing, high availability, geographic proximity, etc., move to Azure.

Another compelling option is the BizSpark program which, if you qualify, includes ~750 free Azure compute hours per month for three years.

At some point, I'd also like to look into writing providers that work against Azure tables/blobs/queues.  That would offer effectively unlimited storage at $0.15/GB/month instead of the 1 GB-capped SQL Azure Web database at $10/month.

Yet another option would be to set up a multi-tenanted solution, where multiple stores are hosted on a single instance for a SaaS-type offering.  I haven't checked whether nopCommerce supports this, but from a business perspective, this is where the cloud concept really takes off.  Instead of buying hardware to scale out your turnkey eCommerce company, you simply add or remove instances to cover demand.  For example, for every 50 stores you might turn on another instance, literally with the flip of a switch; if 50 stores went out of business, you could scale back just as easily.  That is far more cost-effective than buying and setting up servers to cover demand.

Monday, October 12, 2009

T4 Mocks

Test-driven development is a gateway drug to some pretty hard-core code, such as mocking frameworks.  If you’re not familiar with the concept, mocking frameworks let you create fake implementations of interfaces to use when testing.  This means you can unit test code that would normally require a database, file access, network IO, etc. without actually needing any of those things. Honestly, I’m still learning about the proper use of mocking frameworks, but one thing I’ve learned is that each has its own syntax, which can sometimes be a little intimidating to newcomers:

var mocks = new MockRepository();
var customerServiceMock = mocks.DynamicMock<ICustomerService>();
var testCustomers = new[]{new Customer(), new Customer(), new Customer()};
Expect.Call(customerServiceMock.GetAll()).Return(testCustomers.AsQueryable());
mocks.ReplayAll();
 
var target = new CustomerController(customerServiceMock);
 
var result = target.Index(1);
 
mocks.VerifyAll();

Now, to be fair, this isn’t unreasonably complicated code.  We’re saying: “I don’t feel like making a _real_ implementation of ICustomerService, so make one for me at runtime. Expect GetAll to be called, and when it is, return testCustomers.” Because it’s a “DynamicMock”, any other calls to methods on customerServiceMock will be ignored or return null. Then there’s the slightly odd ReplayAll/VerifyAll pair, which tells the mocking framework “OK, we’re done setting up the mock, get ready…” and then “OK, verify that everything we expected actually happened.”

With this in place, we can test the method without the overhead of a database and isolate the piece being tested. This works well, and once you learn the syntax, it’s fairly straightforward.  Other frameworks, such as Moq, have some syntactic sugar that makes mocking even more straightforward, but in general they all work the same way: create the class at runtime and expose some methods that you use to set up the mock and verify the code under test.

Like I said, I can’t really find much wrong with this, except that it felt a little wordy. So, mostly out of curiosity, I decided to spike creating my own mocking framework, with a slight twist: instead of generating the classes at runtime, what if we used code generation to generate them at compile time?  We could look at our project for interfaces to be mocked and create our own implementations that enable functionality useful at test time.   The end result would be more IntelliSense-friendly and (I think) more straightforward. This would enable code like this:

var mock = new MockCustomerRepository();
mock.GetAllMethod.Returns(new Customer(), new Customer(), new Customer(), new Customer());
 
var target = new CustomerController(mock);
var actual = target.Index();
 
mock.GetAllMethod.MustBeCalled();
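
For context, the code under test in these snippets is a controller that takes the service (or repository) as a constructor dependency and calls GetAll.  Something roughly like this, where the exact signatures are illustrative rather than taken from a real project:

using System.Linq;
using System.Web.Mvc;

// Illustrative only: the entity and service interface the controller depends on.
public class Customer
{
    public int Id { get; set; }
}

public interface ICustomerService
{
    IQueryable<Customer> GetAll();
}

public class CustomerController : Controller
{
    private readonly ICustomerService _customerService;

    // The dependency is injected, so a test can hand in a mock instead of a real implementation.
    public CustomerController(ICustomerService customerService)
    {
        _customerService = customerService;
    }

    public ActionResult Index()
    {
        // GetAll() is the only collaborator call, which is what the tests above set up and verify.
        var customers = _customerService.GetAll().ToList();
        return View(customers);
    }
}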

So, I wrote some T4 based on Oleg Sych’s samples.  It uses Microsoft’s Introspection engine to peek at an assembly and find interfaces to be mocked.  For each interface, it generates a class named “Mock[InterfaceName]” that implements the whole interface (actually, it currently only implements the methods, leaving out events and properties).  In addition, for each method, it generates a corresponding property named [MethodName]Method.  This property enables the mocking functionality via a secret sauce of generics and extension methods.  Say there is a method like this on the interface:

Customer GetCustomerById(int id)

The T4-generated mock looks like this:

public Customer GetCustomerById(int id) { … }
public class GetCustomerByIdParameters { public int id { get; set; } }
public MockMethodInfo<GetCustomerByIdParameters, Customer> GetCustomerByIdMethod { get; set; }

Which means inside the test, you can do this:

mock.GetCustomerByIdMethod.Returns(new Customer());

It also allows for lambdas, so for more complex tests, you can do things like:

mock.GetCustomerByIdMethod.Returns(p => new Customer { Id = p.id });

In addition, extension methods on MockMethodInfo enable checking that calls were made:

mock.GetCustomerByIdMethod.MustBeCalled().WithParameters(p=>p.id==1);
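
For the curious, here’s roughly the idea behind those [MethodName]Method properties.  The real version leans on extension methods as described above, but a stripped-down, instance-method version of MockMethodInfo could look something like this (an illustrative sketch, not the actual generated code):

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a bare-bones take on the MockMethodInfo idea.
public class MockMethodInfo<TParams, TReturn>
{
    private readonly List<TParams> _calls = new List<TParams>();
    private Func<TParams, TReturn> _returns = p => default(TReturn);

    // Record a canned return value, or a lambda that computes one from the parameters.
    public void Returns(TReturn value) { _returns = p => value; }
    public void Returns(Func<TParams, TReturn> valueFactory) { _returns = valueFactory; }

    // Called by the generated Mock* class whenever the interface method is invoked.
    public TReturn Invoke(TParams parameters)
    {
        _calls.Add(parameters);
        return _returns(parameters);
    }

    // Assertion helpers used by the tests.
    public MockMethodInfo<TParams, TReturn> MustBeCalled()
    {
        if (_calls.Count == 0)
            throw new InvalidOperationException("Expected method was never called.");
        return this;
    }

    public MockMethodInfo<TParams, TReturn> WithParameters(Func<TParams, bool> predicate)
    {
        if (!_calls.Any(predicate))
            throw new InvalidOperationException("No recorded call matched the expected parameters.");
        return this;
    }
}

The generated Mock[InterfaceName] class just routes each interface method through Invoke on the corresponding property, which is what makes Returns, MustBeCalled and WithParameters possible.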

That’s about all it does right now.  There are a few things I like about this over ‘traditional’ mocking frameworks.  First, the generated code is very discoverable via IntelliSense: if a developer can get as far as “new MockMyService()”, he can probably fall into the rest fairly quickly.  Second, the syntax (I think) is slightly more readable and concise.  As with previous code-gen experiments, this is more of a learning exercise than an actual production-ready project, but if you’re interested, you can download the code and let me know what you think!

Monday, October 5, 2009

A Safe, jQuery-less, Downlevel-friendly ASP.NET MVC Action Link Button

One gaping security hole in some web apps is exposing ‘Delete’ or other destructive functionality via a plain URL.  The problem goes something like this: suppose you have code that handles requests to myapp/controller/delete/1.  A hacker could trick you into deleting a record by sending you to a page containing an image like <img src="myapp/controller/delete/1"/>.  Your browser will happily make that request, passing along your authentication cookies, Windows login, etc., and the server will execute whatever code it usually does, as you.  This is the classic cross-site request forgery (CSRF) attack.  ‘Deletes’ are the most commonly discussed scenario, but you could have similar issues with any route that inserts or updates data: <img src="http://mybank.com/transactions/sendmoney?to=hacker&amount=1000000"/>, if you catch my drift.

The partial fix is to use forms and require that the request be an HTTP POST.  This prevents the image-based attack above, but still allows a slightly more involved attack where the hacker creates a form similar to yours and posts it to your server.  Web Forms handles this for you in most cases, so it typically isn’t an issue there, though it’s entirely possible to introduce similar bugs in such apps if you try.  In ASP.NET MVC, you’re given greater control over the rendered HTML, but with great power comes great responsibility. Fortunately, MVC gives you the tools to handle these scenarios in the form of the anti-forgery token (Html.AntiForgeryToken and the ValidateAntiForgeryToken attribute).  You can read up more on this class of vulnerability and the solution over on Steve Sanderson’s blog, but in a nutshell, the fix is to do this in your view:

<% using(Html.BeginForm("Delete", "Delete")) { %>
       <%= Html.AntiForgeryToken() %>
       <!-- rest of form goes here -->
   <% } %>

And this in your controller:

[ValidateAntiForgeryToken]
public ViewResult Delete()

Do this for any action method that updates, inserts, or deletes data, or does anything in the app besides just querying data.  The helper puts a hidden field in the form containing a token that the server generates for the user’s session; if that token is not present in the posted form, the action method throws an exception instead of executing.
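
Putting the pieces together (restrict the action to POST and validate the token), a guarded delete action might look roughly like this; this is a sketch, and the repository call is a stand-in for whatever your real delete logic is:

[AcceptVerbs(HttpVerbs.Post)]
[ValidateAntiForgeryToken]
public ActionResult Delete(int id)
{
    // If the anti-forgery token is missing or doesn't match, MVC throws before this body runs.
    _repository.Delete(id); // stand-in for your actual delete logic
    return RedirectToAction("Index");
}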

This sounds easy enough, but in many cases, it sure is harder than just doing the old insecure thing:

<%=Html.ActionLink("Delete", "Delete", new {id=item.id})%>

Faced with this in a recent app, I decided to write a helper method to make creating this sort of form a little more developer-friendly.  What I wanted was something like this:

<%=Html.ActionLinkForm("Delete", Url.Action("Delete", new {id=item.id}))%>

It’s slightly harder than you may think.  Creating the form is easy enough, but if you really want a link, a little javascript is in order to let the link submit the form.  However, the form then no longer supports browsers without javascript, so some tweaks are required to address that.  I found several similar implementations, including a nice jQuery one from Phil Haack.  However, it seemed a little heavy to require jQuery.  Don’t get me wrong: I love jQuery and use it all the time, but something about requiring it for such a basic thing smells a bit to me.  I felt the same effect could be had with fewer dependencies. So, I built my own, making use of the <noscript> element and a little creative plain-jane javascript:

public static string ActionLinkForm(this HtmlHelper helper, string linkText, string url)
{
   var result = new StringBuilder();
   result.AppendFormat("<form action=\"{0}\" method=\"post\">", url);
   result.Append(helper.AntiForgeryToken());
   // With javascript on, document.write emits a link that submits the surrounding form.
   result.AppendFormat("<script type=\"text/javascript\">document.write('<a href=\"#\" onClick=\"this.parentNode.submit()\">{0}</a>');</script>",linkText);
   // With javascript off, the <noscript> fallback renders a plain submit button instead.
   result.AppendFormat("<noscript><input type=\"submit\" value=\"{0}\"/></noscript>", linkText);
   result.Append("</form>");
   return result.ToString();
}

This requires no additional scripts on the page or special CSS, and it works in IE and Firefox.  With javascript off, the link turns into a submit button and the form works as it normally would.  I’m fairly certain it works in most other browsers as well, though you should test in your target browsers before using this in production.

But Wait, There’s More!  Act Now, and Receive an Entity Framework Delete Link Button Absolutely Free!

The particular scenario I wanted to use this in was for a delete link inside MvcContrib’s awesome GridView helper.  My model used Entity Framework, and what I really wanted was this:

        column.For(x => Html.DeleteLink(x)).DoNotEncode();

There’s plenty not to like about EF in its current implementation, but one nice thing is that it exposes the primary keys via an interface that is usable from your code.  Using the convention of a link labeled ‘Delete’ that posts to an action named ‘Delete’ on the same controller as the current request, passing the primary key fields as parameters, I came up with this implementation:

public static string DeleteLink(this HtmlHelper helper, IEntityWithKey entity)
{
    var routeValues = new RouteValueDictionary();
    foreach (var member in entity.EntityKey.EntityKeyValues)
    {
        routeValues.Add(member.Key, member.Value);
    }
    var url = new UrlHelper(helper.ViewContext.RequestContext);
    return ActionLinkForm(helper, "Delete", url.Action("Delete", routeValues));
}

Obviously, additional overloads and methods can be added for other common scenarios, but the end result is a great reduction in code and clean, safe links.
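
As one example, here’s an overload that lets the caller supply the link text while reusing the same plumbing (a quick sketch along the same lines, not something from the original code):

public static string DeleteLink(this HtmlHelper helper, IEntityWithKey entity, string linkText)
{
    // Same as above, but the caller picks the link text (e.g. a localized caption) instead of the default "Delete".
    var routeValues = new RouteValueDictionary();
    foreach (var member in entity.EntityKey.EntityKeyValues)
    {
        routeValues.Add(member.Key, member.Value);
    }
    var url = new UrlHelper(helper.ViewContext.RequestContext);
    return ActionLinkForm(helper, linkText, url.Action("Delete", routeValues));
}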

Thursday, October 1, 2009

More on nopCommerce for Azure

I penned my previous post on nopCommerce for Azure around 1 AM, having just spent the last few hours playing with the code and deploying to Azure.  In my sleepy stupor, I left off a few details about how it’s implemented and the changes required, so I thought I’d post a few more of the gory details for those interested.  A zipped version of the nopCommerce 1.30 code, including the slightly altered Azure projects, is available here.  You’ll need the Windows Azure Tools and Windows Azure SDK installed on your PC, a Windows Azure CTP account, and a SQL Azure account (register and download here).  To use it, follow the steps below to set up your database, then change ConnectionStrings.config to point at that database.  You should then be able to run locally (choose AzureNopCommerceStore as the startup project) against the SQL Azure database.  Once you’re happy with that, right-click AzureNopCommerceStore and choose Publish to push the project to Azure.
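
For reference, a SQL Azure connection string has this general shape; the server, database, user, and password below are placeholders for your own values, and the name attribute should stay whatever your copy of ConnectionStrings.config already uses:

<connectionStrings>
  <!-- Placeholders only: keep the existing name attribute and substitute your SQL Azure server, database, and credentials. -->
  <add name="NopSqlConnection"
       connectionString="Server=tcp:yourserver.database.windows.net;Database=yourdatabase;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>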

I used the SQL Azure Migration Wizard, but had to address some minor discrepancies between SQL Server 2008 and SQL Azure manually.  One procedure, [dbo].[Nop_Maintenance_ReindexTables], contains code that cannot be modified to run on SQL Azure, so that functionality won’t work.  A fixed-up script is in the App_Data folder of the AzureNopCommerce_WebRole project.  Simply run that script against your SQL Azure database and then copy data from a live database, or run the sample data script also included in that folder.  Tip: SQL Server Management Studio will give you some errors, but if you follow these steps exactly, you can get a query window connected to SQL Azure to run these scripts.

The code for the most part worked as-is, except that any code that tries to write to disk will throw a FileIOPermission security exception.  The main place I found this happening was the image generation code in the GetPictureUrl method inside PictureManager.  I changed it to ‘hard code’ a path to a page, GenPicture.aspx, that is responsible for dynamically serving up the image (see the last line):

/// <summary>
/// Get a picture URL
/// </summary>
/// <param name="picture">Picture instance</param>
/// <param name="TargetSize">The target picture size (longest side)</param>
/// <param name="showDefaultPicture">A value indicating whether the default picture is shown</param>
/// <returns></returns>
public static string GetPictureUrl(Picture picture, int TargetSize, bool showDefaultPicture)
{
  string url = string.Empty;
  if (picture == null)
  {
      if (showDefaultPicture)
          url = GetDefaultPictureUrl(TargetSize);
      return url;
  }
 
  string[] parts = picture.Extension.Split('/');
  string lastPart = parts[parts.Length - 1];
  switch (lastPart)
  {
      case "pjpeg":
          lastPart = "jpg";
          break;
      case "x-png":
          lastPart = "png";
          break;
      case "x-icon":
          lastPart = "ico";
          break;
  }
 
  string localFilename = string.Empty;
  if (picture.IsNew)
  {
      string filter = string.Format("{0}*.*", picture.PictureID.ToString("0000000"));
      string[] currentFiles = System.IO.Directory.GetFiles(PictureManager.LocalThumbImagePath, filter);
      foreach (string currentFileName in currentFiles)
          File.Delete(Path.Combine(PictureManager.LocalThumbImagePath, currentFileName));
 
      picture = PictureManager.UpdatePicture(picture.PictureID, picture.PictureBinary, picture.Extension, false);
  }
 
  return CommonHelper.GetStoreLocation(false) + "images/genpicture.aspx?p=" + picture.PictureID + "&s=" + TargetSize;
}

An overload of GetDefaultPictureUrl also had to be modified so that, for now, it doesn’t do any resizing.

public static string GetDefaultPictureUrl(PictureTypeEnum DefaultPictureType, int TargetSize)
{
   string defaultImageName = string.Empty;
   switch (DefaultPictureType)
   {
       case PictureTypeEnum.Entity:
           defaultImageName = SettingManager.GetSettingValue("Media.DefaultImageName");
           break;
       case PictureTypeEnum.Avatar:
           defaultImageName = SettingManager.GetSettingValue("Media.Customer.DefaultAvatarImageName");
           break;
       default:
           defaultImageName = SettingManager.GetSettingValue("Media.DefaultImageName");
           break;
   }
   
 
   string relPath = CommonHelper.GetStoreLocation(false) +
           "images/" + defaultImageName;
 
   return relPath;
}

GenPicture.aspx then reuses some of PictureManager’s logic to generate the image on the fly and render it to the client.  To be clear, this is terribly inefficient and is just a proof-of-concept hack; if you use this in production, you’ll be laughed at, and maybe worse.  The better solution would be to change PictureManager to use a provider model: a FileSystemPictureProvider could handle pictures for traditional web apps, while an AzurePictureProvider could use Azure Blob storage (a rough sketch of that idea follows the listing below).  But for demo purposes, this was the fastest route to a solution.  All markup except the @Page declaration is removed from the .aspx, and the full listing for GenPicture.aspx.cs is here:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using NopSolutions.NopCommerce.Common.Media;
using NopSolutions.NopCommerce.DataAccess.Media;
 
namespace AzureNopCommerceStore_WebRole.images
{
    public partial class GenPicture : System.Web.UI.Page
    {
        private static Picture DBMapping(DBPicture dbItem)
        {
            if (dbItem == null)
                return null;
 
            Picture item = new Picture();
            item.PictureID = dbItem.PictureID;
            item.PictureBinary = dbItem.PictureBinary;
            item.Extension = dbItem.Extension;
            item.IsNew = dbItem.IsNew;
 
            return item;
        }
        /// <summary>
        /// Gets a picture
        /// </summary>
        /// <param name="PictureID">Picture identifier</param>
        /// <returns>Picture</returns>
        public static Picture GetPictureByID(int PictureID)
        {
            DBPicture dbItem = DBPictureProvider.Provider.GetPictureByID(PictureID);
            Picture picture = DBMapping(dbItem);
            return picture;
        }
        public static Size CalculateDimensions(Size OriginalSize, int TargetSize)
        {
            Size newSize = new Size();
            if (OriginalSize.Height > OriginalSize.Width) // portrait 
            {
                newSize.Width = (int)(OriginalSize.Width * (float)(TargetSize / (float)OriginalSize.Height));
                newSize.Height = TargetSize;
            }
            else // landscape or square
            {
                newSize.Height = (int)(OriginalSize.Height * (float)(TargetSize / (float)OriginalSize.Width));
                newSize.Width = TargetSize;
            }
            return newSize;
        }
        protected void Page_Load(object sender, EventArgs e)
        {
            if (String.IsNullOrEmpty(Request["s"]) || String.IsNullOrEmpty(Request["p"])) return;
            var TargetSize = int.Parse(Request["s"]);
            var pictureId = int.Parse(Request["p"]);
            var picture = GetPictureByID(pictureId);            
 
            if (TargetSize == 0)
            {
                using (MemoryStream stream = new MemoryStream(picture.PictureBinary))
                {
                   // Response.ContentType = picture.Extension;
                    stream.WriteTo(Response.OutputStream);
                }
            }
            else
            {
                using (MemoryStream stream = new MemoryStream(picture.PictureBinary))
                {
                    Bitmap b = new Bitmap(stream);
 
                    Size newSize = CalculateDimensions(b.Size, TargetSize);
 
                    if (newSize.Width < 1)
                        newSize.Width = 1;
                    if (newSize.Height < 1)
                        newSize.Height = 1;
 
                    Bitmap newBitMap = new Bitmap(newSize.Width, newSize.Height);
                    Graphics g = Graphics.FromImage(newBitMap);
                    g.DrawImage(b, 0, 0, newSize.Width, newSize.Height);
                    newBitMap.Save(Response.OutputStream, ImageFormat.Jpeg);
                    g.Dispose();
                    newBitMap.Dispose();
                    b.Dispose();
                }
            }
        }
    }
}
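
As mentioned above, the cleaner long-term fix is a provider model for pictures rather than a pass-through page.  Very roughly, the abstraction could look like this (a hypothetical sketch only; none of these names exist in nopCommerce 1.30 or in the download):

// Hypothetical abstraction the rest of nopCommerce would call instead of touching the file system directly.
public interface IPictureProvider
{
    // Store the raw picture bytes: a FileSystemPictureProvider would write thumbnails to the images
    // folder as nopCommerce does today, while an AzurePictureProvider would push them to Blob storage.
    void SavePicture(int pictureId, byte[] pictureBinary, string mimeType);

    // Return a URL the browser can use to fetch the picture, resized to targetSize on the longest side.
    string GetPictureUrl(int pictureId, int targetSize);
}

PictureManager would then delegate to whichever provider is configured, and the GenPicture.aspx hack above could go away.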
 
 

Those are all the code changes that were required, I believe. As I said before, Azure is really not as scary as it sounds; for all the hype, you can think of it as very capable hosting for .NET (and PHP) apps. With some notable exceptions, most code will ‘just work’ on Azure.  Because of the well-thought-out separation of concerns in nopCommerce, it should be possible to migrate much of the functionality to Azure Table Storage, which is cheaper than SQL Azure and effectively unlimited in size.  I may play with that in the days to come, but for now, hopefully this will whet people’s appetite for what’s possible on Azure.