
In previous versions of IIS, it has sometimes been difficult to isolate web application pools from each other. If multiple web application pools were configured to run as the same identity (e.g. Network Service), then code running inside one web application pool could use file system objects to access configuration files, web pages and similar resources belonging to another web application pool. This was because it was impossible to grant one process running as Network Service access to a file while denying access to another process also running as Network Service.

In IIS 7.0 it is possible, with some work, to prevent this from occurring. As part of IIS 7.0's inbuilt functionality, each web application pool has an application pool configuration file generated on-the-fly when that application pool is started. These are stored, by default, in the %systemdrive%\inetpub\temp\appPools folder. Each web application pool also has a SID (Security Identifier) generated for it, and this is injected into the relevant w3wp.exe process. The application pool's configuration file is ACLed to allow only that SID access. Since each w3wp.exe process has its own SID, each application pool's configuration file is ACLed to a different SID:

IIS Application Pool Isolation

Using the icacls.exe tool it is possible to determine the SID applied to any given application pool's configuration file. This can be done by using the command:

icacls.exe %systemdrive%\inetpub\temp\appPools\appPool.config /save output.txt

The actual SID always starts with the well-known prefix S-1-5-82, followed by a hash of the application pool's name.

The retrieved SID can now be used to secure web site content in the same way. To do this:
Edit: Thomas Deml (from the IIS Product Group) has shown me an easier way to perform Step 4 below

  1. Configure each website (or web application) to run in its own web application pool
  2. Configure anonymous authentication to use the application pool identity rather than the IUSR account (this can be done by editing the Anonymous Authentication properties for the website in question)
  3. Remove NTFS permissions for the IIS_IUSRS group and the IUSR account from the website's files and folders.
  4. Use the icacls.exe tool to grant the App Pool's individual SID Read (and optionally Execute and Write) access to the website's files and folders. You don't need to retrieve the SID using icacls.exe first. Instead, simply use IIS APPPOOL\ApplicationPoolName as the user to grant read permissions to (see screenshot below for an example for the Default App Pool)
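Step 4 can be done with a single command. This is only a sketch - the site path and application pool name (MySite, MySiteAppPool) are hypothetical, so substitute your own:

```cmd
rem Grant the application pool's SID Read & Execute on the site's files and folders.
rem (OI)(CI) makes the grant inherit to child files and folders; /T applies it recursively.
icacls.exe C:\inetpub\wwwroot\MySite /grant "IIS APPPOOL\MySiteAppPool":(OI)(CI)RX /T
```

Run this from an elevated command prompt; use W in place of (or in addition to) RX if the application needs write access.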

After configuring these NTFS permissions, only the SID that has been injected into a particular w3wp.exe process will be able to read the contents of the website in question. All code running in other w3wp.exe processes, even though the process identity may also be Network Service, will be unable to read this particular website's content. This technique may be most useful to web hosters or similar administrators who need to accept content from various external or untrusted parties.

Edit #2: Here's a screenshot of the dynamic SID injection in action for the Default App Pool (using the excellent Process Explorer tool). The username highlighted can be used with icacls.exe to ACL your web content.

IIS 7 App Pool Isolation - Dynamic SID injection

Well, I'm writing a blog post on IIS application sandboxing, and this item crosses my inbox. It appears that all the time spent mucking about with Windows Home Server, and Windows Media Centre might now actually result in MCTS certification. So, I can justify the endless hours spent mucking with drivers and backups as a work-related endeavour! Yay

Hi,

For all those wondering what options you have post-Application Center 2000 for synchronisation (let alone load balancing etc), the IIS Product Group has released a technical preview of a new tool: msdeploy.exe. This tool can sync or migrate:

  • IIS 7.0 configuration settings
  • Web content
  • Registry keys and values
  • SSL certificates

The product group is planning to have PowerShell cmdlet support by the final release. Read more and download the bits from the MSDeploy blog.

(Comments Off)
Filed under:

I see that Ben Armstrong has posted instructions on how to use Windows Internet Connection Sharing (ICS) to give your Hyper-V virtual machines access to networks via a wireless adapter.

However, ICS does not appear to work if the network that your wireless adapter is connected to uses the 192.168.0.0/24 subnet (as this range is used on the internal side of ICS).

If you are in this situation, then instead of using ICS, the inbuilt Routing and Remote Access (RRAS) service can be used instead. The benefit of RRAS is that any arbitrary subnet(s) can be used on the internal interface (and you can have as many as you want).

To install RRAS, use Server Manager to add the Network Policy and Access Services role; RRAS is a role service within this role.

Then open the RRAS MMC administrative console, and use the wizard that runs on first use to choose NAT routing, then configure your external (WLAN) interface and your internal interface (an internal network created by the Hyper-V Management MMC).

Edit: You should give the internal adapter an IP address before running the wizard - so that the NAT routing wizard knows what IP addresses your internal LAN is going to be using, and can configure routing appropriately.
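Assigning that address can be done from the command line as well as through the GUI. A sketch, assuming the Hyper-V internal adapter has been renamed "Internal" and you want to use the 192.168.100.0/24 subnet (both hypothetical - substitute your own adapter name and range):

```cmd
rem Give the Hyper-V internal adapter a static IP before running the RRAS NAT wizard
netsh interface ipv4 set address "Internal" static 192.168.100.1 255.255.255.0
```

Your virtual machines would then use addresses from the same subnet, with 192.168.100.1 as their default gateway.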

It is possible to have the RRAS service provide DHCP addresses to your Hyper-V machines. However, since most of these are probably servers (and thus have static addresses), you can instead configure a static address pool in RRAS.

If you wish to have your Hyper-V machines able to contact your host PC on the internal interface, configure exceptions (or disable the firewall) in the Windows Firewall on the individual adapter configured by Hyper-V (this can be done on the Advanced tab in the Windows Firewall control panel on the host).

Edit: If you haven't used RRAS before, then John Paul Cook has an excellent step-by-step guide (with screenshots) covering this entire configuration on his blog.

Windows Server 2008 Hyper-V stores a list of virtual machines in %systemdrive%\ProgramData\Microsoft\Windows\virtualisation\Virtual Machines. That folder contains a set of symbolic links pointing to the actual configuration files for each virtual machine.

To move a virtual machine:

  1. Shut down or suspend the virtual machine
  2. Delete the symbolic link in the folder mentioned above. The VM will disappear from the Windows Virtualisation Management MMC console (if you have it open). To delete a symbolic link, you can use the del command in a command window
  3. Move the virtual machines files (VHD virtual hard disk file, configuration files and so on) to the new location
  4. Open the virtual machine's configuration file (e.g. using Notepad.exe) and update any references to physical paths. Typically you'll need to update the location of the virtual hard disk and saved state location. The configuration file is a GUID with an XML extension, such as 0A8D4907-82C6-11DC-8061-02004C4F4F50.xml
  5. Create a new symbolic link to the virtual machine's XML configuration file. This can be done using the mklink.exe file (mklink.exe /? for how to create a link to a file)

To make it easier to create the links, you can output the contents of the Virtual Machines folder using dir and pipe it to a text file (e.g. dir > VMs.txt). Open the text file in notepad.exe, and for each machine you will have an entry like:

14/01/2008  12:22 PM    <SYMLINK>      0A8D4907-82C6-11DC-8061-02004C4F4F50.xml [D:\WSVs\SVR03-ISA06-1\Virtual Machines\0A8D4907-82C6-11DC-8061-02004C4F4F50.xml]

It's a simple matter of editing to turn this into:

mklink 0A8D4907-82C6-11DC-8061-02004C4F4F50.xml e:\newLocation\SVR03-ISA06-1\Virtual Machines\0A8D4907-82C6-11DC-8061-02004C4F4F50.xml

Save this as a batch file (.bat) and double-click it to create the new link. The VM should then show up in the Windows Virtualisation Management MMC console.
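Putting steps 2, 3 and 5 together, the whole move looks something like this from an elevated command prompt. The old and new VM locations below reuse the example paths and GUID from above - substitute your own:

```cmd
rem 2. Delete the old symbolic link (the VM disappears from the management console).
rem    Note ProgramData lives at the root of the system drive.
cd /d "%systemdrive%\ProgramData\Microsoft\Windows\virtualisation\Virtual Machines"
del 0A8D4907-82C6-11DC-8061-02004C4F4F50.xml

rem 3. Move the VM's files (VHDs, configuration and so on) to the new location,
rem    then hand-edit the physical paths inside the .xml (step 4)
move "D:\WSVs\SVR03-ISA06-1" "E:\newLocation\SVR03-ISA06-1"

rem 5. Re-create the symbolic link, pointing at the configuration file's new home
mklink 0A8D4907-82C6-11DC-8061-02004C4F4F50.xml "E:\newLocation\SVR03-ISA06-1\Virtual Machines\0A8D4907-82C6-11DC-8061-02004C4F4F50.xml"
```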

EDIT: this technique was tested with Windows Server 2008 RTM and Hyper-V Beta 1. It may not work with subsequent builds of Hyper-V. I will update this post when Hyper-V goes RTM

Meta: IIS and Kerberos Part 6 is coming (for anyone interested in IIS still reading this blog)

The Windows Server 2008 backup feature no longer supports direct backup to tape (you'll need third-party backup software to do that). You can back up to disk though - either to a SAN or a network share, or to disk-based backup media like the Dell RD1000.
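Disk-targeted backups can also be scripted with the in-box wbadmin tool. The target and volume letters here are hypothetical - adjust them for your own disk layout:

```cmd
rem One-off backup of the C: and D: volumes to an attached backup disk (E:)
wbadmin start backup -backupTarget:E: -include:C:,D: -quiet
```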

Previously I've been backing up my server to an external enclosure via eSATA - which works, but doesn't provide the scalability of tape. The RD1000 gives you cartridges similar to tape, but they appear as removable disk media to Windows Server 2008.

RD1000
The RD1000 - next to two stacked 3.5" hard disks.

The RD1000 (also available from Imation in their RDX series) is available both internally (as 3.5" or 5.25", connected via SATA) and externally (connected via USB 2.0). The actual enclosure isn't much bigger than two 3.5" hard disks.

RD1000
RD1000 seen from the front

The power supply is pretty small as well:

RD1000 Review - power supply

and cartridges are about the same size as LTO / Ultrium tapes:

RD1000 cartridges compared to LTO tapes
RD1000 cartridge -vs- LTO tape

Internally, the RD1000 cartridges appear to contain 2.5" 7200 RPM SATA disks. The SATA connector is visible by peering into the cartridge.

The backup performance of the external USB-connected RD1000 is approximately 1GB/minute. The following screenshot shows a test run backing up both the system partition (with Windows Server 2008 running) as well as a second partition hosting Hyper-V virtual machines. At the time of the backup, two Hyper-V machines were running (an Active Directory domain controller, and second machine running SQL Server 2005).

RD1000 backup performance

Note: I paid for my RD1000 and backup disks. I didn't receive this from Dell - i.e. no conflict of interest etc.

In Australia the internal RD1000 device costs approximately A$400 (external A$700), and a 300GB cartridge costs approximately A$550 (at the time of writing).

Well, Frank's posted the various gadgets he's accumulated as an IT Pro. I've also been accumulating a few things in the last month (a new LTO2 tape drive, a 5GB Toshiba PCMCIA hard disk, a Dell 24" monitor, and other boring stuff). One thing I do like is my new Dell XPS M1330.

There are plenty of reviews of the XPS M1330 out there on the 'net already. Here are some additional impressions beyond "the screen is bright" and "it comes with a fingerprint reader". I also compare it to the Sony SZ48 series, which is remarkably similar (except for the price).

FWIW the specs of the Dell XPS M1330 that I bought are:

  • Core 2 Duo 2.2GHz
  • 4GB of RAM
  • 200GB 7200 RPM hard disk
  • LED backlit screen (with 0.6 MP webcam built in)
  • Wireless N, Bluetooth, 5520 WWAN option (Vodafone)
  • Everything else standard (DVD burner, fingerprint reader, nVidia 8400M GS card etc)

Dell XPS M1330 -vs- Sony SZ48
Figure 1 - both laptops closed

The Sony, with its carbon fibre case, seems marginally lighter than the Dell (at least in the configuration that I bought). However, the Sony has a much larger power brick than the Dell; combining laptop and brick, the weight seems similar. Physically, both are remarkably alike. The Sony has a thinner screen (even though both boast LED backlighting), but its base is marginally thicker, leading to a similar overall thickness. Both boast a 13.3" screen, giving the same width.

XPS M1330 -vs- Sony SZ48
Figure 2 - both laptops open

Once again, it's remarkable how similar the form factors are. The Sony has the fingerprint reader between the two trackpad buttons (making the buttons too small to be usable IMHO). On the other hand, the Dell has both the Wireless-N and 5520 WWAN (Vodafone in my case) mini-PCI cards under the trackpad making it *very* hot and unusable.

Spec wise, the two are very similar:

  • 13.3" screen, 1280x800 maximum resolution,  LED backlit screens, 0.6 MP built in camera
  • Core 2 Duo CPUs (currently available up to 2.4GHz)
  • Up to 4GB of RAM
  • Only 2 USB ports, but both have Firewire ports
  • nVidia 8400M GS GPU

The Sony has the following benefits:

  • Both PC-Card and Express Card (only EC34) support
  • Much thinner screen
  • Lighter laptop body
  • Memory stick slot (no SD card slot, but comes with an ExpressCard SD adapter in box)
  • EDIT: So far, no calls to Sony support required to keep this thing running

The Dell has the following benefits:

  • WWAN support (Dell 5520 card) - no need to carry around a separate card or USB dongle for WWAN access
  • ExpressCard 54 slot (but no PC-Card slot)
  • HDMI output (in addition to VGA)
  • Wireless-N Intel WiFi card
  • Comes with a 200GB 7200 RPM drive (or a 5400RPM 320GB drive for the same price). There is an option for an SSD drive as well (but at $1000 more)
  • Ability to enable Intel VT support in the BIOS (important for running VMs)
  • About $1000 cheaper than the corresponding Sony, even with the extended warranty (Australian pricing)
  • Higher end graphics card (8-series -vs- 7-series) - but does that really matter much in a laptop? C&C3 - Tiberium Wars plays flawlessly on both :-)
  • If you want to upgrade the hard disk, you unscrew a single screw on the bottom of the case. Upgrading the Sony requires a bit more work
  • EDIT: Onsite support as standard. However I've had to have Dell support out twice to fix issues (the second time because of the support guy breaking the LCD bezel the first time he was out)
  • EDIT: Dell will give you a regular Vista installation DVD, and an additional DVD for installing drivers and apps. The Sony recovery DVDs install a whole bunch of Sony apps (including a SQL Server 2005 Express Edition installation). You need to spend a fair bit of time uninstalling/removing what you don't want.

Both have drawbacks:

  • Only 2 USB slots. After using one USB slot for your mouse, you are limited in what you can attach. I've been forced to use the Microsoft Wireless Presenter 8000 Bluetooth mouse to keep the USB slots free for other devices.
  • Low-resolution screens (1280x800). After using a minimum of 1400x1050 for the past 4 years, this is a real downer. Running multiple virtual machines is difficult at this resolution.

XPS M1330 -vs- Sony SZ48
Figure 3 - The Sony has a much thinner screen (2mm or so)

Compared to my work-supplied Dell Latitude D830, the M1330 is a small child. That said, the D830 supports a 2nd hard drive (via the modular D-Bay), as well as a 1920x1200 resolution. Unfortunately, it weighs more than a kilo more than the XPS M1330.

XPS1330 -vs- Latitude D830
Figure 4 - the Latitude D830 -vs- the XPS M1330

Various Laptops
Figure 5 - various laptops

In Figure 5, I tried to capture the various sizes of these laptops, but it didn't quite work out how I hoped. From bottom to top: Latitude D830, Apple Macbook, Sony SZ48, Dell XPS M1330, and my trusty Toshiba M400 tablet PC.

Various laptops
Figure 6 - Various laptops

An older shot, showing the Sony SZ48, Toshiba M400 tablet, and Toshiba Tecra M5 on the same table. HP ML330 in the background.

After all's said and done, however, I'm happy with my purchase. The Dell XPS M1330, even in a top-of-the-line configuration, is quite cheap (compared to what we were paying a couple of years ago), and is thin and light. It's a good complement to the fully featured (but heavy) Latitude D830. The only downsides compared to my previous personal laptop (the M400) are:

  • no inbuilt tablet functionality (but I have a Wacom Bamboo to compensate)
  • low resolution screen (not sure what to do about that, except move more stuff across to the D830)

Well, it seems that the IIS product group is just sneaking in the final bits and pieces missing from IIS 7.0 at this very late stage of the Windows Server 2008 release cycle. Spotted on Robert McMurray's blog is a GoLive beta release of the IIS 7.0 WebDAV module. This will be included in-box with the final Windows Server 2008 release, but wasn't included in RC1.

Features of this WebDAV module include:

  • Full integration into the IIS 7.0 Manager MMC Console
  • Per-site enabling/disabling of WebDAV functionality (in IIS 6.0 you could only enable/disable WebDAV per server, and then needed to ensure your permissions were set correctly on sites where you didn't want authoring to occur)
  • Per-URL security (this was doable in IIS 6.0, but it was pretty slow if you did it through the GUI)

Additionally, other modules (such as the Request Filtering module) can be configured to not apply rules to WebDAV requests, allowing authenticated authoring to occur, but disallowing other non-permitted anonymous requests.

Download locations for the x86 and x64 bits.

I hope everyone had a great Christmas and New Year.

Spotted on Mike Volodarsky's blog is an announcement that the Health Model for IIS 7.0 has been published on Microsoft TechNet. This describes the various error conditions that IIS 7.0 (and related services, like the Worker Process Activation Service) might encounter.

If you are familiar with Microsoft Operations Manager, then you'll know that these health models form the basis for developing a management pack for that particular service. And right on cue, a beta of the Management Pack for IIS 7.0 (MOM 2005) has been released on Microsoft Connect.

I took the 3-in-1 upgrade exam today (70-649). I honestly thought I'd fail after spending the last few days digging further into a few of the topics that are covered (WDS in particular).

In the end, I scored:

  • 970/1000 for TS 70-643 (Configuring Application Services), which probably isn't surprising as this covers IIS 7.0 and Terminal Services
  • 887/1000 for TS 70-640 (Configuring Active Directory Services), which now covers traditional Directory services, AD Lightweight Directory Services (formerly ADAM), AD RMS (formerly Windows Rights Management Services) and AD CS (formerly just Certificate Services)
  • 850/1000 for TS 70-642 (Configuring Network Infrastructure), which covers stuff like Network Access Protection, RRAS and so on. There was also a WSUS question in there, and some Virtual Server questions.

A question a colleague asked was whether you need to pass all three components, or just achieve an overall passing score. At the end of the exam, the testing application told me I'd passed with a score of 850/1000 (the lowest of my three component scores), so it appears that your final score is the lowest of the three individual components, and that you need to score higher than 700 in each individual section.

In any case, I'll be having a beer (or two) to celebrate tonight. I'd have more, but upon returning home I found my Amex statement in the mail, which put a bit of a dampener on things. :-)

Good question. I saw this today in my RSS reader:
http://blogs.msdn.com/wesdyer/archive/2007/12/05/volta-redefining-web-development.aspx

Can someone who knows something about development tell this IT Pro what this is all about? I asked around today, and no one (yet) seems to be able to explain the significance of this technology.

Thanks :-)


Exchange Server 2007 Service Pack 1 (SP1) shipped earlier today. It incorporates a myriad of enhancements, which arguably should have been in the RTM product. I'll be installing this at home as soon as I can! Be sure to check the system requirements first (an update to .NET Framework may be required for some installs).

Update: I've successfully deployed SP1 on my home Exchange 2007 server, and everything appears to be working fine (after the second attempt). The first attempt failed due to some Hub Transport role configuration issues (which in turn caused a bunch of things to fail until I could correct the issues and start SP1 setup again).


My current preferred PC is a specced-out Toshiba Portege M400. It has a Core Duo, 4GB of RAM, two 7200 RPM disks, and a high-res (1400x1050) screen. All in a compact 12" form factor, and it's a Tablet PC as well.

Recently Avanade upgraded my work-supplied laptop to a Dell Latitude D830, which also has pretty mean specifications: Core 2 Duo 2.4GHz, 4GB of RAM, a 1920x1200 display, and two 160GB disks. I installed Windows Server 2008 onto this to test the new Windows Server Virtualisation (Hyper-V) functionality. Unfortunately the D830 isn't a Tablet PC, and there's no functionality in a standard Windows Server 2008 install to support Tablet PC features anyway.

So I gambled slightly in purchasing a Wacom Bamboo tablet. However the software appears to install fine under Windows Server 2008 x64 RC0, and provides standard tablet functionality. So I can continue to doodle diagrams in OneNote:

Wacom Bamboo tablet

The tablet is quite small (about 19cm on each edge), thin (<1cm) and weighs about 300 grams. It has four buttons at the top (illuminated in blue) which can be programmed, as well as a little touchpad which allows scrolling up/down in windows using a motion similar to the click wheel in an iPod.

Overall, I'm quite impressed. The finish is very nice and its looks are quite attractive. The tablet area has a nice feel, almost like a mat rather than a piece of plastic. Out of the box, the tablet area maps to the entire screen of your machine (i.e. the bottom left of the tablet is the bottom left of your screen, and vice versa for the top right). For a very high-resolution display, this means that the tablet is very, very sensitive! However, this can be configured in the tablet software: you can either change the screen area that the tablet maps to, or change the tablet pen to emulate a mouse. Lastly, the packaging was gorgeous - similar to what you'd expect from Apple.

For a purchase that was just over $100, I'm quite impressed. My only negative so far is that my Cross Tablet PC pen doesn't work with it, and that it doesn't come with some kind of carry bag/pouch/sleeve. Those are minor quibbles though.

Bamboo 2

Wow, what a big set of weeks the past few have been (and I've been too busy doing other things to blog)...

First Microsoft has settled with Eolas, allowing Internet Explorer users to move away from having to click on certain embedded controls to activate them. Of course, if you followed this MSDN page, your users didn't have to "click" at all, but shortly Internet Explorer should revert to its previous behaviour. A post on the IE blog has more details.

Secondly, the IIS Product Group has released the FastCGI module for IIS 6.0 (this module will also be included out of the box with IIS 7.0). Previously, when using CGI applications (such as PHP in CGI mode), each request resulted in the creation of a separate process to handle that request. Creating processes is (relatively) quite expensive on the Windows platform, so CGI application scalability was quite poor (instead, within a single process, you'd use multiple threads). The FastCGI module allows for a persistent process, with one or more threads handling requests (if the CGI application is single-threaded, then requests are still serialised, but you still gain the benefit of avoiding process creation and destruction). The IIS Product Group are claiming 10x to 20x performance boosts for many common CGI applications (PHP, Perl) - so this is definitely worth investigating if you are running CGI applications on IIS.

Thirdly, Visual Studio.NET 2008 (and .NET Framework v3.5) was released about a week ago (see how far behind I am with these posts!)

And lastly, I heard from my editor today that Professional IIS 7.0 (Wrox Press) should be released on 21st Jan 2008. If you're really desperate, you can pre-order it on Amazon.com - though the cover will be slightly different (i.e. no random dude)


At home I have a small virtualised lab running a number of server services. Unfortunately when provisioning my server, I didn't really consider the space needs of the virtualised Windows Home Server. Originally, I bought 2 x 500GB disks for my server, but that doesn't leave much space for the WHS after catering for all the other servers:

Virtual Network

With the price of hard drives falling, I purchased an additional 2 x 750 GB disks to add some additional storage. The upgrade plan involved:

  1. Purchasing the necessary converter components from Dell to allow adding additional drives to my PowerEdge SC1430 server
  2. Shut down all virtual machines, and change settings so that they don't start up when the host machine boots
  3. Move the two existing drives to bays 3 and 4. Add the additional drives to bays 1 and 2, and attach to the SAS 5/iR controller. Set the boot order to ensure that the original disks are used to boot.
  4. Restart the host machine, and create a new RAID1 array using the two new disks.
  5. Once the host OS has started, copy the various virtual machines across to the new RAID 1 array.
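Step 5's copy can be done with robocopy, which is in-box on Windows Server 2008 and copes with large VHD files more gracefully than Explorer. The source and destination paths here are hypothetical - substitute your own:

```cmd
rem Copy the virtual machine folders (and all attributes/ACLs) onto the new RAID 1 array
robocopy D:\WSVs E:\WSVs /E /COPYALL /R:1 /W:1
```

The /R and /W switches keep robocopy from retrying a failed file for hours, which is the default behaviour.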

PowerEdge SC1430

The PowerEdge SC1430 prior to the upgrade

PowerEdge SC1430

Various upgrade components - a couple of new disks, filler face plates (floppy drive and 5.25" bay), and the Dell converter module for the fourth drive bay.

PowerEdge SC1430

After adding the two new drives. My cabling skills could probably improve :-)

And now, after the upgrade - lots of spare disk space.

PowerEdge SC1430

Edit: Write throughput on the SAS 5/iR controller seems to be limited to around 400 MB/minute. Is this normal? Or is there something I might need to change in the settings? Using software RAID is around 2.5x more performant...